this post was submitted on 15 Aug 2024
351 points (99.4% liked)

Technology

[–] [email protected] 37 points 3 months ago (1 children)

So an interesting thing about this is that the reasons Gemini sucks are... kind of entirely unrelated to LLM stuff. It's just a terrible assistant.

And I get the overlap there: it's probably hard to keep an LLM reined in enough to give it access to a bunch of the stuff that Assistant did. But still, why Gemini is unable to take notes seems entirely unrelated to any AI crap; that's probably the top thing a chatbot should be great at. In fact, for things like that, which just integrate a set of actions in an app, the LLM should be nothing but the text parser. Assistant was already doing enough machine learning to handle text commands; nothing there is fundamentally different.

So yeah, I'm confused by how much Gemini sucks at things that have nothing to do with its chatbotty stuff, and if Google is going to start phasing out Assistant I sure hope they fix those parts at least. I use Assistant for note taking almost exclusively (because frankly, who cares about interacting with your phone using voice for anything else, barring perhaps a quick search). Gemini has one job and zero reasons why it can't do it. And it still really can't do it.

[–] [email protected] 9 points 3 months ago (2 children)

LLMs on their own are not a viable replacement for assistants, because you need a working assistant core to integrate with other services. An LLM layer on top of the assistant, for better handling of natural-language prompts, is what I imagined would happen. What Gemini is doing seems ridiculous, but I guess that's Google developing multiple competing products again.
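That split can be sketched in a few lines: an LLM layer whose only job is turning natural language into a structured intent, and the existing assistant "core" that actually executes it. Everything here is hypothetical, with the LLM replaced by a trivial stub; it only illustrates the division of responsibilities, not any real Gemini or Assistant API.

```python
from dataclasses import dataclass


@dataclass
class Intent:
    name: str      # e.g. "create_note" (hypothetical intent name)
    argument: str  # free-text payload for the core to act on


def llm_parse(utterance: str) -> Intent:
    """Stand-in for the LLM layer: map arbitrary phrasing to a known intent."""
    if "note" in utterance.lower():
        return Intent("create_note", utterance)
    return Intent("chat", utterance)  # anything else falls through to chat


def assistant_core(intent: Intent) -> str:
    """The existing assistant integration layer does the real work."""
    if intent.name == "create_note":
        return f"Saved note: {intent.argument}"
    return f"Chat fallback: {intent.argument}"


reply = assistant_core(llm_parse("note: buy milk"))
```

The point of the design is that the core's service integrations (notes, timers, smart home) never change; only the parsing front end does.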

[–] [email protected] 4 points 3 months ago* (last edited 3 months ago)
  1. Convert voice to text.
  2. Pre-parse the text against a library of known voice commands. If any match, execute them, pass the confirmation along, and jump to 6.
  3. If there are no valid commands, pass the text to the LLM.
  4. Have the LLM, heavily trained on the commands, emit API output for them. If none apply, generate a normal response.
  5. Check the response for API outputs, handle them appropriately, and send the confirmation forward; otherwise pass the output on.
  6. Convert back to voice.

The LLM part obviously also needs all kinds of sanitization on both sides, like they do now, but exact commands should preempt the LLM entirely, if you're insisting on using one.
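The steps above can be sketched roughly like this. The command library, the LLM call, and the speech steps are all hypothetical stand-ins (no real speech or model API is used); the part that matters is that an exact command match short-circuits the LLM entirely.

```python
# Hypothetical command library (step 2): exact prefixes mapped to actions.
COMMANDS = {
    "take a note": lambda arg: f"Noted: {arg}",
    "set a timer": lambda arg: f"Timer set for {arg}",
}


def find_exact_command(text):
    """Step 2: pre-parse against the known-command library."""
    for prefix, action in COMMANDS.items():
        if text.lower().startswith(prefix):
            return action, text[len(prefix):].strip()
    return None, None


def llm_respond(text):
    """Steps 3-5: stub for an LLM trained to emit API calls.

    Returns (api_call_or_None, reply_text); a real implementation would
    parse structured output from the model and sanitize both directions.
    """
    return None, f"[LLM reply to: {text}]"


def handle_utterance(text):
    # Step 1 (speech-to-text) and step 6 (text-to-speech) are elided.
    action, arg = find_exact_command(text)
    if action:  # exact command preempts the LLM and jumps to step 6
        return action(arg)
    api_call, reply = llm_respond(text)  # step 3: fall through to the LLM
    if api_call:
        reply = f"Done: {api_call}"  # step 5: handle any API output
    return reply
```

Note that `handle_utterance("take a note buy milk")` never touches the LLM stub at all, which is exactly the preemption the comment argues for.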

[–] [email protected] 3 points 3 months ago

It is a replacement for a specific portion of a very complicated ecosystem-wide integration involving a ton of interoperability sandwiched between the natural language bits. Why this is a new product and not an Assistant overhaul is anybody's guess. Some blend of complicated technical issues and corporate politics, I bet.