this post was submitted on 27 May 2024
1105 points (98.3% liked)

Technology

You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which are what drive AI Overviews, and this feature "is still an unsolved problem."

(page 3) 50 comments
[–] [email protected] 12 points 5 months ago (3 children)

If I was in charge, Gemini would be immediately tabled.

[–] [email protected] 11 points 5 months ago

Here's one: SCRAP IT

[–] [email protected] 11 points 5 months ago (1 children)

The "solution" is to curate things, invest massive human resources in it, and ultimately still gets accused of tailoring the results and censoring stuff.

Let's put that toy back in the toy box and keep it to the few things it does well, instead of trying to fix every non-broken thing with it.

[–] [email protected] 10 points 5 months ago

The "solution" is to curate things, invest massive human resources in it

Hilariously, Google actually used to do this: they had a database called the "knowledge graph" that slowly accumulated verified information and relationships between commonly-queried entities, producing an excellent corpus of reliable, easy-to-find information about a large number of common topics.

Then they decided having people curate things was too expensive and gave up on it.
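The curated-facts approach the comment describes can be sketched as a store of verified (subject, relation, object) triples. This is a toy illustration, not Google's actual Knowledge Graph schema; the entities, relations, and values below are made up for the example:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Store human-verified (subject, relation, object) facts and answer lookups."""

    def __init__(self):
        # Maps (subject, relation) -> set of objects.
        self._facts = defaultdict(set)

    def add_fact(self, subject, relation, obj):
        # In a curated system, facts only land here after human verification.
        self._facts[(subject, relation)].add(obj)

    def query(self, subject, relation):
        # Returns only stored facts. Nothing is generated, so an unknown
        # query yields an empty answer instead of a hallucination.
        return sorted(self._facts.get((subject, relation), set()))

kg = KnowledgeGraph()
kg.add_fact("Mount Everest", "height_m", "8849")
kg.add_fact("Python", "designed_by", "Guido van Rossum")

print(kg.query("Mount Everest", "height_m"))   # ['8849']
print(kg.query("Mount Everest", "capital_of")) # [] -- unknown, not invented
```

The trade-off the thread is pointing at lives in `add_fact`: every entry costs human curation time, which is exactly the expense that reportedly made the approach unattractive.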

[–] [email protected] 10 points 5 months ago (1 children)

I've seen suggestions that the AI Overview is based on the top search results for the query, so the terrible answers may have more to do with Google Search just being bad than with any issue with their AI. The AI Overview just makes things a bit worse by removing the context, so you can't see that the glue-on-pizza suggestion was a joke on Reddit, or that it was The Onion suggesting eating rocks.

[–] [email protected] 10 points 5 months ago (2 children)

It's quite simple. Garbage in, garbage out. Data they use for training needs to be curated. How to curate the entire internet, I have no clue.

[–] [email protected] 9 points 5 months ago (3 children)

The real answer would be "don't". Have a decent whitelist dor training data with reliable data. Don't just add every orifice of the internet (like reddit) to the training data. Limitations would be good in this case.

[–] [email protected] 7 points 5 months ago (2 children)

It's worse than Reddit; they've been pulling data from The Onion.

[–] [email protected] 10 points 5 months ago (4 children)

Nothing is going to change until people die because of this shit.

[–] [email protected] 11 points 5 months ago

And to show everyone how sorry they are... free Google AI services for a year when you digitally sign this unrelated document.

[–] [email protected] 9 points 5 months ago* (last edited 5 months ago) (3 children)

Yep, better disclaimers are inevitable. When they call it a "feature," it isn't getting fixed.

[–] [email protected] 9 points 5 months ago* (last edited 5 months ago)

I just realized that Trump beat them to the punch. Injecting cleaning solution into your body sounds exactly like something the AI Overview would suggest to combat COVID.

[–] [email protected] 8 points 5 months ago

"It's your responsibility to make sure our products aren't nonsense. All we want to do is to make money off you regardless."

[–] [email protected] 8 points 5 months ago* (last edited 5 months ago) (4 children)

Are they now AI, large language models or AI large language models?

[–] [email protected] 11 points 5 months ago

You ask a lot of questions for a bag of sentient meat.

[–] [email protected] 8 points 5 months ago

What happens when you put a product manager in charge of a software company.

[–] [email protected] 8 points 5 months ago

Think I'll try that glue pizza. An odd taste choice, sure. But Google wouldn't recommend actually harmful things. They're the kings of search, baby! They would have to be legally responsible as individuals for the millions of cases brought against them. They know that, as rich people, they will face the harshest consequences! If anything went wrong, they'd find themselves in a.......STICKY situation!!!!

[–] [email protected] 8 points 5 months ago (1 children)

These models are Mad Libs machines. They just pick the next word based on the input and their training data. As such, there isn't a solution for stopping hallucinations.
