this post was submitted on 27 May 2024
1105 points (98.3% liked)

Technology


You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese won't slide off (pssst... please don't do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."

top 50 comments
[–] [email protected] 223 points 5 months ago (3 children)

In the interest of transparency, I don't know if this guy is telling the truth, but it feels very plausible.

[–] [email protected] 126 points 5 months ago (11 children)

It seems like the entire industry is in pure panic about AI, not just Google. Everyone hopes that LLMs will end years of homeopathic growth through iteration of long-existing technology, which is why it attracts tons of venture capital.

Google, which sits where IBM was decades ago, is too big, too corporate and too slow now, so they needed years to react to this fad. When they finally did, all they were able to come up with was a rushed equivalent of existing LLMs that suffers from all of the same problems.

[–] [email protected] 59 points 5 months ago (1 children)

They all hope it'll end years of having to pay employees.

[–] [email protected] 53 points 5 months ago (2 children)

I think this is what happens to every company once all the smart / creative people have gone. All you have left are the "line must always go up" business idiots who don't understand what their company does or know how to make it work.

[–] [email protected] 27 points 5 months ago

Just want to say that "homeopathic growth" is both hilarious and a perfectly adequate description of what the modern tech industry is.

[–] [email protected] 145 points 5 months ago (4 children)

The solution to the problem is to just pull the plug on the AI search bullshit until it is actually helpful.

[–] [email protected] 46 points 5 months ago (1 children)

Absolutely this. Microsoft is going headlong into the AI abyss. Google should be the company that calls it out and says "No, we value the correctness of our search results too much".

It would obviously be a bullshit statement at this point after a decade of adverts corrupting their value, but that's what they should be about.

[–] [email protected] 26 points 5 months ago (1 children)

Don't count on it. The head of search doesn't care about anything but profit; he's the same guy who drove Yahoo into the ground.

[–] [email protected] 132 points 5 months ago (1 children)

Good. Nothing will get us through the hype cycle faster than obvious public failure. Then we can get on with productive uses.

[–] [email protected] 44 points 5 months ago* (last edited 5 months ago) (16 children)

I don't like the sound of getting on with "productive uses" either though. I hope the entire thing is a catastrophic failure.

[–] [email protected] 94 points 5 months ago (4 children)

If you can't fix it, then get rid of it, and don't bring it back until we reach a time when it's good enough not to cause egregious problems (which is never, so basically don't ever think about using your silly Gemini thing in your products ever again).

[–] [email protected] 82 points 5 months ago (14 children)

Since when has feeding us misinformation been a problem for capitalist parasites like Pichai?

Misinformation is literally the first line of defense for them.

[–] [email protected] 34 points 5 months ago (8 children)

But this is not misinformation, it is uncontrolled nonsense. It directly devalues their offering of being able to provide you with an accurate answer to something you look for. And if their overall offering becomes less valuable, so does their ability to steer you using their results.

So while the incorrectness is not a problem in itself for them (as you can see from his answer), the degradation of their ability to influence results is.

[–] [email protected] 78 points 5 months ago (5 children)

Here's a solution: don't make AI provide the results. Let humans answer each other's questions like in the good old days.

[–] [email protected] 36 points 5 months ago (2 children)

Whatever happened to Jeeves? He seemed like a good guy. He probably burned out.

[–] [email protected] 27 points 5 months ago (2 children)

You can find him walking Lycos around Geocities, picking up its poop in little green plastic bags.

[–] [email protected] 76 points 5 months ago (1 children)

Has No Solution for Its AI Providing Wildly Incorrect Information

Don't use it??????

AI has no means to check the heaps of garbage data it has been fed against reality, so even if someone were to somehow code one capable of deep, complex epistemological analysis (at which point it would already be something far different from what the media currently calls AI), as long as there's enough flat-out wrong stuff in its data there's a growing chance of it screwing up.

[–] [email protected] 74 points 5 months ago (3 children)

Wow. In the 2000s and 2010s, my impression of Google was that it was an amazing company where brilliant people work to solve big problems and make the world a better place. In the last 10 years, all I was hoping for was that they would just stop making their products (search, YouTube) worse.

Now they're just blindly riding the AI hype train, because "everyone else is doing AI".

[–] [email protected] 69 points 5 months ago (3 children)

and our parents told us Wikipedia couldn't be trusted....

[–] [email protected] 25 points 5 months ago

Huh. That made me stop and realize how long I've been around. Wikipedia still feels like a new addition to society to me, even though I've been using it for around 20 years now.

And what you said is something I've cautioned my daughter about; I first said it to her about ten years ago.

[–] [email protected] 66 points 5 months ago (1 children)

Replace the CEO with an AI. They're both good at lying and telling people what they want to hear, until they get caught

[–] [email protected] 65 points 5 months ago (2 children)

"It's broken in horrible, dangerous ways, and we're gonna keep doing it. Fuck you."

[–] namingthingsiseasy 61 points 5 months ago (4 children)

The best part of all of this is that now Pichai is going to really feel the heat of all of his layoffs and other anti-worker policies. Google was once a respected company and a place where people wanted to work. Now they're just some generic employer with no real lure to bring people in. It worked fine when all he had to do was raise prices on their current offerings and stuff in more ads, but when it comes to actual product development, they're so hopelessly adrift that it's pretty hilarious watching them flail.

You can really see that consulting background of his doing its work. It's actually kinda poetic because now he'll get a chance to see what actually happens to companies that do business with McKinsey.

[–] [email protected] 59 points 5 months ago (2 children)

Step 1: Replace the CEO with AI. Step 2: Ask the new AI CEO how to fix it. Step 3: Blindly enact and reinforce its steps.

[–] [email protected] 53 points 5 months ago (7 children)

Rip up the Reddit contract and don’t use that data to train the model. It’s the definition of a garbage in garbage out problem.

[–] [email protected] 51 points 5 months ago (8 children)

these "hallucinations" are an "inherent feature" of AI large language models (LLM), which is what drives AI Overviews, and this feature "is still an unsolved problem."

Then what made you think it’s a good idea to include that in your product now?!

[–] [email protected] 50 points 5 months ago* (last edited 5 months ago) (3 children)

So if a carmaker releases a car model that randomly turns abruptly to the left for no apparent reason, do you simply say "I can't fix it, deal with it"? No, you pull it from the market, try to fix it, and, if that is not possible, you retire the model before it kills anyone.

[–] [email protected] 27 points 5 months ago

I bet if there weren't agencies forcing them to do this, they wouldn't recall.

[–] [email protected] 49 points 5 months ago (2 children)

If you train your AI to sound right, your AI will excel at sounding right. The primary goal of LLMs is to sound right, not to be correct.
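That training objective can be illustrated with a toy frequency model (an illustrative sketch, not how any real system is trained; the corpus and probabilities are made up):

```python
from collections import Counter

# Hypothetical toy corpus in which a false claim simply appears more often.
corpus = ("cheese sticks with glue . " * 3 + "cheese sticks with sauce . ").split()

# A likelihood-trained model estimates P(next word | previous word) from raw counts.
bigrams = Counter(zip(corpus, corpus[1:]))

def prob(prev, nxt):
    total = sum(c for (p, _), c in bigrams.items() if p == prev)
    return bigrams[(prev, nxt)] / total

print(prob("with", "glue"))   # 0.75 -- the frequent, wrong continuation
print(prob("with", "sauce"))  # 0.25 -- the truthful, rarer one
```

Nothing in that objective rewards truth; it only rewards matching what the training text tends to say.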

[–] [email protected] 46 points 5 months ago (7 children)

The media needs to stop calling this AI. There is no intelligence here.

The content generator models know how to put probabilistic tokens together. They have no ability to reason.

Evaluating text to determine whether it's factual is a currently unsolvable problem... until we have artificial general intelligence.

AI will not be able to act like real AI until we solve real AI. That is the currently open problem.
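A minimal sketch of what "putting probabilistic tokens together" means (the probability table is invented for illustration; real models have billions of parameters, but the decoding loop is the same idea):

```python
# A toy "language model": nothing but a table of next-token probabilities.
# Decoding just follows the numbers; no step ever consults reality.
model = {
    "put":   {"glue": 0.5, "sauce": 0.3, "cheese": 0.2},
    "glue":  {"on": 1.0},
    "sauce": {"on": 1.0},
    "on":    {"pizza": 1.0},
    "pizza": {".": 1.0},
}

def greedy_decode(token, steps):
    out = [token]
    for _ in range(steps):
        # Pick the most probable next token -- fluency, not fact-checking.
        out.append(max(model[out[-1]], key=model[out[-1]].get))
    return " ".join(out)

print(greedy_decode("put", 4))  # "put glue on pizza ."
```

The output is grammatical and confidently delivered, and no part of the process ever asked whether it was true.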

[–] [email protected] 43 points 5 months ago (2 children)

This is so wild to me... as a software engineer, if my software doesn't work 100% of the time as specified, it fails tests, doesn't get released, and I get told to fix all issues before going live.

AI is basically another word for unreliable software full of bugs.

[–] [email protected] 40 points 5 months ago (3 children)

Have they tried not using it? 🤦

[–] [email protected] 39 points 5 months ago (1 children)

I mean, they could disable it until it works; otherwise they're knowingly misleading people.

[–] [email protected] 33 points 5 months ago

Obviously you don't have a business degree.

[–] [email protected] 38 points 5 months ago (5 children)

How about stop forcing it on us?

[–] [email protected] 38 points 5 months ago

Google CEO essentially says the first result should not be trusted.

[–] [email protected] 37 points 5 months ago (47 children)

TBH this is surprisingly honest.

[–] [email protected] 37 points 5 months ago

Maybe if you can't get it to be accurate you shouldn't be trying to insert it into everything.

[–] [email protected] 36 points 5 months ago

The answer is: don't inflate your stock price by cramming the latest tech du jour into your flagship product... but we all know that's not an option.

[–] [email protected] 36 points 5 months ago* (last edited 5 months ago) (3 children)

This is what happens every time society goes along with tech bro hype. They just run directly into a wall. They are the embodiment of "Didn't stop to think if they should" and it's going to cause a lot of problems for humanity.

[–] [email protected] 34 points 5 months ago (10 children)

I think we should stop calling things AI unless they actually have their own intelligence independent of human knowledge and training.

[–] [email protected] 30 points 5 months ago (6 children)

I have a solution: stop using their search engine to begin with, and slowly replace every other Google product you use.

[–] [email protected] 29 points 5 months ago (1 children)

There is apparently no limit to calling a bug a feature.

[–] [email protected] 28 points 5 months ago

I know an easy fix: just don't do AI.

[–] [email protected] 28 points 5 months ago (1 children)

The model literally ate The Onion, and now they can't get it to throw it back up.

[–] [email protected] 27 points 5 months ago* (last edited 5 months ago) (1 children)

They polluted their model with the sewage of the Internet.

The only worse thing they could have done is base their entire LLM dataset on 4chan.

[–] [email protected] 25 points 5 months ago (5 children)

So you have a product that you've built into a system for getting answers, and then you couldn't be bothered to sanitize the training data enough to keep your answer system's new headline feature from spreading blatantly incorrect information? If it doesn't work, maybe don't ship it.
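A first-pass data sanitization step could be as simple as filtering known-satire sources out of the corpus before training (a hypothetical sketch: the domain list and record format are invented, and real pipelines involve much more, like deduplication and quality scoring):

```python
# Hypothetical blocklist of satire sites to exclude from a training corpus.
SATIRE_DOMAINS = {"theonion.com", "clickhole.com"}

def sanitize(records):
    """Keep only records whose source domain is not a known satire site."""
    return [r for r in records if r["domain"] not in SATIRE_DOMAINS]

docs = [
    {"domain": "theonion.com", "text": "Experts recommend glue on pizza"},
    {"domain": "example.org",  "text": "Melted cheese adheres via fat and protein"},
]
print(sanitize(docs))  # only the example.org record survives
```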

[–] [email protected] 25 points 5 months ago (1 children)

Then it sounds like the "web" tab should be the default, and the AI Overview should be the optional tab the user has to choose to click on.

[–] [email protected] 25 points 5 months ago* (last edited 5 months ago)

That's OK. We were already used to not getting what we wanted from your search, and we're already working on replacing you, since you opted to replace yourselves with advertising instead of information, the role you were supposed to fulfill and which you betrayed.

Die in ignominy. Open source is the only way forward.
