this post was submitted on 23 May 2024
953 points (100.0% liked)

TechTakes



I see Google's deal with Reddit is going just great...

[–] [email protected] 10 points 6 months ago (2 children)

TBH I'm curious what the difference between this and "hallucinating" would be.

[–] [email protected] 6 points 6 months ago (1 children)

I think 'hallucinating' is when it makes up the source/idea through (effectively) word association that generates the concept, whereas here it's repeating a real source.

[–] [email protected] 6 points 6 months ago* (last edited 6 months ago) (1 children)

Couldn't that describe 95% of what LLMs do?

At the end of the day it's a really good autocomplete; it's just that sometimes the autocomplete gets it wrong.
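
To illustrate what I mean, here's a toy "autocomplete" sketch in Python (just a word-frequency bigram model I made up for this comment, nothing like how a real LLM is actually built): it extends a prompt with whichever word most often followed the previous word in its training text, so whatever the data says, true or false, is what comes back out.

```python
# Toy "autocomplete": pick whichever word most often followed the previous
# one in the training text. Real LLMs are far more sophisticated, but the
# basic failure mode is similar: the output is whatever the data makes
# most likely, not whatever is true.
from collections import Counter, defaultdict

# Made-up training text for illustration only.
training_text = (
    "the moon is made of rock . "
    "the moon is made of rock and dust . "
    "the moon is made of cheese according to one forum post . "
)

# Count which word follows which.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def autocomplete(prompt, length=8):
    """Greedily extend the prompt with the most common next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# Prints: "the moon is made of rock . the moon is made of rock"
# Skew the training text the other way and it would happily say cheese.
print(autocomplete("the moon is made of"))
```

Scale that up by billions of parameters and you get something far more fluent, but it's still picking likely continuations, not checking facts.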

[–] [email protected] 3 points 6 months ago

Yes, nicely put! I suppose 'hallucinating' describes the case where, to the reader, it appears to state a fact, but that fact doesn't represent anything in the training data at all.

[–] [email protected] -3 points 6 months ago (2 children)

Well, it's referencing something, so the problem is the dataset, not an inherent flaw in the AI.

[–] [email protected] 15 points 6 months ago (1 children)

i'm pretty sure that referencing this indicates an inherent flaw in the AI

[–] [email protected] 13 points 6 months ago* (last edited 6 months ago)

The inherent flaw is that the dataset needs to be both extremely large and vetted for quality with an extremely high level of accuracy. That can't realistically exist, and any technology that relies on something that can't exist is by definition flawed.