[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

I'm quite aware that it's less likely to hallucinate, in the technical sense, in these cases. But focusing on that technicality doesn't serve users well.

These (interesting and useful) use cases don't address the core issue: the query was written by the LLM, without expert oversight, which still leads to situations that are effectively hallucinations.

Technically, it is returning a "correct" direct answer to a question that no rational actor would ever have asked.

But when a hallucinated (correct-looking but deeply flawed) query is sent to the system of record, it's most honest to call the results a hallucination as well, even though they're technically real data, just astonishingly poorly chosen real data.
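To make that concrete, here's a minimal sketch (hypothetical schema, hypothetical question, and a plausible but wrong LLM-written query, none of it from any real system) of how valid SQL can return real rows that answer the wrong question:

```python
import sqlite3

# Hypothetical scenario: the user asks "What was our revenue last quarter?"
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, amount REAL, status TEXT, created_at TEXT);
    INSERT INTO orders VALUES
        (1, 100.0, 'refunded',  '2025-01-15'),
        (2, 250.0, 'completed', '2025-02-03'),
        (3,  75.0, 'completed', '2024-11-20');
""")

# A query an LLM might plausibly generate: syntactically valid, runs fine,
# but it ignores refunds and the quarter boundaries entirely.
llm_query = "SELECT SUM(amount) FROM orders;"

# The output is "correct" for the query as written -- and meaningless as an
# answer to the question the user actually asked.
print(conn.execute(llm_query).fetchone()[0])  # 425.0
```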

To the end user, that meaningless, correct-looking, wrong result is still just going to be called a hallucination by common folks.

For common usage, it's important not to promise end users that these scenarios are free of hallucination.

You and I understand that, technically, they're not getting back a hallucination, just an answer to a bad question.

But for end users to use the tool safely, they still need to know that a meaningless, correct-looking, wrong answer is still possible (and, today, still likely).