this post was submitted on 17 May 2024
502 points (95.0% liked)
Technology
you are viewing a single comment's thread
Why do tech journalists keep using businesses' language about AI, such as "hallucination", instead of saying it's glitching, bugging, or breaking?
Ty. As soon as I saw the headline, I knew I wouldn't be finding value in the article.
It's not a bad article, honestly; I'm just tired of journalists and academics echoing the language of businesses and their marketing. "Hallucination" isn't an accurate term for this form of AI. These are sophisticated generative text tools, and in my opinion they lack any qualities that justify all this fluffy terminology personifying them.
Also, frankly, I think students have found one of the better applications for large-language-model AIs than many adults, even the ones trying to deploy them. Students use them to do their homework and generate their papers, which is exactly one of the basic things these tools are built for. Too many adults act as if these tools, in their present form, should be used as research aids, but their entire generative basis undermines their reliability for that. It's using the wrong tool for the job.
You don't want any of the generative capacities of a large-language-model AI for research help; you'd instead want whatever text processing it can do to assemble and provide accurate output.
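The distinction that comment draws can be sketched with a toy example (not a real LLM; the vocabulary and probabilities here are made up for illustration): a generative model samples each next token from a probability distribution, so fluent-but-false output is a normal outcome of the mechanism rather than a malfunction, whereas an extractive lookup can only return text that already exists in its source.

```python
import random

# Toy next-token distribution: each token maps to (word, probability) pairs.
# Entirely invented for illustration; real models learn this from data.
NEXT_TOKEN_PROBS = {
    "the": [("moon", 0.5), ("paper", 0.5)],
    "moon": [("is", 1.0)],
    "paper": [("is", 1.0)],
    "is": [("made", 0.5), ("peer-reviewed", 0.5)],
    "made": [("of", 1.0)],
    "of": [("cheese", 0.5), ("rock", 0.5)],
}

def generate(token, rng, max_len=6):
    """Sample a continuation token by token. The output is always fluent
    under the model, but nothing checks whether it is true."""
    out = [token]
    for _ in range(max_len):
        choices = NEXT_TOKEN_PROBS.get(token)
        if not choices:
            break
        words, weights = zip(*choices)
        token = rng.choices(words, weights=weights)[0]
        out.append(token)
    return " ".join(out)

# An extractive lookup, by contrast, only returns verbatim source text,
# so it cannot invent a claim that isn't in its source.
SOURCE = "the moon is made of rock"

def extract(query):
    return SOURCE if query in SOURCE else None

rng = random.Random(0)
print(generate("the", rng))  # fluent output; may or may not be factual
print(extract("moon"))       # only ever returns the verbatim source
```

Run it a few times with different seeds and `generate` will happily produce "the moon is made of cheese": a plausible sentence under the model, which is the behavior marketing calls "hallucination". The `extract` path is the boring text-processing mode the comment argues is what research use actually needs.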