this post was submitted on 29 Jan 2025
961 points (98.9% liked)
Technology
You made me look ridiculously stupid, and rightfully so. Actually, I take that back: I made myself look stupid, and you made it as obvious as it gets! Thanks for the wake-up call.
If I understand correctly, the model is in a way a dictionary of questions and responses, where the journey of figuring out the response is skipped. As in, the answer to the question "What's the point of existence?" is "42", but it doesn't contain the thinking process that led to that result.
If that's so, then wouldn't it be especially prone to hallucinations? I don't imagine it would respond adequately to the third "why?" in a row.
You kind of get it. It's not really a dictionary; it's more like a set of steps that transform noise, tinted with your data, into more coherent data: pass the input through a series of valves that are each open a different amount.
If we set the valves just right, the output will roughly look like what we want it to.
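The valves analogy maps loosely onto a tiny feed-forward network: each "valve" is a weighted sum squashed to a value between 0 (closed) and 1 (open), and "setting the valves" means training the weights. A minimal sketch, with made-up sizes, random untrained weights, and invented data:

```python
import math
import random

random.seed(0)

def layer(inputs, weights):
    """One bank of 'valves': each output is a weighted sum of the
    inputs, squashed to (0, 1) -- how far open that valve is."""
    outputs = []
    for row in weights:
        total = sum(w * x for w, x in zip(row, inputs))
        outputs.append(1 / (1 + math.exp(-total)))  # sigmoid squash
    return outputs

# "Noise tinted with your data": random jitter around a made-up input.
data = [0.2, 0.9, 0.4]
noise = [x + random.uniform(-0.1, 0.1) for x in data]

# Two banks of randomly set valves; training would tune these weights
# so the final output looks like what we want.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

hidden = layer(noise, w1)
output = layer(hidden, w2)
print(output)  # with untrained weights this is still just shaped noise
```

With random weights the output is meaningless; the whole trick of training is nudging every weight until the valves are "set just right".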
Yes, LLMs are prone to hallucinations, which isn't always a bad thing; it's only bad when you're trying to do things that need 100% accuracy, like specific math.
I recommend 3blue1brown's videos on LLMs for a nice introduction to how they actually work.