Technically it's not, because the LLM doesn't decide to do anything; it just generates an answer based on a mixture of the input and the training data, plus some randomness.
That said, I think it makes sense to say that it is lying if the text it generates can deceive the user.
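To illustrate the "plus some randomness" part, here's a rough sketch of temperature sampling over a next-token distribution. The tokens and probabilities are made up for illustration, not taken from any real model:

```python
import random

# Toy next-token probabilities a model might assign after some prompt.
# These numbers are invented purely for illustration.
next_token_probs = {"blue": 0.70, "clear": 0.20, "falling": 0.07, "lying": 0.03}

def sample_next_token(probs, temperature=1.0):
    """Re-weight the probabilities by temperature, then draw one token at random."""
    # p ** (1/T) sharpens the distribution for T < 1 and flattens it for T > 1;
    # random.choices normalizes the relative weights internally.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# Same prompt, same weights -- repeated runs can still give different answers.
print([sample_next_token(next_token_probs, temperature=0.8) for _ in range(5)])
```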
And is that different from the way you make decisions, fundamentally?
I don't think I run on AMD or Intel, so uh, yes.
I didn't say anything about either.
Idk, that's still an area of active research. I certainly think it's very different, since my understanding is that human thought is based on concepts instead of denoising noise or whatever it is LLMs do.
My understanding is that they're fundamentally different processes, but since we don't understand brains perfectly, maybe we happened on an accurate model. Probably not, but maybe.