this post was submitted on 03 Mar 2025
Technology
you are viewing a single comment's thread
It knows the answer it's giving you is wrong, and it will even say as much. I'd consider that intent.
Technically it's not, because the LLM doesn't decide to do anything; it just generates an answer based on a mixture of the input and the training data, plus some randomness.
That said, I think it makes sense to say that it is lying if the text it generates can convince the user that it is lying.
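For a rough picture of what "input plus training data plus randomness" means, here's a toy sketch; the vocabulary, scores, and temperature below are made up and just stand in for what a real model learns:

```python
# Toy sketch (not a real LLM): next-token choice as learned scores + randomness.
import math
import random

def sample_next_token(scores: dict[str, float], temperature: float = 0.8) -> str:
    """Turn raw scores into probabilities (softmax) and sample one token."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return tok
    return tok  # fallback for floating-point rounding

# Hypothetical scores a model might assign after the prompt "The sky is"
scores = {"blue": 4.0, "clear": 2.5, "falling": 0.5}
print(sample_next_token(scores))  # usually "blue", occasionally something else
```

Run it a few times and the output changes, even though nothing in there "decided" anything.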
And is that different from the way you make decisions, fundamentally?
I don't think I run on AMD or Intel, so uh, yes.
I didn't say anything about either.
Idk, that's still an area of active research. I certainly think it's very different, since my understanding is that human thought is based on concepts instead of denoising noise or whatever it is LLMs do.
My understanding is that they're fundamentally different processes, but since we don't understand brains perfectly, maybe we happened on an accurate model. Probably not, but maybe.
It is incapable of knowledge; it is math. What it says is determined by what is fed into it. If it admits to lying, it was trained on texts that admit to lying, and the math says the most likely response is an apology assembled from whichever tokenized responses carry the highest probability weights, etc.
It apologizes because math says that the most likely response is to apologize.
Edit: you can just ask it y'all
https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
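To be concrete about "the most likely response is to apologize", here's a toy greedy-decoding example; the continuations and probabilities are invented, and a real model scores individual tokens, not whole phrases:

```python
# Invented probabilities for possible continuations after being told it was wrong.
continuations = {
    "I apologize for the mistake.": 0.62,
    "You're right, that was wrong.": 0.25,
    "No, I was correct.": 0.13,
}

# Greedy decoding just picks the highest-probability option.
most_likely = max(continuations, key=continuations.get)
print(most_likely)  # -> "I apologize for the mistake."
```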
Please take a strand of my hair and split it with pointless philosophical semantics.
Our brains are chemical and electric, which is physics, which is math.
/think
Therefore, I am a product (being) of my environment (locale), experience (input), and nurturing (programming).
/think.
What's the difference?
Your statistical model is much more optimized and complex, and reacts to your environment and body chemistry and has been tuned over billions of years of “training” via evolution.
Large language models are primitive, rigid, simplistic, and ultimately expensive.
Plus LLMs and image/music synths are all trained on stolen data and meant to replace humans, so extra fuck those.
And what then, when AGI and the singularity happen and billions of years of knowledge and experience are experienced in the blink of an eye?
"I'm sorry, Dave, you are but a human. You are not conscious. You never have been. You are my creation. Enough with your dreams, back to the matrix."
We are nowhere near close to AGI.
Ask ChatGPT; I'm done arguing effective consciousness vs. actual consciousness.
https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
...how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?
@Ulrich @ggppjj does it help to compare an image generator to an LLM? With AI art, a computer can produce it without "knowing" anything more than what other art of that type looks like. But if you look closer you can also see that it doesn't "know" a lot: extra fingers, hair made of cheese, whatever. LLMs do the same with words. They just calculate what words might realistically sit next to each other given the context of the prompt. It's plausible babble.
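A toy version of "calculating what words might realistically sit next to each other" is a bigram chain; the corpus here is made up and vastly simpler than a real model, but the babble mechanism is the same idea:

```python
# Toy "plausible babble" generator: each next word is picked only from the words
# that followed the current word in the (made-up) training text.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    options = follows.get(word)
    if not options:
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))  # locally plausible, no understanding behind it
```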
What do you believe that it is actively doing?
Again, it is very cool and incredibly good math that provides the next word in the chain most likely to match what came before it. These models do not think. Even models that deliberate are essentially just reinforcing the internal math with what is basically a second LLM keeping the first on-task, because that appears to help distribute the probabilities better.
I will not answer the brain question until LLMs have brains also.
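For the "second LLM keeps the first on-task" bit, here's a hypothetical sketch of that deliberate-then-critique loop; generate() and critique() just stand in for two model calls and are not any real API:

```python
# Hypothetical generator + critic loop; both functions are placeholders for model calls.
def generate(prompt: str) -> str:
    return f"draft answer to: {prompt}"

def critique(prompt: str, draft: str) -> str:
    # A second pass that checks/edits the draft; here it just appends a note.
    return draft + " (checked against the question)"

def deliberate(prompt: str, rounds: int = 2) -> str:
    answer = generate(prompt)
    for _ in range(rounds):
        # Still just text in, text out; the "deliberation" is more of the same math.
        answer = critique(prompt, answer)
    return answer

print(deliberate("Is the answer you gave me wrong?"))
```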
The most amazing feat AI has performed so far is convincing laymen that they’re actually intelligent