Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
We have to work out what intelligence is before we can develop AI. Sentient AI? Forget about it!
I think sentience is generally considered a very low bar, while sapience better describes thinking on the level of a real person. I get the two confused sometimes.
In the case of an LLM-type AI, though, the bars can be swapped in a sense. LLMs are strange because they can talk but not feel.
You can't argue that a series of tensor calculations is sentient (def. able to perceive or feel), i.e. capable of experiencing life from the "inside". A dog is sentient by most definitions; it could even be argued to have a "soul". When you look at a dog, the dog looks back at you. An LLM does not. It is not conscious, not "alive".
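To make "a series of tensor calculations" concrete, here is a toy sketch, purely illustrative and nothing like a real LLM's scale: a single layer's forward pass boils down to a matrix multiply followed by a softmax, turning an input embedding into a probability distribution over a tiny vocabulary. All numbers here are made up for the example.

```python
import math

def matvec(W, x):
    # One "tensor calculation": multiply a weight matrix by an input vector.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def softmax(logits):
    # Convert raw scores into a probability distribution (sums to 1).
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

W = [[0.5, -1.0], [1.2, 0.3], [-0.7, 0.9]]  # 3-word vocab, 2-dim embedding
x = [1.0, 2.0]                               # input embedding
probs = softmax(matvec(W, x))
# The highest probability picks the "next token" -- no perception, no feeling,
# just arithmetic from start to finish.
```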
However, an LLM does put on a fair appearance of being sapient (def. intelligent; able to think). They contain large stores of knowledge and, aside from humans, they are now the only things on the planet that can talk. You can have a discussion with one; you can tell it that it was wrong, and it can debate or clarify using its internal knowledge. It can "reason", and anyone who has worked with one on writing code can attest to this, having seen its capability to work around restrictions.
It doesn't have to be sentient to be able to do this sort of thing, even though we used to think that was practically a prerequisite. Thus the philosophical confusion around them.
Even if this is simply a clever trick by a glorified autocomplete algorithm, it is something the dog cannot do despite its sentience. Thus an LLM with a decent number of parameters is "smarter" than a dog, and arguably more sapient.
No, not really. You're misunderstanding the words, and also vastly overestimating LLMs. LLMs such as the OpenAI™ models cannot reproduce a dog's barking well enough to fool humans or animals unless they're trained on dog-barking data to the point of specialization. That's because they lack any general thinking capability, period.
Learning algorithms require massive amounts of sample data to function, and pretty much never work outside of narrow purposes such as predicting which word will come next in a sentence. I personally think that disqualifies them from both sentience and sapience, though they could certainly pass a written sentience test.
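As a toy illustration of "predicting what word will come next": a bigram counter, vastly simpler than any real LLM, already does a crude version of this from raw frequency counts. The corpus and function names below are made up for the example.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    # Count, for each word, which words followed it in the training text.
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # "Predict" by returning the most frequent follower seen in training.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram("the dog barks and the dog sleeps and the cat sleeps")
next_word = predict_next(model, "the")  # -> "dog" ("dog" followed "the" twice)
```

The same point the comment makes applies here in miniature: the model only "works" on the data distribution it was trained on, and returns nothing at all for a word it has never seen.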