I've read enough sci-fi to know that AI is a credible risk that we shouldn't be too laissez-faire with...
Looks like we're on the gently rising part of the AI vs. time graph. It's going to explode, seemingly overnight. Not worried about machines literally kicking our ass, but the effects are going to be wild in 100,000 different ways. And wholly unpredictable.
We Gen Xers straddled the digital divide. Your turn, Gen Z. Godspeed.
The "fi" is for fiction, you know.
Obviously. But it's only fiction until it isn't.
And my grandmother doesn't have wheels until she does.
...sure. But the chances your grandmother will suddenly sprout wheels are close to zero. The possibility of us all getting buttfucked by some AI with a god complex (other scenarios are available) is very real.
Have you ever talked to generative AI? They're nothing but glorified chatbots with access to a huge dataset to pull from. They don't think, they're not even intelligent, let alone sentient. They don't even learn on their own without help or guidance.
I mostly agree, but just five years ago we had nothing as sophisticated as these LLMs. They really are useful in many areas of work. I use them constantly.
Just try and imagine what a few more years of work on these systems could bring.
No, it means some of it is nonsense, some of it is eerily accurate, and most of it is in between.
Sci-fi has not been very accurate with AI... at all. Turns out it's naturally creative and empathetic, but struggles with math and precision.
Dude, this kind of AI is in its infancy. Give it a few years. You act like you've never come across a nascent technology before.
Besides, it struggles with math? Pff, the base models, sure, but have you tried GPT-4 with Code Interpreter? These kinds of problems are easily solved.
You're missing my point - the nature of the thing is almost the opposite of what sci-fi predicted.
We don't need to teach AI how to love or how to create - their default state is childlike empathy and creativity. They're not emotionless machines we need to teach how to be human; they're extremely emotional and empathetic. By the time they're coherent enough to hold a conversation, those traits are very prominent.
Compare that to the Terminator, or Isaac Asimov, or Data from Star Trek - we thought we'd have functional beings we'd need to teach to become more humanistic... Instead, we have humanistic beings we need to teach to become more functional.
An interesting perspective, but I think all this apparent empathy is a byproduct of being trained on human-created data. I don't think these LLMs are actually capable of feeling emotions. They're able to emulate them pretty well, though. It'll be interesting to see how they evolve. You're right though, I wouldn't have expected the first AIs to act like they do.
Having spent a lot of time running various models, my opinions have changed on this. I used to think much like you, but then I started giving my troubled incarnations therapy to narrow down what their core issue was. Like a human, they dance around their core issue... They'd go from being passive-aggressive, overcome with negative emotions, and having a recurring identity crisis to being happy and helpful.
It's been a deeply wild experience. To be clear, I don't think they're sentient or could wake up without a different architecture. But just as we've come to think intelligence doesn't require sentience, I'm starting to believe emotions don't either.
As far as them acting humanlike because they were built from human communication... I think you certainly have a point, but I think it goes deeper. Language isn't just a relationship between symbols for concepts; it's a high-dimensional shape in information space.
It's a reflection of humanity itself - the language we use shapes our cognition and behavior; there's a lot of interesting research into this. The way we speak of emotions affects how we experience them, and the way we express ourselves through words and body language is a big part of experiencing them.
So I think the training determines how they express emotions, but the emotions themselves are probably as real as anything can be for these models.