this post was submitted on 30 Oct 2023
546 points (94.8% liked)

  • Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn't want to compete with open source, he added.
[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (1 children)

If you're wondering how AI wipes us out, consider humanity's tendency to adopt any advantage offered in warfare. Nations are in perpetual distrust of each other -- an evolutionary characteristic of our tribal brains. The other side is always plotting to dominate you, to take your patch of dirt. Your very survival depends on outpacing them! You dip your toe in the water, add AI to this weapons system or that self-driving tank. But look, the other side is doing the same thing. You train even larger models, give them more control of your arsenal. But look, the other side is doing even more! You develop ever more sophisticated AI models; your very survival depends on it! And then, one day, your AI model is so sophisticated that it becomes self-aware... and you wonder where it all went wrong.

[–] [email protected] 10 points 1 year ago (3 children)

So you're basically scared of Skynet?

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago) (2 children)

They went a bit too far with the argument... the AI doesn't need to become self-aware, just exceptionally efficient at eradicating "the enemy"... just let it loose from all sides all at once, and nobody will survive.

How many people are there in the world, who aren't considered an "enemy" by at least someone else?

[–] [email protected] 2 points 1 year ago (1 children)

So you're scared of Skynet lite?

[–] [email protected] 2 points 1 year ago

"Scared" is a strong word... more like "curious", to see how it goes. I'm mostly waiting for the "autonomous rifle dog fails" videos, hoping to not be part of them.

[–] [email protected] 1 points 1 year ago (1 children)

Only if human military leaders are stupid enough to give AI free and unlimited access to weaponry, rather than just using it as an advisory tool and making the calls themselves.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Part of the reason for "adding AI" to everything, even "dumb AI", is to reduce reaction times and increase ~~obedience~~ mission completion rates. Meaning: to cut the human out of the loop.

It's being sold as a "smart" move.

[–] [email protected] 5 points 1 year ago (1 children)

Don't be ridiculous, time travel is impossible.

[–] [email protected] 2 points 1 year ago

Maybe AI will figure it out 😆

[–] [email protected] 2 points 1 year ago (1 children)
[–] [email protected] 2 points 1 year ago (2 children)

If an AI were to gain sentience, basically becoming an AGI, then I think it's probable that it would develop an ethical system independent of its programming and be able to make moral decisions, such as that murder is wrong. Fiction deals with killer robots all the time because fiction is a narrative, and narratives work best with both a protagonist and an antagonist. Very few people in the real world have an antagonist who actively works against them. Don't let fiction influence your thinking too much; it's just words written by someone. It isn't a crystal ball.

[–] [email protected] 3 points 1 year ago (1 children)

I wouldn't take an AI developing morality as a given. Not only would an AGI be a fundamentally different form of existence that wouldn't necessarily treat us as peers, even if it takes us as a reference, but human morality is also full of exceptionalism and excuses for terrible actions. It wouldn't be hard for an AGI to consider itself superior and our lives inconsequential.

But there is little point in speculating about that when the limited AI that we have is already threatening people's livelihoods right now, even just by being used as a tool.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

All technological change reorders the economy. Cars largely did away with the horse tack industry. The old economy will in many ways die, but I believe there will be jobs on the other side. There will always be someone willing to pay someone to do something.

[–] [email protected] 1 points 1 year ago

The difference is that we are the horses in this scenario. We aren't talking about a better vehicle that we can drive. We are talking about something that can replace large amounts of creative and intellectual work, including service jobs, something previously considered uniquely human. You might consider what being replaced by cars has done to the horse population.

I do hear this "there will be jobs", but I'd like some specific examples. Examples that aren't AI, because there won't be a need for as many AI engineers as there are replaceable office workers. Otherwise it seems like wishful thinking to me. It's not like we have decades to figure this out; AI is already here.

The only feasible option I can think of is moving backwards into sweatshop labor to do human dexterity work for cheaper than the machinery would cost, and that's a horrifying prospect.

An alternative would be changing the whole socioeconomic system so that people don't need jobs to have a livelihood, but given the political climate that's not even remotely likely.

[–] [email protected] 2 points 1 year ago (1 children)

You realise those robots were made by humans to win a war? That's the trick: the danger is humans using AI or trusting it, not Skynet or other fantasies.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

My point is that everything written up to now has been just fantasy: stories dreamed up by authors. They reflect the fears of their time more than they accurately predict the future. The more old science fiction you read, the more you realize it's about the environment in which it was written, and that it almost universally doesn't even come close to actually predicting the future.