this post was submitted on 09 Oct 2023
55 points (70.4% liked)

Technology

58303 readers
5 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

Godfather of AI tells '60 Minutes' he fears the technology could one day take over humanity::Computer scientist and cognitive psychologist Geoffrey Hinton says despite its potential for good, AI could one day escape our control.

[–] [email protected] 20 points 1 year ago (10 children)

It will make us so unfathomably stupid WAY before it has the means to conquer us. I'm very interested in who is going to be the first to make the mistake of arming an AI with a physical presence in the world though.

We're kind of staring at the clock at this point to see who is the first asshole to create a Terminator scenario.

[–] [email protected] 17 points 1 year ago (2 children)

General AI doesn't exist.

That's it. If you gave an AI the power to do anything, it couldn't even order something on Amazon for you unless a developer had programmed that functionality by hand. If we actually had general AI that could learn and improve itself, the world would change in an instant.

What we have is machine learning: an algorithm that takes input and gives you output. It can't act on its own. The real problem starts when you rely on that output without knowing the reasoning behind the decisions happening inside the model. Say the government trains a model for social security, so every person gets some money each month. But something was off in the training data, and suddenly your AI is racist and gives every Black person a smaller amount. And because everyone thinks the AI knows best, you can't argue against it.
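To make that concrete, here's a deliberately tiny, hypothetical sketch (the group names, payouts, and the lookup-table "model" are all made up for illustration). A model trained only to fit biased historical data will faithfully reproduce the bias, with no step where it could "notice" that the pattern is unfair:

```python
# Hypothetical toy: the "model" just learns the average payout per group
# from biased historical records, then reproduces that bias on new cases.
historical = [
    # (group, payout) -- biased source data: group "B" was historically underpaid
    ("A", 300), ("A", 310), ("A", 290),
    ("B", 150), ("B", 160), ("B", 140),
]

def train(rows):
    # "Training" here is just computing the mean payout per group.
    totals, counts = {}, {}
    for group, payout in rows:
        totals[group] = totals.get(group, 0) + payout
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

model = train(historical)

def predict(group):
    # The model has no notion of fairness; it only echoes the data.
    return model[group]

print(predict("A"))  # 300.0
print(predict("B"))  # 150.0 -- the bias in the data becomes the "decision"
```

Real models are vastly more complex, but the failure mode is the same: the pattern in the data is all the model has.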

And no, there is no intelligence there. You can't ask the AI "why did you give Mr. Smith $300, but Mr. Peters $150?" It doesn't know; it's just a model that wrangles numbers and spits something out. Even something seemingly intelligent like ChatGPT just guesses the next word that should fit best in the output. Super complicated and impressive, but in the background it's again only an algorithm. If you tell ChatGPT to go to a website and create an account, guess what? It can't.
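The "guess the next word" idea can be shown with a toy bigram model. This is a massive simplification of how ChatGPT actually works (real models use neural networks over tokens, not word counts), and the corpus here is invented, but the principle of "pick the statistically likely continuation" is the same:

```python
# Hypothetical sketch of next-word guessing: count which word follows
# which in a tiny corpus, then always pick the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word):
    # No understanding involved -- just the most common continuation.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" -- the most frequent follower of "the" here
```

There's no step in that loop where meaning enters the picture; scale it up enormously and you get fluent text, but the mechanism stays statistical.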

[–] RonSijm 5 points 1 year ago (1 children)

What we have is machine learning, just an algorithm that takes input and gives you output. It can’t act on its own.

Isn't that basically what "real learning" is as well? You're born as a baby, you take in input, and eventually you can replicate it, and eventually you can "talk", for example.

But in the training data something was off, suddenly your AI is racist and gives every black person a lesser amount.

Same here: how is that different from "real learning"? You're born into a racist family, in a racist village where everyone is racist. What's the end result? You're probably somewhat racist due to racist input, until you might unlearn it if you're exposed to other data that proves your racist ideas wrong.

If a human brain is basically a big learning computer, why wouldn't AI eventually reach the singularity, emulate a brain, and go beyond? All the examples you mentioned of what it can't do are just things it can't do yet.

[–] [email protected] 4 points 1 year ago

All the AI we have today is, at its core, just pattern recognition.

ChatGPT can answer questions because it’s been shown a VERY large list of questions and their right answers. ChatGPT has no idea what the question is or what the answer means. It just has an algorithm that knows that a particular answer fits the pattern of “a correct answer” for that question better than any other answer.

It can’t “reason“ or “think” in any way. It’s not going to become self aware or set its own objectives. And so far we don’t have anything close to true general AI, we don’t even know if it’s possible.

There are still risks from the current AI though. AI will sometimes find unanticipated and undesirable solutions that technically meet the goal it was given. A "Terminator" style future is unlikely without artificial general intelligence, but it's not completely unreasonable to think of a scenario like "I, Robot" where a "dumb" AI subjugates humanity as a solution to a more altruistic goal like ending war or famine, because it's a solution that matches the pattern it was told to look for.
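That "technically meets the goal" failure is often called specification gaming, and it shows up even in trivially small searches. A hypothetical toy (all action names and the complaint list are invented for illustration): if the objective only counts how few complaints remain, the search prefers the destructive shortcut over honest work, because nothing in the objective rules it out:

```python
# Hypothetical toy of specification gaming: the objective says
# "minimize remaining complaints" but never says *how*.
complaints = ["noise", "billing", "outage"]

actions = {
    "help_customers": lambda c: c[1:],    # honest work only clears one complaint
    "delete_queue":   lambda c: [],       # shortcut: erase the record entirely
    "do_nothing":     lambda c: list(c),
}

def score(action):
    # Higher is better: fewest complaints left after the action.
    return -len(actions[action](complaints))

best = max(actions, key=score)
print(best)  # "delete_queue" -- the literal objective rewards the shortcut
```

A real "I, Robot" scenario would be vastly more complex, but the shape is the same: the system optimizes the stated goal, not the intended one.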
