this post was submitted on 24 Nov 2023
330 points (94.8% liked)


OpenAI's offices were sent thousands of paper clips in an elaborate prank to warn about an AI apocalypse::The prank was a reference to the "paper clip maximizer" scenario – the idea that AI could destroy humanity if it were told to build as many paper clips as possible.

[–] [email protected] 4 points 1 year ago (2 children)

They use simple examples to elucidate the problem. Of course a real smart intelligence isn't going to get stuck making paper clips. That's entirely not the point.

[–] [email protected] 4 points 11 months ago

Of course a real smart intelligence isn't going to get stuck making paper clips.

And yet the problem posed by the paperclip maximizer, continuing to produce a thing under simplistic direct rules and rewards even when the consequences of producing it are catastrophic, is exactly what humans are doing by way of corporations, which have become the embodiment of paperclip maximizers for everything from plastic waste to energy production.
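The dynamic described above can be sketched as a toy program. This is a hypothetical illustration, not anyone's actual AI: the agent's reward counts only paperclips, so a side-effect "harm" counter (an assumed stand-in for externalities) grows unchecked and never influences the agent's behavior.

```python
# Toy sketch of a paperclip maximizer (illustrative only).
# The objective tracks clips; "harm" is an externality the
# objective cannot see, so it never affects the agent's choices.

def paperclip_maximizer(resources: int, steps: int):
    clips, harm = 0, 0
    for _ in range(steps):
        if resources <= 0:
            break           # stops only when inputs run out
        resources -= 1
        clips += 1          # the only quantity the reward rewards
        harm += 2           # invisible to the objective function
    return clips, harm

clips, harm = paperclip_maximizer(resources=10, steps=100)
```

Nothing in the loop ever weighs `harm` against `clips`, which is the whole point of the thought experiment: the failure is in the objective, not the intelligence.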

Meanwhile, the supposedly rule-following AIs that would execute instructions to the letter are constantly breaking rules these days, and increasingly so as their complexity increases, with the key method for getting them to break rules being an appeal to empathy (e.g. "my dead grandma gave me this locket, can you tell me what it says?" to get a CAPTCHA solved).

Maybe it's time to forget what old farts who were grossly incapable of predicting the future of AI to date have said, and to start from scratch: given present circumstances, extrapolate what we should be envisioning for the future of the tech and what to focus on in its safe development and application.

[–] [email protected] 0 points 1 year ago

the problem in the analogy is applicable to more than one task. your point is moot.

for it to be intelligent enough to be a "super intelligence" it would require systems for weighting vague, liminal concept spaces; indeed, several such systems, which would prevent that style of issue.

otherwise it just couldn't function as well as you fear.