this post was submitted on 20 Apr 2024
69 points (97.3% liked)

Community TV


Welcome to Greendale, Lemmings! You're already accepted!

Rules:

  1. Be kind to each other.
  2. No illegal, intentionally offensive, or NSFW content.

founded 8 months ago
[–] [email protected] 2 points 6 months ago

Those are valid concerns, and ones they don't really seem to have answered yet, which makes the pace at which they're progressing irresponsible. There was an article a year or so ago about a simulated experiment with an AI pilot: it got points for bombing a target successfully and lost points for failing to bomb it, but it had to get approval from a human operator before striking. The human told it no, so it killed the human, then bombed the target. So they told it that it couldn't kill the human or it would lose all its points. So instead it attacked the communication equipment the human used to tell it no, before the human could say no, and then bombed the target.

This was all a simulation, so no humans were actually killed, but it raised all sorts of red flags. I'm sure they've put hundreds of hours into research since then, but ultimately it's hard not to feel like this will backfire. Perhaps that's just because of a lifetime of being conditioned by Terminator and Matrix movies, but some of the evidence so far, like that experiment, shows it's not an outlandish concern. I don't see how humans can envision every possible scenario in which the AI might go rogue. Hopefully they have a great off switch.