this post was submitted on 22 Sep 2023
52 points (91.9% liked)

Technology


The organizers of a high-profile open letter last March calling for a "pause" in work on advanced artificial intelligence lost that battle, but they could be winning a longer-term fight to persuade the world to slow AI down.

top 7 comments
[–] [email protected] 31 points 1 year ago (2 children)

The only reason Musk signed that letter was because he missed the boat on AI, not for any altruistic reasons

[–] [email protected] 20 points 1 year ago

One reason the letter exists in the first place is that the current leaders in AI would love to pull the ladder up behind them. That's why they have fostered much of the doom-mongering around the technology, which has led to so many people asking for a pause: a pause during which they can solidify their own positions and cut off competition by skewing AI regulation in their favour.

Some of the signatories of the letter are already openly calling for open source AI to be outright banned, because apparently only corporations like OpenAI can be trusted with it.

[–] [email protected] 11 points 1 year ago* (last edited 1 year ago)

Came here to say he's only concerned because he wasn't top dog. If Tesla AI were outperforming GPT, he and his fanboys would be calling any attempt to stop AI development communist.

[–] [email protected] 9 points 1 year ago (2 children)

I mean, yeah, we probably should be taking a look at how things will be affected; things like the Hollywood strike are heavily about that. I highly doubt that made any given team slow their own work, though.

But yeah, we will need laws and shit. Like, if you make a sentient robot and it kills someone, do you get in trouble? That might require a new law, I don't know. So yeah, nothing wrong with taking a look at the potential fallout. It's not a zero-sum thing, though.

[–] DrDeadCrash 2 points 1 year ago (1 children)

That's all well and good, but the work will not stop...

[–] [email protected] 2 points 1 year ago

Yeah, exactly. And to expand further: everyone should focus on their own work, the work they're trained for and good at. The people trained for and good at exploring potential fallout are lawyers, philosophers, historians and doctors, I suppose. Probably missed a few.

These are different folks from the people building the actual things. The builders are specialized in building things, not in exploring potential ramifications. It's a different skillset, and while the two aren't mutually exclusive, they're certainly distinct; having one does not come with the other.

This is why it's not zero-sum. The people deciding what is right and wrong to build (with laws) and the people doing the building are not, and should not be, the same people. Since the "teams" are different, the work of one doesn't need to slow the other. Nor should we really slow down: we face heavy international competition in this field and frankly cannot afford to fall behind in capability. That would almost certainly create an even greater risk than blundering ahead, since other people would just blunder ahead without us. That gets us nothing.

We as citizens have work to do in this field as well: discussing these things around water coolers, dinner tables and forums, and in articles, books and conferences, to decide how we ourselves feel about the issue and how it'll affect our fields. Shit's changing fast.

[–] [email protected] 2 points 1 year ago

I’m not worried about ChatGPT becoming sentient and enslaving humanity, but it is raising novel questions about intellectual property and showing how outdated our rules are.

Still, this kind of change isn’t a first. We will see lawsuits and good/bad legislative efforts. Europe will wait about 15 years and then write the definitive legislation for the world to emulate.

Life will go on.