this post was submitted on 05 Jun 2024
91 points (96.0% liked)

Technology

37739 readers
510 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago
(page 2) 22 comments
[–] qqq 2 points 5 months ago

Wake me up when nixpkgs issues decline significantly from 5k+ due to AI.

[–] [email protected] 1 points 5 months ago

He was interviewed after his septum replacement surgery, got a brand new Teflon one

[–] [email protected] 1 points 5 months ago

🤖 I'm a bot that provides automatic summaries for articles:

In an interview with The New York Times, former OpenAI governance researcher Daniel Kokotajlo accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are so enthralled with its possibilities.

Kokotajlo's spiciest claim to the newspaper, though, was that the chance AI will wreck humanity is around 70 percent — odds you wouldn't accept for any major life event, but that OpenAI and its ilk are barreling ahead with anyway.

The 31-year-old Kokotajlo told the NYT that after he joined OpenAI in 2022 and was asked to forecast the technology's progress, he became convinced not only that the industry would achieve AGI by the year 2027, but that there was a great probability that it would catastrophically harm or even destroy humanity.

Kokotajlo became so convinced that AI posed massive risks to humanity that he eventually urged OpenAI CEO Sam Altman personally that the company needed to "pivot to safety" and spend more time implementing guardrails to rein in the technology rather than continue making it smarter.

Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had "lost confidence that OpenAI will behave responsibly" as it continues trying to build near-human-level AI.

"We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," the company said in a statement after the publication of this piece.


Saved 56% of original text.
