this post was submitted on 20 Nov 2023
109 points (100.0% liked)
technology
you are viewing a single comment's thread
If I understand the conflict at OpenAI correctly, it's a schism between the folks who actually believe in the Skynet threat (led by chief scientist Ilya Sutskever) and those who (correctly) understand the Skynet fears are a marketing tool (Altman and Microsoft).
I always knew that Microsoft was going to cannibalize OpenAI, but I assumed it would be by taking over the infrastructure/IP and booting Altman. It looks like it's the other way around, with them cannibalizing OpenAI's staff: Hundreds of OpenAI employees threaten to resign and join Microsoft. According to that article, OpenAI has 700 employees, and basically all of them are threatening to join Microsoft.
I'm personally quite glad that the scientists have a serious perspective on this. At the very least it means they might withhold their labour if they deem it unsafe at any time.
It's far too early for it to be unsafe, and you're correct that it's marketing at the moment, but it's still good that they take it so seriously they're willing to break companies over it. The fact that they're clashing this hard this early bodes well for when things get into actually dangerous territory.
The problem, though, is that this is pure idealism on the part of the scientists. There's no way anything approaching AGI can be kept under wraps by some scientists, no matter how benevolent they consider themselves. And given our current economic structure, once that cat is out of the bag it'll be hell for the rest of us.
I've seen a lot of talk on the orange site about AI doomerism. I'm not a doomer about AI; I'm a doomer about our society being the wrong structure to handle it.
On one side you have eugenicist doomsday cultists, and on the other you have just your normal opportunist techbro (who's also a eugenicist).