this post was submitted on 20 Nov 2023
Technology
GPT-4 and anything similar isn't going to pose an existential threat to humanity.
Eventually, yeah, there is probably a possibility of existential risk from AI. I don't know where that line ultimately is, and getting an idea of that might be something important for humanity to figure out, but I am pretty confident that whatever OpenAI is presently doing isn't it.
For the same reason, Musk's call for a six-month moratorium on AI work doesn't make much sense: we're not six months away from an existential threat to humanity.
I think that funding efforts to have people in the field working on the Friendly AI problem is a good idea. But that's another story.
The apps using GPT-4 without regard to safety can be, though. Example: replacing a human with a chatbot for suicide prevention.
Being an existential threat is a much higher bar -- that's where humanity's continued existence is at threat.
There are plenty of technologies that you could hypothetically put somewhere where a life might be at stake, but very few that could put humanity's existence on the line.
It's the same situation, just writ large: dumb human decisions to put AI where it shouldn't be. Heck, you can put it in charge of the nuclear missiles now if you want to. Don't, though. That'd be really, really stupid.
Part of my knee-jerk dislike of the AI hype is that it's glorified text completion. It doesn't know shit; it only knows the probability of the next word given the words so far. AGI is not happening anytime soon, and all this is techbro theatre for the sake of money.
Anyone who reads a wall of bland generated text and thinks we're about to talk to god is seriously mistaken.
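To make the "it only knows the probability of the next word" point concrete, here's a toy sketch: a bigram model, vastly simpler than GPT but resting on the same idea of sampling the next word from an estimated probability distribution. The corpus and function names here are made up for illustration.

```python
# Toy next-word predictor (NOT GPT's architecture): estimate, from a tiny
# corpus, how often each word follows each other word, then sample the
# next word in proportion to those counts.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model knows nothing".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word weighted by observed follow-counts."""
    counts = following[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# After "the", the corpus has "model" twice and "next" once, so "model"
# is sampled about twice as often as "next".
print(next_word("the"))
```

GPT replaces the bigram counts with a neural network conditioned on the whole preceding context, but the output step is the same in spirit: a probability over possible next tokens, and a sample from it.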
I'm much more worried about the social implications. Namely, the displacement of workers and introduction of new efficiencies to workflows, continuing to benefit only those who are rich and in power, and driving more of us towards poverty.
It's not an immediate existential threat, but it's absolutely a serious issue that we aren't paying enough attention to.
How did the industrial and information revolutions work out for us? Sure we live lives of convenience, but our entire existences have been manipulated into making the rich richer.
Looking at long and short term trends in the wealth gap, I have absolutely no faith that this will go well.
You do realize that a lot of people are already being displaced by AI, right? These are not "unskilled" jobs either. For example, the illustrators who used to get these jobs probably spent thousands of hours getting to that level.
AI is already taking video game illustrators’ jobs in China
https://restofworld.org/2023/ai-image-china-video-game-layoffs/
CNET used AI to write articles. It was a journalistic disaster. - The Washington Post
https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/