Singularity | Artificial Intelligence (AI), Technology & Futurology
About:
This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments in humanity's technology, and the societal changes that come with them. It is basically a futurology sublemmy centered on AI, but not limited to AI alone.
Rules:
- Posts that break the rules, and whose posters don't bring them into compliance after the violation is pointed out, will be deleted no matter how much engagement they got, then reposted by me in a way that follows the rules. I'll wait a maximum of 2 days for the poster to comply before doing this.
- No low-quality or wildly speculative posts.
- Keep posts on topic.
- Don't make posts with links to paywalled articles as their main focus.
- No posts linking to Reddit posts.
- Memes are fine as long as they are high quality and/or can lead to serious on-topic discussion. If we end up with too many memes, we will create a meme-specific singularity sublemmy.
- Titles must include information on how old the source is, in this format: dd.mm.yyyy (e.g. 24.06.2023).
- Please be respectful to each other.
- No summaries made by LLMs. I would like to keep the quality of comments as high as possible.
- (Rule implemented 30.06.2023) Don't make posts with links to tweets as their main focus. Melon decided that content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a Twitter account just so they can see some news.
- No AI-generated images/videos unless their role is to represent new advancements in generative technology that are no older than 1 month.
- If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".
Related sublemmies:
[email protected] (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)
Note:
My posts on this sub currently rely VERY heavily on info from r/singularity and other subreddits on Reddit. I'm planning to eventually make a list of sites that write/aggregate the kind of news this sublemmy covers, so we can get news faster and not rely on Reddit as much. If you know any good sites, please DM me.
I'm not all that scared of an AI singularity event. If (when) AI reaches superintelligence, it will be so far ahead of us that we'll be, at best, like small children to it, probably closer in intelligence to the rest of the great apes than to it. When people talk about AI taking over, the story usually goes that it will destroy us preemptively to protect itself, or something similar... but we didn't need to destroy all the other animals to take over the planet (yes, we destroy them for natural resources, but that's because we're dumb monkeys who can't think of a better way to get things).
It probably just... wouldn't care about us. Manipulate humanity in ways we can't even comprehend? Sure. But what's the point of destroying humans, even if we got in its way? If I have an ant infestation, I set some traps and the annoying ones die (without ever realizing I was involved), and I just don't care about the ones outside that aren't bothering me.
My hope/belief is that AGI will see us as ants and not as organic paperclip material...
You are simply anthropomorphizing AGI. A superintelligence will be highly capable, but it is unlikely to possess consciousness, values, or goals of its own the way humans do. Once given a goal, if that goal requires extensive resources and the AGI is not properly aligned, it may inadvertently cause harm to humanity while gathering the resources needed to achieve its objective, much like humans constructing roads without concern for ants.
I don't think I'm anthropomorphizing, and I think the road-construction example is what I was already talking about. It likely won't care about us, for good or for bad; that's the opposite of anthropomorphism. When we build roads, maybe some ants are inadvertently killed, but "destroy all ants on Earth" isn't part of the construction plan. Yes, it can certainly cause harm, but there is a very large range of scenarios between "killed a few people" and "full-on human genocide", and for many years I have seen people jump immediately to the extremes.
I think it's beside the point, but I disagree that an AI (which will be trained on the entirety of human knowledge) would lack at least a passing knowledge of human ethics and values. And while consciousness as we perceive it may not be required for intelligence, there is a point where, if it acts exactly as a conscious human would, the difference is largely semantic.
For me, the most likely limiting factor is not the ability of a superintelligent AI to wipe out humanity; sure, in theory, it could.
My guess is that the most plausible limiting factor is that a superintelligent AI might destroy itself before it destroys humanity.
Remember that we (mostly) don't just fritz out, or get depressed and kill ourselves, or whatever. We obtained that robustness by living through a couple of billion years of iterations of life in which all the life forms that lacked that property died. You are the children of the survivors and inherited their characteristics; everything else didn't make it. It was that brutal process, over not thousands or millions but billions of years, that led to us. And even so, we sometimes aren't all that good at dealing with situations different from the one in which we evolved, like when people are forced to live in very close proximity for extended periods of time.
It may be that it's much harder than we think to design a general-purpose AI that can operate at a human-or-above level and won't just keel over and die.
This isn't to reject the idea that a superintelligent AI could be dangerous to humanity at an existential level; it's just that creating a superintelligent AI that will stay alive may be much harder than it seems. Obviously, given the potential utility of a superintelligent AI, people are going to try to create it. I'm just not sure that they will necessarily succeed.