this post was submitted on 14 Apr 2024
28 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago

Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post, there’s no quota for posting and the bar really isn’t that high

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.

[–] [email protected] 16 points 7 months ago* (last edited 7 months ago) (16 children)

Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe?

This professor is arguing we need to regulate AI because we haven't found any space aliens yet, and the most conceivable explanation is that they all wiped themselves out with killer AIs.

And it hits some of the greatest hits:

  • AI will nuke us all because the nuclear powers are so incompetent they'd hook the bombs up to ChatGPT.
  • AI will wipe us out with a killer virus for reasons
  • We may not be adorable enough towards AI to prevent being vaporized even if we become cyborgs 🥺
  • AI will wipe out an entire planet. Solution: we need people on a bunch of different planets and space-stations to study it "safely"
  • Um actually space aliens would all be robots. Be free from your flesh prisons!

Zero mentions of global warming of course.

I kinda want to think that the author has just been reading some weird ideas. At least he put himself out there and wrote a paper with human sentences! The paper is all aboard the AI hype train for sure and constantly makes huge logical leaps, but it somehow doesn't make me feel as skeezy as some of the other stuff on here.

[–] [email protected] 11 points 7 months ago (2 children)

I hate that you can't mention the Fermi paradox anymore without someone throwing AI into the mix. There are so many more interesting discussions to have about it than the idea that we're all gonna be paperclipped by some future iteration of spicy autocomplete.

But what's even worse is that those munted dickheads will then claim that they have also found the solution to the Fermi paradox, which is, of course, to give more money to them so they can make their shitty products ~~even worse~~ safer.

Also:

AI could spell the end of intelligence on Earth (including AI) [...]

Somehow Clippy 9000, clever enough to outsmart the entirety of the human race because it's playing 4D chess with multiverse time travel, is at the same time too stupid to come up with any plan that doesn't kill itself in the end?

[–] [email protected] 8 points 7 months ago

Yeah, the Fermi paradox really doesn't work here: an AI that was motivated and smart enough to wipe out humanity would be unlikely to just immediately off itself. Most of the doomerism relies on "tile the universe" scenarios, which would be extremely noticeable.
