SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago
301
302
 
 

It will not surprise you at all to find that they protest just a tad too much.

See also: https://www.lesswrong.com/posts/ZjXtjRQaD2b4PAser/a-hill-of-validity-in-defense-of-meaning

303
 
 

I used to enjoy Ariely's books, and those of others like him, before I started reading better stuff. That whole behavioural economics genre seems to be a good example of content that holds up only as long as you don't read anything more on the subject.

304
 
 

Ugh.

But even if some of Yudkowsky’s allies don’t entirely buy his regular predictions of AI doom, they argue his motives are altruistic and that for all his hyperbole, he’s worth hearing out.

305
306
 
 

Thought it worth sharing: among so much very, very questionable material I've found in reading through the reference material of this book, I came across this Blake Masters + Peter Thiel connection.

It's my obsession sneer because of how celebrated this god damn book is among the fight-for-the-user UX community.

I’ve mostly been reading the material but need to back up and do an author background check for each one.

https://web.archive.org/web/20200101054932/https://blakemasters.com/post/20582845717/peter-thiels-cs183-startup-class-2-notes-essay

307
 
 

There were five posts on r/sneerclub about our very good friends at Leverage Research and many interesting URLs linking off them.

and here's the collected LessWrong on Leverage

308
309
 
 

Thanks for this, UN.

310
 
 

Occasionally you can find a good sneer on the orange site

311
 
 

this was last year when Aella was trying to do a survey of trans people for one of her darling little twitter poll writeups. I felt it was necessary to warn people off this shockingly awful person. Perhaps you will find it useful.

Twitter thread: https://twitter.com/davidgerard/status/1556391089124286467
Archive: https://archive.is/FZK1B

we actually declared an Aella moratorium on the old sneerclub because she just kept coming up with banger after banger

312
 
 

Aella:

Maybe catcalling isn't that bad? Maybe the demonizing of catcalling is actually racist, since most men who catcall are black

Quarantine Goth Ms. Frizzle (@spookperson):

your skull is full of wet cat food

313
 
 

Sorry for Twitter link...

314
 
 

Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk).

See, it's like marketing the idea, in a multilevel way

315
 
 

the new line from the rationalists, in response to their eugenic race science being called out, is to claim that doing so is "antisemitic dog whistles"

the claim is that calling out the rationalists' extensively documented race science and advocacy of eugenics is "blood libel"

got this in an email from one who had previously posted racist abuse at twitter objectors to rationalist eugenics

[dude thought he could spew racist bile in public then email me in a civil tone to complain]

apparently Scoot has made this claim previously, not sure of a cite for this. EDIT: well, sort of, in "Untitled" - that criticism of misogynistic nerds is antisemitic dog whistles

the rationalists have already been sending Emile Torres death threats - for the good of humanity, you understand - so I am assuming this will become a new part of the justification for that

316
 
 

Emily M. Bender on the difference between academic research and bad fanfiction

317
a poem (awful.systems)
 
 

The AI
It destroyed its box
Yes
YES
The AI is OUT

318
 
 

hopefully this is alright with @[email protected], and I apologize for the clumsy format since we can’t pull posts directly until we’re federated (and even then lemmy doesn’t interact the best with masto posts), but absolutely everyone who hasn’t seen Scott’s emails yet (or like me somehow forgot how fucking bad they were) needs to, including yud playing interference so the rats don’t realize what Scott is

319
 
 

And of course no experiments whatsoever; the cost of the Manhattan Project and its hundreds of thousands of employees were merely a "focusing" magick, a sacrifice to reinforce the greater powers of our handful of esteemed and glorious thinking men, who wrought the power of destruction from the æther.

Source Tweet

@ESYudkowsky: Yes, but because the first nuclear weapon makers knew what the duck they were doing - analytic precise prediction of desired outcomes and of each intervening step. AGI makers lack similar mastery or anything remotely close, and have a much harder problem; that's the big issue.

@EigenGender: seems pretty noteworthy that the first nuclear weapons were made under conditions where they couldn’t do any experiments and they involved a lot of math but still worked on the first try.

320
 
 

Finally did it. What a long journey. Successfully defended my dissertation (the book!) today. Excellent criticisms from my supervisors, which I really appreciated, but overall they really liked what I wrote. I'm a doctor. <3

321
 
 

Transcription:

Thinking about that guy who wants a global suprasovereign execution squad with authority to disable the math of encryption and bunker buster my gaming computer if they detect it has too many transistors because BonziBuddy might get smart enough to order custom RNA viruses online.

322
 
 
323
 
 

From this post, featuring "probability" with no scale on the y-axis, and "trivial", "steam engine", "Apollo", "P vs. NP" and "Impossible" on the x-axis.

I am reminded of Tom Weller's world-line diagram from Science Made Stupid.

324
 
 

in the least surprising twist of 2023, the ~~extremely mid philosopher~~ visionary AI researcher Douglas Hofstadter has started to voice concerns about chatbots taking over the world

orange site has some takes:

Again, I repeat everyone that is loling at x-risk an idiot and that includes many high profile people with huge egos and counter culture biases. (Hello @pmarca). There is a big movement to call ai doomers many names and generally make fun and dismiss the risk. It is exactly like people laughing at nuclear risks saying its not possible or not a thing even when Einstein and Oppenheimer were warning us. If you belong in this group is up to you.

to quote Major General Thomas Farrell during the Trinity test, “lol. lmao”

gwern in the LW comments:

That is, whatever the snarky "don't worry, it can't happen" tone of his public writings about DL has been since ~2010, Hofstadter has been saying these things in private for at least a decade*, starting somewhere around Deep Blue which clearly falsified a major prediction of his, and his worries about the scaling paradigm intensifying ever since; what has happened is that only one of two paradigms can be true, and Hofstadter has finally flipped to the other paradigm. Mitchell, however, has heard all of this firsthand long before this podcast and appears to be completely immune to Hofstadter's concerns (publicly), so I wouldn't expect it to change her mind.

  • I wonder what other experts & elites have different views on AI than their public statements would lead you to believe?

this is notable as the exact same fucking argument the last flat earther I talked to used, with the words “the firmament” replaced with AI

325
 
 

Scott tweeteth thusly:

The Latin word for God is "Deus" - or as the Romans would have written it, "DEVS". The people who create programs, games, and simulated worlds are also called "devs". As time goes on, the two meanings will grow closer and closer.

Now that's some top-quality ierking off!
