SneerClub

1012 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 2 years ago
251
252

@sneerclub

Greetings!

Roko called, just to say he's filed a trademark on Basilisk™ and will be coming after anyone who talks about it for licensing fees, which will go into his special Basilisk™ Immanentization Fund, and if we don't pay up we'll burn in AI hell forever once the Basilisk™ wakes up and gets around to punishing us.

Also, if you see your mom, be sure and tell her SATAN!!!!—

253
3
Universal Watchtowers (awful.systems)
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

by Monkeon, from the b3ta Mundane Video Games challenge

254

Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing was collusion to hide the health effects of tobacco smoking.

255

This totally true anecdote features a friend who "can't recall the names of his parents [but] remember[s] the one thing he'd be safer forgetting."

256

Discussion on AI starts at about 17 minutes in. The Bas(ilisk) drop happens at 20:30. Sorry if ads mess up my timestamps. I think this is the second time it’s come up on the show.

257
258

Source Tweet

@ESYudkowsky: Remember when you were a kid and thought you might have psychic powers, so you dealt yourself face-down playing cards and tried to guess whether they were red or black, and recorded your accuracy rate over several batches of tries?

And then remember how you had absolutely no idea how to do stats at that age, so you stayed confused for a while longer?


Apologies for the use of the Japanese term, but it is a very apt description: https://en.wikipedia.org/wiki/Chūnibyō

259

really: https://archive.ph/p0jPI

Roko’s twitter is an absolutely reliable guide to how recently a woman with dyed hair and facial piercings kicked him in the nuts again

260
261
262

It will not surprise you at all to find that they protest just a tad too much.

See also: https://www.lesswrong.com/posts/ZjXtjRQaD2b4PAser/a-hill-of-validity-in-defense-of-meaning

263

I used to enjoy Ariely's books, and others like them, before I started reading better stuff. That whole behavioural economics genre seems to be a good example of content that holds up only as long as you don't read anything more on the subject.

264

Ugh.

But even if some of Yudkowsky’s allies don’t entirely buy his regular predictions of AI doom, they argue his motives are altruistic and that for all his hyperbole, he’s worth hearing out.

265
266
16
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Thought it worth sharing: among the very, very questionable material I've found in reading through this book's reference material, I came across this Blake Masters + Peter Thiel connection.

It's my obsession sneer because of how celebrated this goddamn book is among the "fight for the user" UX community.

I’ve mostly been reading the material, but I need to back up and do a background check on each author.

https://web.archive.org/web/20200101054932/https://blakemasters.com/post/20582845717/peter-thiels-cs183-startup-class-2-notes-essay

267
3
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

There were five posts on r/sneerclub about our very good friends at Leverage Research and many interesting URLs linking off them.

and here's the collected LessWrong on Leverage

268
269

Thanks for this, UN.

270

Occasionally you can find a good sneer on the orange site

271

this was last year when Aella was trying to do a survey of trans people for one of her darling little twitter poll writeups. I felt it was necessary to warn people off this shockingly awful person. Perhaps you will find it useful.

Twitter thread: https://twitter.com/davidgerard/status/1556391089124286467
Archive: https://archive.is/FZK1B

we actually declared an Aella moratorium on the old sneerclub because she just kept coming up with banger after banger

272

Aella:

Maybe catcalling isn't that bad? Maybe the demonizing of catcalling is actually racist, since most men who catcall are black

Quarantine Goth Ms. Frizzle (@spookperson):

your skull is full of wet cat food

273

Sorry for Twitter link...

274

Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk).

See, it's like marketing the idea, in a multilevel way

275

the new line from the rationalists in response to people calling out their eugenic race science is to claim that doing so is "antisemitic dog whistles"

the claim is that calling out the rationalists' extensively documented race science and advocacy of eugenics is "blood libel"

got this in an email from one of them, who had previously posted racist abuse at twitter objectors to rationalist eugenics

[dude thought he could spew racist bile in public then email me in a civil tone to complain]

apparently Scoot has made this claim previously; not sure of a cite for this. EDIT: well, sort of, in "Untitled": that criticism of misogynistic nerds is antisemitic dog whistles

the rationalists have already been sending Emile Torres death threats - for the good of humanity you understand - so I am assuming this will be a new part of the justification for that
