blakestacey

joined 1 year ago
[–] [email protected] 11 points 6 months ago

Accurate use of the scare quotes around humor there, bro

[–] [email protected] 8 points 6 months ago (3 children)

I have to wonder, though, whether the fact that Google is generating this text itself, rather than just showing text from other sources, means it might actually have to face some consequences in cases where the information it provides ends up hurting people.

Darn good question. Of course, since Congress is thirsty to destroy Section 230 in the delusional belief that this will make Google and Facebook behave without hurting small websites that lack massive legal departments (cough fedi instances)....

[–] [email protected] 20 points 6 months ago

... as one does?

[–] [email protected] 4 points 6 months ago* (last edited 6 months ago)

I thought about assembling a kind of anti-Sequence reading list about quantum mechanics, a view from outside the cult shit that the Sequences try to drown you in, with their bad history, caricatured philosophy and mathematics that ranges from turgid to incorrect. The trouble is that a better understanding is not written all in one place, and even the good papers don't necessarily convey the "everything Yud taught you is wrong" emotional hook. The literature does not lead to cracking many smiles, though I did appreciate Adrian Kent's eel remark in this book review.

Some papers that have a bit more zing than average:

And, if you really want to dive into the waters and open your eyes below the surface:

[–] [email protected] 11 points 6 months ago (1 children)

#WormsTogetherStrong

[–] [email protected] 18 points 6 months ago (6 children)

If natively fluent speakers of the English language use "beg the question" in the "wrong" way time and time again, finding the "incorrect" meaning a natural fit with their understanding of the verb "to beg", then the "incorrect" meaning may well be the one we should roll with.

[–] [email protected] 15 points 6 months ago (7 children)

Hmm, a xitter link, I guess I'll take a moment to open that in a private tab in case it's passingly amusing...

To the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties—

OK, you have my attention now.

To the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties—

During my twenties in Silicon Valley, I ran among elite tech/AI circles through the community house scene. I have seen some troubling things around social circles of early OpenAI employees, their friends, and adjacent entrepreneurs, which I have not previously spoken about publicly.

It is not my place to speak as to why Jan Leike and the superalignment team resigned. I have no idea why and cannot make any claims. However, I do believe my cultural observations of the SF AI scene are more broadly relevant to the AI industry.

I don't think events like the consensual non-consensual (cnc) sex parties and heavy LSD use of some elite AI researchers have been good for women. They create a climate that can be very bad for female AI researchers, with broader implications relevant to X-risk and AGI safety. I believe they are somewhat emblematic of broader problems: a coercive climate that normalizes recklessness and crossing boundaries, which we are seeing playing out more broadly in the industry today. Move fast and break things, applied to people.

There is nothing wrong imo with sex parties and heavy LSD use in theory, but combined with the shadow of $100B+ interest groups, it leads to some of the most coercive and fucked up social dynamics that I have ever seen. The climate was like a fratty LSD version of 2008 Wall Street bankers, which bodes ill for AI safety.

Women are like canaries in the coal mine. They are often the first to realize that something has gone horribly wrong, and to smell the cultural carbon monoxide in the air. For many women, Silicon Valley can be like Westworld, where violence is pay-to-play.

I have seen people repeatedly get shut down for pointing out these problems. Once, when trying to point out these problems, I had three OpenAI and Anthropic researchers debate whether I was mentally ill on a Google document. I have no history of mental illness; and this incident stuck with me as an example of blindspots/groupthink.

I am not writing this on behalf of any interest group. Historically, many of the OpenAI-adjacent shenanigans have been blamed on groups with weaker PR teams, like Effective Altruism and rationalists. I actually feel bad for the latter two groups for taking so many undeserved hits. There are good and bad apples in every faction. There are so many brilliant, kind, amazing people at OpenAI, and there are so many brilliant, kind, and amazing people in Anthropic/EA/Google/[insert whatever group]. I’m agnostic. My one loyalty is to the respect and dignity of human life.

I'm not under an NDA. I never worked for OpenAI. I just observed the surrounding AI culture through the community house scene in SF, as a fly on the wall, hearing insider information and backroom deals, befriending dozens of women and allies and well-meaning parties, and watching many of them get burned. It’s likely these problems are not really about OpenAI but symptomatic of a much deeper rot in the Valley. I wish I could say more, but probably shouldn’t.

I will not pretend that my time among these circles didn’t do damage. I wish that 55% of my brain was not devoted to strategizing about the survival of me and of my friends. I would like to devote my brain completely and totally to AI research— finding the first principles of visual circuits, and collecting maximally activating images of CLIP SAEs to send to my collaborators for publication.

[–] [email protected] 13 points 6 months ago (3 children)

How about I ban you for being obnoxious instead?

[–] [email protected] 17 points 6 months ago

Kludging an "objective reduction" process into the dynamics is throwing out quantum mechanics and replacing it with something else. And because Orch-OR is not quantum mechanics, every observation that a quantum effect might be biologically important somewhere is irrelevant. Orch-OR isn't "quantum biology", it's pixie-dust biology.

[–] [email protected] 12 points 6 months ago (2 children)

Who needs usernames when you have "context clues" instead? :-P

[–] [email protected] 32 points 6 months ago* (last edited 6 months ago) (16 children)

The given link contains exactly zero evidence in favor of Orchestrated Objective Reduction — "something interesting observed in vitro using UV spectroscopy" is a far cry from anything having biological relevance, let alone significance for understanding consciousness. And it's not like Orch-OR deserves the lofty label of theory, anyway; it's an ill-defined, under-specified, ad hoc proposal to throw out quantum mechanics and replace it with something else.

The fact that programs built to do spicy autocomplete turn out to do spicy autocomplete has, as far as I can tell, zero implications for any theory of consciousness one way or the other.
