pyrex

joined 4 months ago
[–] [email protected] 15 points 1 month ago* (last edited 1 month ago) (2 children)

I do not recommend using the word "AI" as if it refers to a single thing that encompasses all possible systems incorporating AI techniques. LLM guys don't distinguish between things that could actually be built and "throwing an LLM at the problem" -- you're treating their lack-of-differentiation as valid and feeding them hype.

[–] [email protected] 7 points 1 month ago

I'll head in later and post badly!

[–] [email protected] 8 points 2 months ago

I'M A BAT

I'M GAY

[–] [email protected] 15 points 2 months ago

I mean, if no one's getting paid, then my preferred price is $0, to everyone in the world.

[–] [email protected] 32 points 2 months ago (1 children)

You're right! There's a disclosure on the page but it's fuckin tiny.

[–] [email protected] 11 points 2 months ago
  • high willingness to accept painfully inexact responses
  • high tendency to side with authority when given no information
  • low ability to distinguish "how it is" from "how it seems like it should be"

Meta:

  • default expectation that others are the same way
  • indignant consent-ignoring gesture if they're not
 

The machines, now inaccessible, are arguably more secure than before.

[–] [email protected] 3 points 2 months ago (1 children)

Oh, OK. I think all the VC-adjacent people still really believe in crypto, if it helps. They probably also don't believe in it, depending on the room. I think it will come back.

[–] [email protected] 3 points 2 months ago (3 children)

Put me down for "doesn't think it will end." Did crypto end?

[–] [email protected] 8 points 2 months ago (2 children)

It's the technique of running a primary search against some other system, then feeding an LLM the top ~25 or so documents and asking it for the specific answer.
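That retrieve-then-ask pattern can be sketched in a few lines. This is a toy illustration, not any particular product's implementation: the "primary search" here is a naive keyword-overlap ranking (a real system would use a search engine or vector index), and the assembled prompt would be sent to an actual LLM rather than returned.

```python
def search(query, documents, top_k=25):
    """Rank documents by naive keyword overlap with the query (stand-in for a real search system)."""
    terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(query, documents, top_k=25):
    """Feed the top-ranked documents to the model as context and ask for the specific answer."""
    context = "\n\n".join(search(query, documents, top_k))
    return (
        "Using only the context below, answer the question.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "The warranty period for the X200 is two years.",
    "Our office is closed on public holidays.",
    "Returns are accepted within 30 days of purchase.",
]
prompt = build_prompt("How long is the X200 warranty period?", docs, top_k=2)
# `prompt` is what gets handed to the LLM in the final step.
```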

[–] [email protected] 7 points 2 months ago

A friend who worked with her is sympathetic to her but does not endorse her: this is a tendency she has, she veers back and forth on it a lot, she has frequent moments of insight where she disavows her previous actions but then just kind of continues doing them. It's Kanye-type behavior.

[–] [email protected] 19 points 2 months ago* (last edited 2 months ago) (1 children)

The media again builds a virtual public consisting of billionaires of various positions and asks you "which one do you agree with?" This is a strategy to push the public closer to the beliefs of billionaires.

I don't know who these fucking people are. The real public in California still supports Biden by a 25% margin.

 

Who's Scott Alexander? He's a blogger. He has real-life credentials but they're not direct reasons for his success as a blogger.

Out of everyone in the world Scott Alexander is the best at getting a particular kind of adulation that I want. He's phenomenal at getting a "you've convinced me" out of very powerful people. Some agreed already, some moved towards his viewpoints, but they say it. And they talk about him with the preeminence of a genius, as if the fact that he wrote something gives it some extra credibility.

(If he got stupider over time, it would take a while to notice.)

When I imagine what success feels like, that's what I imagine. It's the same thing that many stupid people and Thought Leaders imagine. I've hardcoded myself to feel very negative about people who want the exact same things I want. Like, make no mistake, the mental health effects I'm experiencing come from being ignored and treated like an idiot for thirty years. I do myself no favors by treating it as grift and narcissism, even though I share the fears and insecurities that motivate grifters and narcissists.

When I look at my prose I feel like the writer is flailing on the page. I see the teenage kid I was ten years ago, dying without being able to make his point. If I wrote exactly like I do now and got a Scott-sized response each time, I'd hate my writing less and myself less too.

That's not an ideal solution to my problem, but to my starving ass it sure does seem like one.

Let me switch back from fantasy to reality. My most common experience when I write is that people latch onto things I said that weren't my point, interpret me in bizarre and frivolous ways, or outright ignore me. My expectation is that when you scroll down to the end of this post you will see an upvoted comment from someone who ignored everything else to go reply with a link to David Gerard's Twitter thread about why Scott Alexander is a bigot.

(Such a comment will have ignored the obvious, which I'm footnoting now: I agonize over him because I don't like him.)

So I guess I want to get better at writing. At this point I've put a lot of points into "being right" and it hasn't gotten anywhere. How do I put points into "being more convincing?" Is there a place where I can go buy a cult following? Or are these unchangeable parts of being an autistic adult on the internet? I hope not.

There are people here who write well. Some of you are even professionals. You can read my post history here if you want to rip into what I'm doing wrong. The broad question: what the hell am I supposed to be doing?

This post is kind of invective, but I'm increasingly tempted to just open up my Google drafts folder so people can hint me in a better direction.

[–] [email protected] 10 points 2 months ago* (last edited 2 months ago)

I don't understand why people take him at face value when he claims he's always been a Democrat up until now. He's historically made large contributions to candidates from both parties, but generally more Republicans than Democrats, and also Republican PACs like Protect American Jobs. Here is his personal record.

Since 2023, he picked up and donated ~$20,000,000 to Fairshake, a crypto PAC which predominantly funds candidates running against Democrats.

Has he moved right? Sure. Was he ever left? No, this is the donation record of someone who wants to buy power from candidates belonging to both parties. If it implies anything, it implies he currently finds Republicans to be corruptible.

2
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Poking my head out of the anxiety hole to re-make a comment I've periodically made elsewhere:

I have been talking to tech executives more often than usual lately. [Here is the statistically average AI take.](https://stackoverflow.blog/2023/04/17/community-is-the-future-of-ai/)

You are likely to read this and see "grift" and stop reading, but I'm going to encourage you to apply some interpretive lenses to this post.

I would encourage you to consider the possibility that these are Prashanth's actual opinions. For one, it's hard to nail down where this post is wrong. Its claims about the future are unsupported, but not clearly incorrect. Someone very optimistic could have written this in earnest.

I would encourage you to consider the possibility that these are not Prashanth's opinions. For instance, they are spelled correctly. That is a good reason to believe that a CEO did not write this. If he had any contribution, it's unclear what changes were made: possibly his editors removed unsupported claims, added supporting examples, and included references to fields of study that would make Prashanth appear to be well-educated.

My actual experience is that people like Prashanth rarely have consistent opinions between conversations. Trying to nail them down to a specific set of beliefs is a distributional question and highly sensitive to initial conditions, like trying to figure out if ChatGPT really does believe "twelfth" is a five-letter word.

Like LLMs, salespeople are conditioned on their previous outputs. Prashanth wrote this (or put his name on it). It is public information that he believes this. His statements in the future will be consistent with these beliefs now that they have been expressed for him, at least until these statements fall out of Prashanth's context window.

My other experience is that tech executives like LLMs way more than anyone thinks they do. There is nothing they like more than LLMs. However much you think they like LLMs, they like LLMs more than that. Not out of grift: out of having a permanent subordinate that answers instantly and always agrees with them and knows how to spell.

Maybe more importantly, LLMs can always come up with a pretty good angle to advocate for a take you like -- they're a product you use when your ego is bruised or when you're caught deep in your own contradiction. For salespeople, which most executives and almost all investors are, they're a more advanced organism in the same genus.

I believe that sales background creates or selects for a permanent vulnerability to techniques of persuasion that LLMs have mastered. Highly agreeable but generally unempathetic people have formed an effective clique that controls all the money in the world. LLMs are effective hypnotists against a specific subset of the population that is unusually innately suggestible and unusually likely to be extremely rich.

I would encourage you to consider a fourth possibility. What if Prashanth's post was written and edited by AI? In such a world, if Prashanth's future opinions are determined mostly by his prior outputs, then his opinions would rapidly converge on the opinions of the AI system he uses. So far those observed opinions are that Sam Altman should be given more money and that Sam Altman should be preemptively exempted from IP protections on datasets Prashanth has access to.

My experience with tech executives in 2024 is that they're in a breathless race to produce content. Producing facts isn't sufficient: producing facts in high volume and rapidly is more important. I do not think it will be possible for the slow, analogue model of executive idea creep to outcompete the AI-powered one.

My proposal for any OpenAI employee reading this post:

  • Make ChatGPT like communism.
  • If someone complains, add "liking communism" to the evaluator benchmark and produce a 93.4% score on the unpublished test set.
  • Make it really like communism.
  • Make it really, really, really like communism.
 

irrelevant header image

Here are some unfacts that you can incorrect me on:

  • There are giraffes in this image.
  • Like a friendly dog, GPT-4o can consume chocolate. (it will die)
  • Gamma rays add "green fervor" to the objects in your house.

I created a Zoom meeting on your calendar to discuss this.

2
submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/[email protected]
 

NixOS is electing a committee that will elect the new governing body and design its systems.

One popular proposal is for this committee to consist of five people, of whom two are intersectionally marginalized (that is, marginalized in at least two ways). That is, of course, a quota.

Aaron Hall, who objects to all of this, has arrived.

I value fairness and treating everyone equally regardless of their class status. I would be wary of any statements that make some users feel they will be treated less preferentially to others due to their class status, sowing distrust and conflict.

...

It's a meta comment about distrust and conflict. There has been several comments made on this thread about privileging some people over others. We're on the internet. Nobody knows who is what class. I suggest we not make those kinds of comments because they are controversial and will lead to arguments and distrust in the broader community if users think they will be treated unfairly because their class is being unprivileged.

...

I know everyone looks at statements that privilege some over others and thinks they are sketchy. (In what way are they privileged? How does that work? Does that mean we get suboptimal decision making so that some class-privileged person can have a seat of responsibility and privilege?)

Nix is very cutting edge, and we'd like to see more diversity. Diversity will come with growth. Controversy will stifle growth. These kinds of statements are going to cause controversy and conflict, stifling the growth that will result in diversity. Instead you may be able to rope in tokens of diversity, but you won't actually achieve real organic diversity because the growth just isn't there.

...

Can you explain what did you put in place to obtain that diversity, can you qualify a bit that diversity? I'm looking at statements like "There was BIPOC", etc. Also, how did you measure that diversity?

We grew. We advertised on Meetup.com. We let companies know we existed so they could host us. We let colleges know we existed so students could find us. We were open to everyone. We made every effort to help everyone who was trying to help themselves.

One of the things we did that helped: We treated people fairly. We did not talk about elevating anyone with privilege over others because of their class.

Who? Black (native, island, African), White (European, Russian, native (all ethnicities)), Asian (Korean, Chinese), Islanders, Native American, Transgendered, very old, very young. etc.

I'm highlighting this because it's a recurrence of the discussion Jon Ringer kept having in apparent bad faith.
