YourNetworkIsHaunted

joined 1 year ago
[–] [email protected] 4 points 4 hours ago

Nah, to keep with the times it should be a matte black Tesla Model 3 with the Sith Empire insignia on top and a horn that plays the Imperial March.

[–] [email protected] 4 points 2 days ago (2 children)

Behind the Bastards just wrapped their four-part series on the Zizians, which has been a fun trip. Nothing like seeing the live reactions of someone who hasn't been at least a little bit plugged into the whole space for years.

I haven't finished part 4, but so far I've deeply appreciated Robert's emphasis on how the Zizian nonsense isn't that far outside the bounds of normal Rationalist nonsense, and how the Rationalist movement itself has a long history as a kind of cult incubator, even if Yud himself hasn't fully leveraged his influence over a self-selecting high-control group.

Also the recurring reminders of the importance of touching grass and talking to people who haven't internet-poisoned themselves with the same things you have.

[–] [email protected] 6 points 3 days ago

Script kiddies at least have the potential to learn what they're doing and become proper hackers. Vibe coders are like middle management; no actual interest in learning to solve the problem, just trying to find the cheapest thing to point at and say "fetch."

There's a headline in there somewhere. Vibe Coders: stop trying to make fetch happen

[–] [email protected] 21 points 5 days ago (1 children)

Get David Graeber's name out ya damn mouth. The point of Bullshit Jobs wasn't that these roles weren't necessary to the functioning of the company; it was that they were socially superfluous. As in the entire telemarketing industry, which is both reasonably profitable and as well-run as any other, but the world would be objectively better if it didn't exist.

The idea was not that "these people should be fired to streamline efficiency of the capitalist orphan-threshing machine".

[–] [email protected] 10 points 5 days ago

This is how you know that most of the people working in AI don't think AGI is actually going to happen. If there were any chance of these models somehow gaining a meaningful internal experience, then making this their whole life and identity would be some kind of war crime.

[–] [email protected] 7 points 6 days ago

New watermark technology interacts with increasingly widespread training-data poisoning efforts so that if you try to have a commercial model remove it, the picture is replaced entirely with dickbutt. Actually, can we just infect all AI models so that any output contains a hidden dickbutt?

[–] [email protected] 1 points 6 days ago

I'm reminded of my previous comment back on an unrelated subreddit talking about the Eye of Argon. Obviously that wasn't as structurally insane as My Immortal, but I think the same principle holds to a degree:

"With a decent editor and several further drafts it could have been a solid, fun, entirely forgettable Conan pastiche. Instead, it's the Eye of Argon."

[–] [email protected] 10 points 1 week ago (1 children)

I mean, it does amount to the US government - aka "the confederation of racist dunces" - declaring their intention to force the LLM owners - all US-based companies (except maybe those guys out of China, a famous free speech haven) - to make sure their model outputs align with their racist dunce ideology. They may not have a viable policy in place to effect that at this point, but it would be a mistake to pretend they're not going to implement one.

The best case scenario is that it ends up being designed and implemented incompetently enough that it just crashes the AI markets. The worst case scenario is that we get a half-dozen buggy versions of Samaritan from Person of Interest but with a hate-boner for anyone with a vaguely Hispanic name. A global autocomplete that produces the kind of opinions that made your uncle not get invited to any more family events.

Neither scenario is one that you would want to be plugged into and reliant on, especially if you're otherwise insulated by national borders and a whole Atlantic ocean from the worst of America's current clusterfuck.

[–] [email protected] 2 points 1 week ago

This reminded me that I never actually finished reading it. By which I mean I never finished reading the Wizard of Woah!'s excellent read-along thread over at SpaceBattles. About 75 pages in (what am I doing with my life), someone shows up to defend the fic's quality from the snark, and it is kind of a fascinating collision between LW and more normal internet culture, circa 2015.

[–] [email protected] 4 points 1 week ago

There's never a bad time to remember one of the foundational texts of academic sneerery.

[–] [email protected] 7 points 1 week ago (1 children)

Surely there have to be some cognitive scientists who are at least a little bit less racist who could furnish alternative definitions? The actual definition at issue does seem fairly innocuous from a layman's perspective: "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience." (Aside: it doesn't do our credibility any favors that, for all the concern about the source, I had to track all the way to Microsoft's paper to find the quote at issue.)

The core issue is obviously that they either took it completely out of context or else decided it wasn't important that their source was explicitly arguing in favor of specious racist interpretations of shitty data.

But it also feels like breaking down the idea itself may be valuable. Like, is there even a real consensus that those individual abilities and skills are actually correlated? Is it possible to be less vague than "among other things"? What does it mean to be "more able to learn from experience" or "more able to plan" in a way that is rooted in an innate capacity rather than in the context and availability of good information? And on some level, if that kind of intelligence is a unique and meaningful thing not emergent from context and circumstance, how are we supposed to see it emerge from statistical analysis of massive volumes of training data? (Machine learning models are nothing but context and circumstance.)

I don't know enough about the state of non-racist neuroscience or whatever the relevant field is to know if these are even the right questions to ask, but it feels like there's more room to question the definition itself than we've been taking advantage of. If nothing else the vagueness means that we haven't really gotten any more specific than "the brain's ability to brain good."

[–] [email protected] 5 points 1 week ago

These are AI bros, and should be assumed to be both racist and lazy. Of course they kept it.

I don't have much to add here, but I know when she started writing about the specifics of what Democrats are worried about being targeted for their "political views" my mind immediately jumped to members of my family who are gender non-conforming or trans. Of course, the more specific you get about any of those concerns the easier it is to see that crypto doesn't actually solve the problem and in fact makes it much worse.
