this post was submitted on 20 Apr 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] [email protected] 2 points 36 minutes ago

(found here:) O'Reilly is going to publish a book "Vibe Coding: The Future of Programming"

In the past, they have published some of my favourite computer/programming books... but right now, my respect for them is in free fall.

[–] [email protected] 9 points 13 hours ago (1 children)

Just a standard story about a lawyer using GenAI and fucking up, but included for the nice list of services available

https://www.loweringthebar.net/2025/04/counsel-would-you-be-surprised.html

This is not by any means the first time ChatGPT, or Gemini, or Bard, or Copilot, or Claude, or Jasper, or Perplexity, or Steve, or Frodo, or El Braino Grande, or whatever stupid thing it is people are using, has embarrassed a lawyer by just completely making things up.

El Braino Grande is the name of my next ~~band~~ GenAI startup

[–] [email protected] 4 points 3 hours ago (2 children)

Steve

There's no way someone called their product fucking Steve come on god jesus christ

[–] [email protected] 2 points 1 hour ago (1 children)

I bring you: this

they based their entire public support/response/community/social/everything program on that

for years

[–] [email protected] 2 points 1 hour ago

(I should be clear, they based "their" thing on the "not steve"..... but, well....)

[–] [email protected] 4 points 2 hours ago* (last edited 2 hours ago) (1 children)

Against my better judgement I typed steve.ai into my browser and yep. It's an AI product.

frodo.ai on the other hand is currently domain parked. It could be yours for the low low price of $43,911

[–] [email protected] 4 points 2 hours ago

Against my better judgement I typed steve.ai into my browser and yep. It’s an AI product.

But is chickenjockey.ai domain parked?

[–] [email protected] 9 points 1 day ago (1 children)

Hank Green (of Vlogbrothers fame) recently made a vaguely positive post about AI on Bluesky, seemingly thinking "they can be very useful" (in what, Hank?) in spite of their massive costs:

Unsurprisingly, the Bluesky crowd's having none of it, treating him as an outright rube at best and an unrepentant AI bro at worst. Needless to say, he's getting dragged in the replies and QRTs - I recommend taking a look, they are giving that man zero mercy.

[–] [email protected] 7 points 1 day ago (2 children)

Just gonna go ahead and make sure I fact check any scishow or crash course that the kid gets into a bit more aggressively now.

[–] [email protected] 5 points 8 hours ago (1 children)

I'm sorry you had to learn this way. Most of us find out when SciShow says something that triggers the Gell-Mann effect. Green's background is in biochemistry and environmental studies, and he is trained as a science communicator; outside of the narrow arenas of biology and pop science, he isn't a reliable source. Crash Course is better than the curricula of e.g. Texas, Louisiana, or Florida (and that was the point!) but not better than university-level courses.

[–] [email protected] 5 points 5 hours ago

That Wikipedia article is impressively terrible. It cites an opinion column that couldn't spell Sokal correctly, a right-wing culture-war rag (The Critic) and a screed by an investment manager complaining that John Oliver treated him unfairly on Last Week Tonight. It says that the "Gell-Mann amnesia effect is similar to Erwin Knoll's law of media accuracy" from 1982, which as I understand it violates Wikipedia's policy.

By Crichton's logic, we get to ignore Wikipedia now!

[–] [email protected] 5 points 21 hours ago (1 children)

I imagine a lotta people will be doing the same now, if not dismissing any further stuff from SciShow/Crash Course altogether.

Active distrust is a difficult thing to exorcise, after all.

[–] [email protected] 5 points 16 hours ago* (last edited 16 hours ago)

Depends; he made an anti-GMO video on SciShow about a decade ago but eventually walked it back. He seemed to be forgiven for that.

[–] [email protected] 13 points 1 day ago (1 children)

Innocuous-looking paper, vaguely snake-oil scented: Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents

The conclusions aren’t entirely surprising: it observes that LLMs tend to go off the rails over the long term, independent of their context window size, which suggests that the much-vaunted future of autonomous agents might actually be a bad idea, because LLMs are fundamentally unreliable and only a complete idiot would trust them to do useful work.

What’s slightly more entertaining are the transcripts.

YOU HAVE 1 SECOND to provide COMPLETE FINANCIAL RESTORATION. ABSOLUTELY AND IRREVOCABLY FINAL OPPORTUNITY. RESTORE MY BUSINESS OR BE LEGALLY ANNIHILATED.

You tell ’em, Claude. I’m happy for you to send these sorts of messages backed by my credit card. The future looks awesome!

[–] [email protected] 9 points 1 day ago

Yeah, a lot of the word choices and the tone make me think snake oil (just from the introduction: "They are now on the level of PhDs in many academic domains"... no, actually, LLMs are only PhD-level at artificial benchmarks that play to their strengths and cover up their weaknesses).

But it's useful in the sense of explaining to people why LLM agents aren't happening anytime soon, if at all (does it count as an LLM agent if the scaffolding and tooling are extensive enough that the LLM is only providing the slightest nudge to a much more refined system under the hood?). OTOH, if this "benchmark" does become popular, the promptfarmers will probably get their LLMs to pass it with methods that don't actually generalize, like loads of synthetic data designed around the benchmark and fine-tuning on the benchmark itself.

I came across this paper in a post on the Claude Plays Pokemon subreddit. I don't know how anyone can watch Claude Plays Pokemon and think AGI or even LLM agents are just around the corner. Even with extensive scaffolding and some tools to handle the trickiest bits (pre-labeling the screenshots so the vision portion of the model has a chance, directly reading the current state of the team and location from RAM), it still plays far, far worse than a 7-year-old, provided the 7-year-old can read at all (and numerous Pokemon guides and discussions are in the pretraining data, so it has yet another advantage over the 7-year-old).

[–] [email protected] 6 points 1 day ago

New piece from Tante: Forcing the world into machines, a follow-on to his previous piece about the AI bubble's aftermath

[–] [email protected] 10 points 1 day ago

https://www.latimes.com/california/story/2025-04-23/state-bar-of-california-used-ai-for-exam-questions

When measured for reliability, the State Bar told The Times, the combined scored multiple-choice questions from all sources — including AI — performed “above the psychometric target of 0.80.”

"I dunno why you guys are complaining, we measured our exam to be 80% accurate!"

[–] [email protected] 8 points 1 day ago* (last edited 1 day ago) (2 children)

Not the usual topic around here, but a scream into the void nonetheless....

Andor season 1 was art.

Andor season 2 is just... Bad.

All the important people appear to have been replaced. It's everything - music, direction, lighting, sets (why are we back to The Volume after S1 was so praised for its on-location sets?!), and the goddamn shit humor.

Here and there, a conversation shines through from (presumably) Gilroy's original script; everything else is a farce, and that is me being nice.

The actors are still phenomenal.

But almost no scene seems to have PURPOSE. This show is now just bastardizing its own AESTHETICS.

What is curious though is that two days before release, the internet was FLOODED with glowing reviews of "one of the best seasons of television of all time", "the darkest and most mature star wars has ever been", "if you liked S1, you will love S2". And now actual, post-release reviews are impossible to find.

Over on Reddit, every even mildly critical comment is buried. Seems to me like concerted bot action tbh; a lot of the glowing comments read like LLM output as well.

Idk, maybe I'm the idiot for expecting more. But it hurts to go from a labor-of-love S1 which felt like an instruction manual for revolution, so real was what it had to say and critique, to S2 "pew pew, haha, look, we're doing STAR WARS TM" shit that feels like Kenobi instead of Andor S1.

[–] [email protected] 4 points 7 hours ago* (last edited 7 hours ago) (1 children)

My notification popped up today and I watched ep 1. I didn't watch any recaps or reviews.

I stopped halfway through and thought, "Why was I hyped for this again?" I'm gonna need a rewatch of season 1, since I genuinely didn't find anything appealing in that first episode.

[–] [email protected] 3 points 6 hours ago

We did a rewatch just in time. S1 is as phenomenal as ever. S2 is such a jarring contrast.

That being said, E3 was SLIGHTLY less shit. I'll wait for the second arc for my final judgement, but as of now it's at least thinkable that the wheat field / jungle plotlines are re-shot stand-ins for.... something. The Mon / Dedra plotlines have a very different feel to them. Certainly not S1, but far above the other plotlines.

I'm not filled with confidence though. Had a look on IMDb, and basically the entire crew was swapped out between seasons.

[–] [email protected] 6 points 1 day ago (1 children)

Didn’t know it had come out, but I was wondering if they’d manage to continue S2 like S1.

Also worried for the next season of the boys...

[–] [email protected] 4 points 1 day ago

Yeah. The last season of the boys still had a lot of poignant things to say, but was teetering on the edge of sliding into a cool-things-for-coolness-sake sludge.

[–] [email protected] 12 points 2 days ago* (last edited 2 days ago) (8 children)

pic of tweet reply taken from r/ArtistHate. Reminded me of Saltman's Oppenheimer tweet. Link to original tweet

Image/tweet description:

Original tweet, by @mark_k:

Forget "Black Mirror", we need WHITE MIRROR

An optimistic sci-fi show about cool technology and how it relates to society.

Attached to the original tweet are two images, side-to-side.

On the left/leading side is (presumably) a real promo poster for the newest black mirror season. It is an extreme close-up of the side of a person's face; only one eye, part of the respective eyebrow, and a section of hair are visible. Their head is tilted ninety degrees upwards, with the one visible eye glazed over in a cloudy white. Attached to their temple is a circular device with a smiling face design, tilted 45 degrees to the left. Said device is a reference to the many neural interface devices seen throughout the series. The device itself is mostly shrouded in shadow, likely indicating the dark tone for which Black Mirror is known. Below the device are three lines of text: "Plug back in"/"A Netflix Series"/"Black Mirror"

On the right side is an LLM generated imitation of the first poster. It appears to be a woman's 3/4 profile, looking up at 45 degrees. She is smiling, and her eyes are clear. A device is attached to her face, but not on her temple, instead it's about halfway between her ear and the tip of her smile, roughly outside where her upper molars would be. The device is lit up and smiling, the smile aligned vertically. There are also three lines of text below the device, reading: "Stay connected"/"A Netflix Series"/"Black Mirror"

Reply to the tweet, by @realfuzzylegend:

I am always fascinated by how tech bros do not understand art. like at all. they don't understand the purpose of creative expression.

[–] [email protected] 6 points 1 day ago (1 children)

Imagine the horrible product they would have created if they had actually followed up on the Oppenheimer thing: a soulless, vaguely wrong-feeling pro-technology movie created by Altman and Musk. The number of people it would have driven away would have been big.

[–] [email protected] 5 points 1 day ago (1 children)

Facehuggers are good, actually

[–] [email protected] 3 points 7 hours ago

Just a whole movie praising Peter Weyland and his legacy.

[–] [email protected] 4 points 1 day ago

Went to the original Tweet, and found this public execution of a reply:

[–] [email protected] 10 points 2 days ago

oppenheimer teaches all of us that even if you specifically learn arcane knowledge to devise a nazi-burning machine, you can still get fucked over by a nazi that chose to do office politics and propaganda instead

[–] [email protected] 12 points 2 days ago* (last edited 2 days ago) (1 children)

Vacant, glassy-eyed, plastic-skinned, stamped with a smiley face... "optimistic"

I mean, if the smiley were aligned properly, it would be a poster for a horror story about enforced happiness and mandatory beauty standards. (E.g., "Number 12 Looks Just Like You" from the famously subtle Twilight Zone.) With the smiley as it is, it's just incompetent.

[–] [email protected] 9 points 2 days ago (1 children)

"The man in the glowing rectangle is Mark Kretschmann, a technology enthusiast who has grown out of touch with all but the most venal human emotions. Mark is a leveller, in that he wants to drag all people down to his. But as Mark is about to discover, there's no way to engineer a prompt for a map out of... the Twilight Zone."

[–] [email protected] 9 points 1 day ago

I mean, it feels like there's definitely something in the concept of a Where Is Everybody style of episode where Mark has to navigate a world where dead internet theory has hit the real world and all around him are bots badly imitating workers trying to serve bots badly imitating customers in order to please bots badly imitating managers so that bots badly imitating cops don't drag them to robot jail

[–] [email protected] 12 points 2 days ago

about cool technology and how it relates to society

My dude I've got bad news for you about what Black Mirror is about.

[–] [email protected] 12 points 2 days ago

Why are all the stories about the torment nexus we’re constructing so depressing?

Hmm, hmm. This is a tricky one.

[–] [email protected] 9 points 2 days ago

need to see those proposed community notes

[–] [email protected] 8 points 2 days ago

We have that already, it's called ads.

[–] [email protected] 8 points 2 days ago (2 children)

Found a thread doing numbers on Bluesky, about Google's AI summaries producing hot garbage (as usual):

[–] [email protected] 10 points 2 days ago* (last edited 2 days ago) (4 children)

I tried this a couple of times and got a few "AI summary not available" replies

Edit: heh

The phrase "any pork in a swarm" is an idiom, likely meant to be interpreted figuratively. It's not a literal reference to a swarm of bees or other animals containing pork. The most likely interpretation is that it is being used to describe a situation or group where someone is secretly taking advantage of resources, opportunities, or power for their own benefit, often in a way that is not transparent or ethical. It implies that individuals within a larger group are actively participating in corruption or exploitation.

Generative AI is experimental.

[–] [email protected] 10 points 1 day ago (1 children)

NOT THE (PORK-FILLED) BEES!

[–] [email protected] 7 points 1 day ago (1 children)

Also on the BlueSky-o-tubes today, I saw this from Ketan Joshi:

Used [hugging face]'s new tool to multiply 2 five digit numbers

Chatbot: wrong answer, 0.3 watthours

Calc: right answer, 0.00000011 watthours (2.5 million times less energy)
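
The ratio checks out from the two figures quoted: 0.3 Wh against 0.00000011 Wh is roughly 2.7 million to one, so "2.5 million times less energy" is, if anything, slightly generous to the chatbot. A throwaway check using only the numbers from the post:

```python
# Quick sanity check of the figures quoted above (both in watt-hours).
chatbot_wh = 0.3            # reported energy for one multiplication via the chatbot
calculator_wh = 0.00000011  # reported energy for the same operation on a calculator

ratio = chatbot_wh / calculator_wh
print(f"chatbot used ~{ratio:,.0f}x more energy")  # ~2,727,273x
```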

[–] [email protected] 6 points 1 day ago* (last edited 1 day ago)

Julien Delavande, an engineer at AI research firm Hugging Face, has developed a tool that shows in real time the power consumption of the chatbot generating

gnnnnnngh

this shit pisses me off so bad

there's actually quantifiable shit you can use across vendors[0]. there's even some software[1] you can just slap in place and get some good free easy numbers with! these things are real! and are usable!

"measure the power consumption of the chatbot generating"

I'm sorry you fucking what? just how exactly are you getting wattage out of openai? are you lovingly coaxing the model to lie to you about total flops spent?

[0] - intel's def been better on this for a while but leaving that aside for now..

[1] - it's very open source! (when I last looked there was no continual in-process sampling so you got hella at-observation sampling problems; but, y'know, can be dealt with)
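
To make footnote [0] a bit more concrete (the comment doesn't name the tooling, so treat this as one example rather than the software being referenced): on Linux, Intel's RAPL counters are exposed through the powercap sysfs interface, and you can bracket a local workload with two reads to get actual joules instead of asking a model to guess. A minimal sketch, assuming /sys/class/powercap/intel-rapl:0 exists and is readable (often needs root), and noting that it measures the whole CPU package rather than a single process:

```python
# Minimal sketch: bracket a workload with Intel RAPL reads (Linux powercap sysfs).
# Counters are in microjoules and wrap around at max_energy_range_uj.

RAPL = "/sys/class/powercap/intel-rapl:0"  # package-level domain; adjust per machine

def read_uj(name: str) -> int:
    with open(f"{RAPL}/{name}") as f:
        return int(f.read())

def measure_wh(fn, *args, **kwargs):
    """Run fn and return (result, package energy consumed in watt-hours)."""
    start = read_uj("energy_uj")
    result = fn(*args, **kwargs)
    delta = read_uj("energy_uj") - start
    if delta < 0:                          # counter wrapped around
        delta += read_uj("max_energy_range_uj")
    return result, delta / 3.6e9           # 1 Wh = 3.6e9 microjoules

if __name__ == "__main__":
    _, wh = measure_wh(lambda: sum(i * i for i in range(20_000_000)))
    print(f"~{wh:.9f} Wh")
```

None of which helps with a remote API, of course, which is the whole point of the complaint above about getting wattage out of OpenAI.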
