this post was submitted on 24 Mar 2025
27 points (100.0% liked)

TechTakes

1785 readers
89 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
MODERATORS
 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
sorted by: hot top controversial new old
[–] [email protected] 19 points 2 weeks ago (5 children)

The USA plans to migrate SSA's code away from COBOL in months: https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/

The project is being organized by Elon Musk lieutenant Steve Davis, multiple sources who were not given permission to talk to the media tell WIRED, and aims to migrate all SSA systems off COBOL, one of the first common business-oriented programming languages, and onto a more modern replacement like Java within a scheduled tight timeframe of a few months.

“This is an environment that is held together with bail wire and duct tape,” the former senior SSA technologist working in the office of the chief information officer tells WIRED. “The leaders need to understand that they’re dealing with a house of cards or Jenga. If they start pulling pieces out, which they’ve already stated they’re doing, things can break.”

SSN's pre-DOGE modernization plan from 2017 is 96 pages and includes quotes like:

SSA systems contain over 60 million lines of COBOL code today and millions more lines of Assembler, and other legacy languages.

What could possibly go wrong? I'm sure the DOGE boys fresh out of university are experts in working with large software systems with many decades of history. But no no, surely they just need the right prompt. Maybe something like this:

You are an expert COBOL, Assembly language, and Java programmer. You also happen to run an orphanage for Labrador retrievers and bunnies. Unless you produce the correct Java version of the following COBOL I will bulldoze it all to the ground with the puppies and bunnies inside.

Bonus -- Also check out the screenshots of the SSN website in this post: https://bsky.app/profile/enragedapostate.bsky.social/post/3llh2pwjm5c2i

[–] [email protected] 12 points 2 weeks ago (1 children)

Anecdote: I gave up on COBOL as a career after beginning to learn it. The breaking point was learning that not only does most legacy COBOL code use go-to statements but that there is a dedicated verb which rewrites go-to statements at runtime and is still supported on e.g. the IBM Enterprise COBOL for z/OS platform that SSA is likely using: ALTER.

When I last looked into this a decade ago, there was a small personal website last updated in the 1990s that had advice about how to rewrite COBOL to remove GOTO and ALTER verbs; if anybody has a link, I'd appreciate it, as I can no longer find it. It turns out that the best ways of removing these spaghetti constructions involve multiple rounds of incremental changes which are each unlikely to alter the code's behavior. Translations to a new language are doomed to failure; even Java is far too structured to directly encode COBOL control flow, and the time would be better spent on abstract specification of the system so that it can be rebuilt from that specification instead. This is also why IBM makes bank selling COBOL emulators.

[–] [email protected] 10 points 2 weeks ago (4 children)

Yeah I'm sure DOGE doesn't appreciate that structured programming hasn't always been a thing. There was such a cultural backlash against it that GOTO is still a dirty word to this day, even in code where it makes sense, and people will contort their code's structure to avoid calling it.

The modernization plan I linked above talks about the difficulty of refactoring in high level terms:

It is our experience that the cycle of workarounds adds to our total technical debt – the amount of extra work that we must do to cope with increased complexity. The complexity of our systems impacts our ability to deliver new capabilities. To break the cycle of technical debt, a fundamental, system-wide replacement of code, data, and infrastructure is required

While I've never dealt with COBOL I have dealt with a fair amount of legacy code. I've seen a ground up rewrites go horribly horribly due to poor planning (basically there were too many office politics involved and not enough common sense). I think either incremental or ground up can make sense, but you just have to figure out what makes sense for the given system (and even ground up rewrites should be incremental in some respects).

load more comments (4 replies)
[–] [email protected] 11 points 2 weeks ago (1 children)
[–] [email protected] 13 points 2 weeks ago

There is so much bad going on that even just counting the tech-adjacent stuff I have to consciously avoid spamming this forum with it constantly.

load more comments (3 replies)
[–] [email protected] 19 points 3 weeks ago (5 children)

When Netflix inevitably makes a true-crime Ziz movie, they should give her a 69 Dodge Charger and call it The Dukes of InfoHazard

load more comments (5 replies)
[–] [email protected] 17 points 3 weeks ago (8 children)

Dem pundits go on media tour to hawk their latest rehash of supply-side econ - and decide to break bread with infamous anti-woke "ex" race realist Richard Hanania

A quick sample of people rushing to defend this:

[–] [email protected] 11 points 3 weeks ago (2 children)

I almost forgot how exhausting TW was.

load more comments (2 replies)
[–] [email protected] 11 points 3 weeks ago

tracing going all in on left wing people aren't real they can't hurt you

load more comments (6 replies)
[–] [email protected] 15 points 3 weeks ago (14 children)

LW discourages LLM content, unless the LLM is AGI:

https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong

As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.

Never change LW, never change.

[–] [email protected] 10 points 3 weeks ago (1 children)

Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).

load more comments (1 replies)
load more comments (13 replies)
[–] [email protected] 13 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

AI slop in Springer books:

Our library has access to a book published by Springer, Advanced Nanovaccines for Cancer Immunotherapy: Harnessing Nanotechnology for Anti-Cancer Immunity.  Credited to Nanasaheb Thorat, it sells for $160 in hardcover: https://link.springer.com/book/10.1007/978-3-031-86185-7

From page 25: "It is important to note that as an AI language model, I can provide a general perspective, but you should consult with medical professionals for personalized advice..."

None of this book can be considered trustworthy.

https://mastodon.social/@JMarkOckerbloom/114217609254949527

Originally noted here: https://hci.social/@peterpur/114216631051719911

[–] [email protected] 17 points 3 weeks ago (4 children)

I should add that I have a book published with Springer. So, yeah, my work is being directly devalued here. Fun fun fun.

[–] [email protected] 12 points 3 weeks ago (6 children)

On the other hand, your book gains value by being published in 2021, i.e. before ChatGPT. Is there already a nice term for "this was published before the slop flood gates opened"? There should be.

(I was recently looking for a cookbook, and intentionally avoided books published in the last few years because of this. I figured that the genre is a too easy target for AI slop. But that not even Springer is safe anymore is indeed very disappointing.)

load more comments (6 replies)
load more comments (3 replies)
[–] [email protected] 13 points 2 weeks ago (8 children)
[–] [email protected] 13 points 2 weeks ago (1 children)

it doesn't look anything like him? not that he looks much like anything himself but come on

load more comments (1 replies)
[–] [email protected] 10 points 2 weeks ago

sam altman is greentexting in 2025

Ugh. Now I wonder, does he have an actual background as an insufferable imageboard edgelord or is he just trying to appear as one because he thinks that's cool?

load more comments (6 replies)
[–] [email protected] 13 points 2 weeks ago

Angela Collier has a wonderfully grumpy video up, why functioning governments fund scientific research. Choice sneer at around 32:30:

But what do I know? I'm not a medical doctor but neither is this chucklefuck, and people are listening to him. I don't know. I feel like this is [sighs, laughs] I always get comments that tell me, "you're being a little condescending," and [scoffs] yeah. I mean, we can check the dictionary definition of "condescending," and I think I would fit into that category. [Vaccine deniers] have failed their children. They are bad parents. One in four unvaccinated kids who get measles will die. They are playing Russian roulette with their child's life. But sure, the problem is I'm being, like, a little condescending.

[–] [email protected] 12 points 2 weeks ago (13 children)
[–] [email protected] 13 points 2 weeks ago

all of the subculture YouTubers I watch are colliding with the weirdo cult I know way too much about and I hate it

[–] [email protected] 11 points 2 weeks ago

I like the video, but I'm a little bothered that she misattributes su3su2u1's critique to Dan Luu, who makes it very clear he did not write it:

These are archived from the now defunct su3su2u1 tumblr. Since there was some controversy over su3su2u1's identity, I'll note that I am not su3su2u1 and that hosting this material is neither an endorsement nor a sign of agreement.

load more comments (11 replies)
[–] [email protected] 12 points 3 weeks ago (3 children)

Stumbled across some AI criti-hype in the wild on BlueSky:

The piece itself is a textbook case of AI anthropomorphisation, presenting it as learning to hide its "deceptions" when its actually learning to avoid tokens that paint it as deceptive.

On an unrelated note, I also found someone openly calling gen-AI a tool of fascism in the replies - another sign of AI's impending death as a concept (a sign I've touched on before without realising), if you want my take:

load more comments (3 replies)
[–] [email protected] 12 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

https://news.ycombinator.com/item?id=43515426

https://github.com/typedgrammar/typed-japanese

This project is still in very early stages and heavily relies on LLM-generated grammar rules, which may occasionally contain hallucinations or inaccuracies.

お前はもう死んでいる


Edit: from the English version of this project:

export type Pronoun = 'I' | 'you' | 'he' | 'she' | 'it' | 'we' | 'they' | 'me' | 'him' | 'her' | 'us' | 'them';

Ah yes, definitely the only pronouns in all of English

[–] [email protected] 10 points 2 weeks ago

Using an LLM to shit out grammar for an old school symbolic language model is a poetic ouroboros of AI circlejerking.

load more comments (1 replies)
[–] [email protected] 12 points 2 weeks ago (1 children)

Discovered an animation sneering at the tech oligarchs on Newgrounds - I recommend checking it out. Its sister animation is a solid sneer, too, even if it is pretty soul crushing.

load more comments (1 replies)
[–] [email protected] 11 points 2 weeks ago (3 children)

holy shitting fuck, just got the tip of the year in my email

Simplify Your Hiring with AI Video Interviews

Interview, vet, and hire thousands of job applicants through our AI-powered video interviewer in under 3 minutes & 95 languages.

"AI-Video Vetting That Actually Works"

it's called kerplunk.com, a domain named after the sound of your balls disappearing forever

the market is gullible recruiters

founder is Jonathan Gallegos, his linkedin is pretty amazing

other three top execs don't use their surnames on Kerplunk's about page, one (Kyle Schutt) links to a linkedin that doesn't exist

for those who know how Dallas TX works, this is an extremely typical Dallas business BS enterprise, it's just this one is about AI not oil or Texas Instruments for once

load more comments (3 replies)
[–] [email protected] 11 points 2 weeks ago

In other news, the Open Source Intiative's publicly bristled against the EU's attempt to regulate AI, to the point of weakening said attempts.

Tante, unsurprisingly, is not particularly impressed:

Thank you OSI. To protect the purity of your license – which I do not consider to be open source – you are working towards making it harder for regulators to enforce certain standards within the usage of so-called “AI” systems. Quick question: Who are you actually working for? (I know, it is corporations)

The whole Open Source/Free Software movement has run its course and has been very successful for business. But it feels like somewhere along the line we as normal human beings have been left behind.

You want my opinion, this is a major own-goal for the FOSS movement - sure, the OSI may have been technically correct where the EU's demands conflicted with the Open Source Definition, but neutering EU regs like this means any harms caused by open-source AI will be done in FOSS's name.

Considering FOSS's complete failure to fight corporate encirclement of their shit, this isn't particularly surprising.

[–] [email protected] 11 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Annoying nerd annoyed annoying nerd website doesn't like his annoying posts:

https://news.ycombinator.com/item?id=43489058

(translation: John Gruber is mad HN doesn't upvote his carefully worded Apple tonguebaths)

JWZ: take the win, man

load more comments (1 replies)
[–] [email protected] 10 points 3 weeks ago (10 children)

Redis guy AntiRez issues a heartfelt plea for the current AI funders to not crash and burn when the LLM hype machine implodes but to keep going to create AGI:

https://antirez.com/news/148

Neither HN nor lobste.rs are very impressed

[–] [email protected] 10 points 3 weeks ago

Ultra-rare footage of orange site having a good take for once:

Top-notch sneer from lobsters' top comment, as well (as of this writing):

You want my opinion, I expect AntiRez' pleas to fall on deaf ears. The AI funders are only getting funded due to LLM hype - when that dies, investors' reason to throw money at them dies as well.

load more comments (9 replies)
load more comments
view more: next ›