this post was submitted on 27 May 2025
1920 points (99.4% liked)

Programmer Humor

23507 readers
1363 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.


founded 2 years ago
top 50 comments
[–] [email protected] 9 points 7 hours ago (2 children)

Write tests and run them, reiterate until all tests pass.

[–] AnotherPenguin 14 points 4 hours ago

Bogosort with extra steps

[–] [email protected] 8 points 6 hours ago* (last edited 6 hours ago) (1 children)

That doesn't sound viby to me, though. You expect people to actually code? /s

[–] [email protected] 3 points 3 hours ago (2 children)

You can vibe code the tests too, y'know

[–] [email protected] 1 points 28 minutes ago

Return "works";

Am I doing this correctly?

[–] [email protected] 1 points 2 hours ago* (last edited 2 hours ago) (1 children)

You know, I'd be interested to know what critical size you can reach with that approach before it becomes useless.

[–] [email protected] 1 points 44 minutes ago* (last edited 43 minutes ago)

It can become pretty bad quickly, even with a small project of only 15-20 files. I've been using the Cursor IDE, building out flow charts & tests manually, and just seeing where it goes.

And while it's incredibly impressive how it creates all the steps, it then goes into chaos mode, where it starts ignoring all the rules. It'll start changing tests, start pulling in random libraries, not at all thinking holistically about how everything fits together.

Then you try to reel it in, and it continues to go rampant. And for me, that's when I either take the wheel or roll back.

I highly recommend every programmer watch it in action.

[–] [email protected] 48 points 22 hours ago* (last edited 22 hours ago) (1 children)

Watching the serious people trying to use AI to code gives me the same feeling as the cybertruck people exploring the limits of their car. XD

"It's terrible and I should hate it, but gosh, isn't it just so cool"

I wish I could get so excited over disappointing garbage

[–] [email protected] 3 points 1 hour ago (1 children)

You definitely can use AI to code; the catch is you need to know how to code first.

I use AI to write code for mundane tasks all the time. I also review and integrate the code myself.

[–] [email protected] 1 points 1 hour ago

The AI code my “expert in a related but otherwise not helpful field” coworker writes helps me have a lot of extra work to do!

[–] [email protected] 93 points 1 day ago (2 children)

AI code is specifically annoying because it looks like it would work, but it's just plausible bullshit.

[–] [email protected] 29 points 16 hours ago (1 children)

And that's what happens when you spend a trillion dollars on an autocomplete: amazing at making things look like whatever it's imitating, but with zero understanding of why the original looked that way.

[–] [email protected] -4 points 6 hours ago* (last edited 6 hours ago) (1 children)

I mean, there's about a billion ways it's been shown to have actual coherent originality at this point, and so it must have understanding of some kind. That's how I know I and other humans have understanding, after all.

What it's not is aligned to care about anything other than making plausible-looking text.

[–] [email protected] 7 points 5 hours ago (1 children)

Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

And none of these tech companies even pretend that they’ve invented a caring machine that they just haven’t inspired yet. Don’t ascribe further moral and intellectual capabilities to server racks than do the people who advertise them.

[–] [email protected] 0 points 2 hours ago* (last edited 2 hours ago)

Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

You got the "originality" part there, right? I'm talking about tasks that never came close to being in the training data. Would you like me to link some of the research?

Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

Given that both biological and computer neural nets vary by orders of magnitude in size, that means pretty little. It's true that one is based on continuous floats and the other on dynamic spikes, but the end result is often remarkably similar in function and behavior.

[–] [email protected] 24 points 22 hours ago (1 children)

Well I've got the name for my autobiography now.

[–] runeko 4 points 9 hours ago

"Specifically Annoying" or "Plausible Bullshit"? I'd buy the latter.

[–] [email protected] 83 points 1 day ago (3 children)

All programs can be written with one less line of code. All programs have at least one bug.

By the logical consequences of these axioms every program can be reduced to one line of code - that doesn't work.

One day AI will get there.

[–] [email protected] 8 points 10 hours ago

The ideal code is no code at all

[–] [email protected] 12 points 15 hours ago

On one line of code you say?

*search & replaces all line breaks with spaces*
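For the record, that trick is itself a one-liner; a shell sketch, with a throwaway `program.py` standing in for your codebase:

```shell
# Throwaway stand-in for a real codebase (hypothetical content).
printf 'first line\nsecond line\n' > program.py
# Replace every line break with a space: behold, a one-line program.
tr '\n' ' ' < program.py > oneliner.py
cat oneliner.py
```

Whether the result still runs is, of course, the joke.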

[–] [email protected] 11 points 21 hours ago (2 children)

All programs can be written with one less line of code. All programs have at least one bug.

The humble "Hello world" would like a word.

[–] [email protected] 20 points 15 hours ago (2 children)

Just to boast my old-timer credentials.

There is a utility program in IBM’s mainframe operating system, z/OS, that has been there since the 60s.

It has just one assembly code instruction: a BR 14, which means basically ‘return’.

The first version was bugged and IBM had to issue a PTF (patch) to fix it.

[–] [email protected] 9 points 8 hours ago (1 children)

Okay, you can't just drop that bombshell without elaborating. What sort of bug could exist in a program which contains a single return instruction?!?

[–] [email protected] 2 points 6 hours ago

It didn’t clear the return code. In mainframe jobs, successful executions are expected to return zero (in the machine R15 register).

So in this case, fixing the bug required adding an instruction instead of removing one.
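For anyone curious: the program described is the famous IEFBR14, and a rough HLASM sketch of the patched version (from memory, so treat it as illustrative) looks like this:

```
IEFBR14  SR    15,15          the added fix: clear R15 so the return code is 0
         BR    14             return via the address in R14
         END
```

One instruction became two, doubling the size of the program to fix its one bug.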

[–] [email protected] 2 points 6 hours ago

Reminds me of how in some old Unix system, /bin/true was a shell script.

...well, if it needs to just be a program that returns 0, that's a reasonable thing to do. An empty shell script returns 0.

Of course, since this was an old proprietary Unix system, the shell script had a giant header comment that said this is proprietary information and if you disclose this the lawyers will come at ya like a ton of bricks. ...never mind that this was a program that literally does nothing.
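The claim is easy to check in any POSIX shell (the file name here is made up):

```shell
# An empty file, run as a shell script, exits with status 0 -- a minimal /bin/true.
printf '' > mytrue
sh mytrue
echo $?   # prints 0
```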

[–] [email protected] 9 points 16 hours ago (1 children)

You can fit an awful lot of Perl into one line too if you minimize it. It'll be completely unreadable to almost anyone, but it'll run.

[–] [email protected] 2 points 2 hours ago

Qrpff says hello. Or, rather, decrypts DVD movies in 472 bytes of code, 531 if you want the fast version that can do it in real time. The Wikipedia article on it includes the full source code of both.

https://wikipedia.org/wiki/Qrpff

[–] [email protected] 258 points 1 day ago (17 children)

Code that does not work is just text.

[–] [email protected] 3 points 15 hours ago

All non-code text is just code for a yet-undiscovered programming language

[–] [email protected] 46 points 1 day ago (2 children)

I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare

[–] [email protected] 35 points 1 day ago* (last edited 1 day ago) (3 children)

It's like having a junior developer with a world of confidence who just changes shit and spends hours breaking things and trying to fix them, while we pay big tech for the privilege of watching the chaos.

I asked ChatGPT to give me a simple squid proxy config today that blocks everything except HTTPS. It confidently gave me one, but of course it didn't work. It let HTTP through, and despite many attempts to get a working config, it just failed.

So yeah, in the end I have to learn squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that....
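For what it's worth, the ACL shape squid wants for "HTTPS only" is roughly the following. A hedged sketch, not tested against a live squid; the `localnet` ACL is assumed to be defined as in the stock squid.conf:

```
# Allow only CONNECT tunnels to port 443; deny everything else.
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny !CONNECT            # rejects plain-HTTP GET/POST requests
http_access deny CONNECT !SSL_ports  # rejects tunnels to non-443 ports
http_access allow localnet           # 'localnet' assumed defined elsewhere
http_access deny all
```

The order matters: squid applies `http_access` rules top to bottom and stops at the first match.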

[–] [email protected] 4 points 9 hours ago

I have a friend who swears by LLMs; he says they help him a lot. I once watched him do it, and the experience was exactly as you described. He wasted a couple of hours fighting with the bullshit generator just to do everything himself anyway. I asked him wouldn't it be better not to waste the time, but he didn't really see the problem; he gaslit himself into believing that fighting with the idiot machine helped.

[–] [email protected] 21 points 1 day ago (2 children)

It confidently gave me one

IMO, that's one of the biggest "sins" of the current LLMs: they're trained to generate words that make them sound confident.

[–] [email protected] 1 points 2 hours ago

Sycophants.

[–] [email protected] 9 points 22 hours ago (4 children)

They aren’t explicitly trained to sound confident, that’s just how users tend to talk. You don’t often see “I don’t know but you can give this a shot” on Stack Overflow, for instance. Even the incorrect answers coming from users are presented confidently.

Funnily enough, lack of confidence in response is something I don’t think LLMs are currently capable of, since it would require contextual understanding of both the question, and the answer being given.

[–] [email protected] 70 points 1 day ago (29 children)

To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.

LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.

[–] [email protected] 4 points 13 hours ago (1 children)

I am with you on this one. It is also very helpful with argument-heavy libraries like plotly. If I ask a simple question like "in plotly, how do I do this and that to the xaxis?", it generally gives correct answers, saving me 5-10 minutes of internet research or reading documentation for functions with 1000 inputs. I even managed to get it to render a simple scene of a cloud of points with some interactivity in three.js after about 30 minutes of back and forth. Not knowing much JavaScript, that would have taken me at least a couple of hours. So yeah, it can be useful as an assistant to someone who already knows coding (so the person can vet and debug the code).

Though if you weigh the pros and cons of how LLMs are used (tons of fake internet garbage, tons of energy used, very convincing disinformation bots), I am not convinced the benefits are worth the damage.

[–] [email protected] 1 points 3 hours ago (1 children)

Why do you want AI to save you from learning and understanding the tools you use?

[–] [email protected] 1 points 1 hour ago* (last edited 1 hour ago)

If you do it through AI, you can still learn. After all, I go through the code to understand what is going on. And for not-so-complex tasks, LLMs are good at commenting the code (though they can bullshit from time to time, so you have to approach it critically).

But anyway, the stuff I ask LLMs for is generally just one-off tasks. If I need to use something more frequently, I do prefer reading up for a more in-depth understanding.
