this post was submitted on 27 May 2025
1933 points (99.4% liked)

Programmer Humor

you are viewing a single comment's thread
[–] wischi 4 points 1 day ago (1 children)

I don't think it's cherry-picking. Why would I trust a tool with far more complex logic when it can't even prevent three crosses in a row? Writing pretty much any software that does more than render a few buttons requires a lot of planning and thinking, and those models clearly don't have the capability to plan and think when they lose tic-tac-toe games.

[–] [email protected] -4 points 1 day ago (2 children)

Why would I trust a drill press when it can’t even cut a board in half?

[–] wischi 12 points 1 day ago* (last edited 1 day ago) (1 children)

A drill press (or its inventors) doesn't claim it can do that, but LLM vendors claim their models can replace humans on a lot of thinking tasks. They even brag about benchmark results, claim Bachelor's, Master's and PhD-level intelligence, and call them "reasoning" models, yet those models still fail to beat my niece at tic-tac-toe, who, by the way, doesn't have a PhD in anything 🤣

LLMs are typically good at things they saw a lot of during training. If you are writing software, there are certainly things the LLM saw a lot of during training. But that is actually the biggest problem: it will happily generate code that looks OK, even during PR review, but might blow up in your face a few weeks later.
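To illustrate the kind of failure I mean (a hypothetical example of mine, not something from an actual PR), code can read perfectly fine in a quick review and still carry a latent bug, like Python's shared mutable default argument:

```python
# Hypothetical illustration: this passes a casual PR review, but the default
# list is created once and shared across calls, so state leaks between them.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

print(add_tag("urgent"))  # ['urgent']
print(add_tag("later"))   # ['urgent', 'later']  <- weeks later: "why are old tags showing up?"
```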

If they can't handle things they did see during training, just sparsely (like tic-tac-toe), they won't be able to produce code you should use in production. I wouldn't trust any junior dev who doesn't place their O right next to the opponent's two Xs.
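For what it's worth, the "thinking" needed to block two Xs fits in a few lines; a minimal sketch (mine, with made-up helper names, not anything from the thread):

```python
# All eight winning lines on a 3x3 board, indexed 0..8.
LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def blocking_move(board, opponent="X"):
    """Return the index that blocks an immediate opponent win, or None.

    `board` is a list of 9 cells containing "X", "O" or "" (empty).
    """
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(opponent) == 2 and cells.count("") == 1:
            return (a, b, c)[cells.index("")]
    return None

if __name__ == "__main__":
    # X threatens the top row; the only sane reply is index 2.
    board = ["X", "X", "", "", "O", "", "", "", ""]
    print(blocking_move(board))  # -> 2
```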

[–] [email protected] 1 points 1 day ago (1 children)

Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.

I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)

[–] wischi 3 points 1 day ago

Totally agree with that, and I don't think anybody would see it as controversial. LLMs are actually good at a lot of things, but thinking is not one of them, and they're typically not much help once you're an expert yourself. That's why LLMs know more about human anatomy than I do, but probably not more than most people with a medical degree.

[–] [email protected] 2 points 1 day ago (1 children)

It’s futile even trying to highlight the things LLMs do very well as Lemmy is incredibly biased against them.

[–] wischi 3 points 1 day ago (1 children)

I can't speak for Lemmy, but I'm personally not against LLMs and I use them on a regular basis. As Pennomi said (and I totally agree), LLMs are a tool, and we should use that tool for the things it's good at. But "thinking" is not one of the things LLMs are good at, and software engineering requires a ton of thinking. Of course there are tasks (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/IntelliSense, macros, and code snippets/templates already help with those, and I was never bottlenecked by my typing speed when writing software.

The bottleneck was always the time I needed to plan the structure of the software and to design good, correct abstractions and the overall architecture. Exactly the things LLMs can't do.

Copilot even fails to stick to the coding style used in the same file, simply because it saw a different style more often during training.

[–] [email protected] 0 points 20 hours ago (1 children)

"I'm not again LLMs I just never say anything useful about them and constantly point out how I can't use them." The other guy is right and you just prove his point.

[–] wischi 1 points 17 hours ago* (last edited 17 hours ago)

I don't see how that follows, because I did point out in another comment that they are very useful if used like search engines, an interactive Stack Overflow, or Wikipedia.

LLMs are extremely knowledgeable (as in they "know" a lot) but are completely dumb.

If you want to anthropomorphize it, current LLMs are like a person who read the entire internet and remembered a lot of it, but is still too stupid to win or draw at tic-tac-toe.

So there is value in LLMs, if you use them for their knowledge.