this post was submitted on 12 Apr 2025
1271 points (98.5% liked)

Programmer Humor

[–] [email protected] 29 points 1 week ago* (last edited 1 week ago) (3 children)

AI is very neat, but it has clear, obvious limitations. I'm not a programmer, and I could already tell you tons of ways I've tripped Ollama up.

But it's a tool, and the people who can use it properly will succeed.

I'm not saying it's a tool for programmers, but it has uses.

[–] [email protected] 25 points 1 week ago (3 children)

I think it's most useful as an (often wrong) line completer, more than anything else. It can take in an entire file and try to figure out the rest of what you're currently writing. Its context window simply isn't big enough to understand an entire project.

That, and unit tests. Since unit tests are by design isolated, small, and unconcerned with the larger project, AI has at least a fighting chance of competently producing them. That still takes significant hand-holding, though.

[–] [email protected] 14 points 1 week ago (1 children)

I've used them for unit tests, and they still make some really weird decisions sometimes. Like building an array of JSON objects that gets fed into one super long test with a bunch of switch conditions. When I saw that one, I scratched my head for a bit.
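For the sake of illustration, here's a rough Python sketch of the kind of structure being described (the names, operations, and data are all hypothetical; the point is the shape, one monolithic data-driven test instead of several small focused ones):

```python
import unittest

# A pile of JSON-like case objects feeding a single test -- the
# antipattern described above. A failure doesn't tell you which
# behaviour broke without extra digging.
CASES = [
    {"kind": "add", "args": [2, 3], "expected": 5},
    {"kind": "mul", "args": [2, 3], "expected": 6},
    {"kind": "neg", "args": [4], "expected": -4},
]

def apply_op(kind, args):
    # Hypothetical function under test, dispatching like a switch.
    if kind == "add":
        return args[0] + args[1]
    if kind == "mul":
        return args[0] * args[1]
    if kind == "neg":
        return -args[0]
    raise ValueError(f"unknown kind: {kind}")

class TestEverythingAtOnce(unittest.TestCase):
    def test_all_cases(self):
        for case in CASES:
            # Every behaviour funnels through one assertion loop.
            self.assertEqual(apply_op(case["kind"], case["args"]),
                             case["expected"])
```

A more readable version would be one `test_add`, `test_mul`, `test_neg` each, or at least `unittest`'s `subTest` so failures identify the offending case.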

[–] [email protected] 5 points 1 week ago (1 children)

I most often just get it straight up misunderstanding how the test framework itself works, but I've definitely had it make strange decisions like that. I'm a little convinced that the only reason I put up with it for unit tests is because I would probably not write them otherwise haha.

[–] [email protected] 4 points 1 week ago

Oh, I am right there with you. I don't want to write tests because they're tedious, so I backfill with the AI at least starting me off on it. It's a lot easier for me to fix something (even if it turns into a complete rewrite) than to start from a blank file.

[–] [email protected] 4 points 1 week ago (2 children)

Isn't writing tests with AI a really bad idea? I mean, the whole point of writing separate tests is hoping that you won't make the same mistake twice, and therefore catching any behavior in the code that doesn't match your intent. But if you use an LLM to write a test using said code as context (instead of the original intent you'd use yourself), there's a risk that it'll just write a test case that makes sure the code contains the wrong behavior.

Okay, it might still be okay for regression testing, but you're still missing most of the benefit you'd get by writing the tests manually. Unless you only care about closing tickets, that is.

[–] [email protected] 5 points 1 week ago

"Unless you only care about closing tickets, that is."

Perfect. I'll use it for tests at work then.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

I've used it most extensively for non-professional projects, where if I weren't using this kind of tooling to write tests, they simply wouldn't get written. That means no tickets to close, either. That said, I'm aware that the AI is almost always, at best, testing for regression (I have had it correctly realize my logic was wrong and write tests that caught it, but that's by no means reliable). Part of the "hand-holding" I mentioned involves making sure it has sufficient coverage of use cases and edge cases, and that what it expects to be the correct result actually matches my intent.

I essentially use the AI to generate a variety of scenarios and complementary test data, then evaluate their validity and expand from there.

[–] [email protected] 2 points 1 week ago

It's great for verbose log statements

[–] [email protected] 9 points 1 week ago (1 children)

Funny. Every time someone points out how god awful AI is, someone else comes along to say "It's just a tool, and it's good if someone can use it properly." But nobody who uses it treats it like "just a tool." They think it's a workman they can claim the credit for, as if a hammer could replace the carpenter.

Plus, the only people good enough to fix the problems caused by this "tool" don't need to use it in the first place.

[–] [email protected] 2 points 1 week ago

But nobody who uses it treats it like "just a tool."

I do. I use it to tighten up some lazy code that I wrote, or to help me figure out a potential flaw in my logic, or to suggest a "better" way to do something if I'm not happy with what I originally wrote.

It's always small snippets of code and I don't always accept the answer. In fact, I'd say less than 50% of the time I get a result I can use as-is, but I will say that most of the time it gives me an idea or puts me on the right track.

[–] [email protected] 2 points 1 week ago

This. I have no problem combining a couple of endpoints in one script and explaining to QWQ what my final CSV file, built from those JSONs, should look like. But try to go beyond that, push past a 32k context, or show it multiple scripts, and the poor thing has no clue what to do.

If you can manage your project and break it down into multiple simple tasks, you can build something complicated via an LLM. But that requires some knowledge of coding, and at that point chances are you'd have better luck writing the whole thing yourself.
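The kind of "couple of endpoints into a CSV" task described above might look like the Python sketch below. The field names and payloads are invented stand-ins for real JSON endpoint responses (real code would fetch them over HTTP, e.g. with `urllib` or `requests`):

```python
import csv
import io

# Stand-ins for two JSON endpoint responses (hypothetical shapes).
users = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
orders = [{"user_id": 1, "total": 9.5}, {"user_id": 2, "total": 4.0}]

def merge_to_csv(users, orders):
    # Index order totals by user id, then join onto the user rows.
    totals = {o["user_id"]: o["total"] for o in orders}
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "name", "total"])
    writer.writeheader()
    for u in users:
        writer.writerow({"id": u["id"],
                         "name": u["name"],
                         "total": totals.get(u["id"], 0)})
    return buf.getvalue()

print(merge_to_csv(users, orders))
```

A self-contained, well-specified transform like this is exactly the scale where an LLM tends to do fine; the trouble starts when the join logic spans several scripts it can't see at once.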