[–] [email protected] 147 points 3 days ago* (last edited 3 days ago) (5 children)

The image is taken from Zhihu, a Chinese Quora-like site.

The prompt asks for a design for a certain app, and the response seems to suggest some pages, so it doesn't seem to reflect the text.

But this broadly aligns with my experience coding with LLMs. I was trying to upgrade ESLint from 8 to 9, asked ChatGPT to convert my ESLint config, and it proceeded to spit out complete garbage.

I thought this would be a good task for an LLM, because ESLint configs are very common and well documented and the transformation is very mechanical, but it just cannot do it. So I proceeded to read the docs and finished the migration in a couple of hours...
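For anyone unfamiliar with the migration, here's a minimal sketch of the kind of conversion involved (the rule and the `env` setting are placeholders; real configs have far more moving parts):

```js
// eslint.config.js — the new ESLint 9 "flat config" format.
// The legacy .eslintrc.json equivalent would have been roughly:
//   { "env": { "browser": true },
//     "extends": ["eslint:recommended"],
//     "rules": { "no-unused-vars": "warn" } }
import js from "@eslint/js";      // replaces "extends": ["eslint:recommended"]
import globals from "globals";    // "env" is gone; globals are listed explicitly

export default [
  js.configs.recommended,
  {
    languageOptions: {
      globals: { ...globals.browser },  // was "env": { "browser": true }
    },
    rules: {
      "no-unused-vars": "warn",         // rule entries themselves carry over as-is
    },
  },
];
```

Simple enough on paper; the catch is that every plugin exposes its flat-config presets in its own way.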

[–] [email protected] 72 points 3 days ago (7 children)

I asked ChatGPT for help with bare-metal 32-bit ARM (for the Pi Zero W) in C/ASM, emulated in QEMU for testing, and after the third iteration of "use printf for output" -> "there's no printf with bare metal as the target" -> "use solution X" -> "doesn't work" -> "use printf for output"... I had enough.

[–] [email protected] 2 points 1 day ago (1 children)

Can't you just send prints to serial?

[–] [email protected] 1 points 1 day ago

Yes, that was the plan, which ChatGPT refused to do.

[–] purplemonkeymad 55 points 2 days ago

Sounds like it's perfectly replicated the help forums it was trained on.

[–] [email protected] 17 points 3 days ago (1 children)

I used ChatGPT to help me make a package with SUSE's Open Build Service. It was actually quite good. I was pulling my hair out for a while until I noticed that the project I wanted to build had changed URLs and I was using an outdated one.

In the end I just had to get one last detail right. And then my ChatGPT 4 allowance dried up, they dropped me back down to 3, and it couldn't do anything. So I had to use my own brain, ugh.

[–] [email protected] 8 points 2 days ago

ChatGPT is the worst of the big chatbots at writing code. In my experience: DeepSeek > Perplexity > Gemini > Claude.

[–] [email protected] 5 points 2 days ago (1 children)

Yeah, you can tell it just ratholes on trying to force one concept to work rather than realizing it's not the correct concept to begin with.

[–] [email protected] 6 points 2 days ago

That’s exactly what most junior devs do when stuck. They rehash the same solution over and over, and it almost seems like LLMs trained on codebases infer that behavior from commit histories etc.

It almost feels like one of those "we taught him these tasks incorrectly as a joke" scenarios.

[–] [email protected] 3 points 2 days ago

That's what tends to happen

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago) (1 children)

Just FYI, QEMU makes it pretty painless to hook up gdb; you should look into that. I think you can also have it provide a memory-mapped UART for I/O, which you can use with newlib to get printf debugging.
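The UART half of that is tiny, something like the sketch below. The base address is an assumption on my part (it's what the BCM2835 docs give for the Pi Zero's PL011), so check it against your board and QEMU machine:

```c
/* Sketch: printf over a memory-mapped PL011 UART with newlib, no OS.
 * UART0_BASE is assumed from the BCM2835 (Pi Zero) docs — verify it. */
#include <stdint.h>

#define UART0_BASE 0x20201000u
#define UART0_DR (*(volatile uint32_t *)(UART0_BASE + 0x00)) /* data register */
#define UART0_FR (*(volatile uint32_t *)(UART0_BASE + 0x18)) /* flag register */
#define FR_TXFF (1u << 5)                                    /* TX FIFO full  */

static void uart_putc(char c)
{
    while (UART0_FR & FR_TXFF) { }  /* spin until the TX FIFO has room */
    UART0_DR = (uint32_t)c;
}

/* newlib funnels printf/puts through _write(), so providing this one
 * stub is enough to point stdout at the UART. */
int _write(int fd, const char *buf, int len)
{
    (void)fd;
    for (int i = 0; i < len; i++) {
        if (buf[i] == '\n')
            uart_putc('\r');  /* terminals usually want CRLF */
        uart_putc(buf[i]);
    }
    return len;
}
```

Then something like `qemu-system-arm -M raspi0 -serial stdio -kernel ...` should put the output on your terminal, and adding `-s -S` makes QEMU wait for gdb on port 1234.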

[–] [email protected] 1 points 2 days ago

The latter is what I tried, and also kinda what I wanted ChatGPT to do, which it refused.

[–] [email protected] 2 points 2 days ago

Did it at least try puts?

[–] [email protected] 19 points 3 days ago (3 children)

It's pretty random in terms of what is or isn't doable.

For me it's a big performance booster because I genuinely suck at coding and don't do too much complex stuff. As a "clean up my syntax" and a "what am I missing here" tool it helps, or at least helps in figuring out what I'm doing wrong so I can look in the right place for the correct answer on something that seemed inscrutable at a glance. I certainly can do some things with a local LLM I couldn't do without one (or at least without getting berated by some online dick who doesn't think he has time to give you an answer but sure has time to set you on a path towards self-discovery).

How much of a benefit it is for a professional, I couldn't tell. I mean, definitely not a replacement. Maybe it helps with reading something old or poorly commented quickly? Or with redundant tasks in very commonplace mainstream languages?

I don't think it's useless, but if you ask it to do something by itself you can't trust that it'll work without significant additional effort.

[–] [email protected] 13 points 3 days ago (1 children)

A lot of words to just say vibe coding

[–] [email protected] 1 points 2 days ago (1 children)

Sorta kinda. It depends on where you put that line. I think because online drama is fun, once we got to the "vibe coding" name we moved to the assumption that all AI assistance is vibe coding. But in practice there's the percentage of what you do that you know how to do, the percentage you vibe code because you can't figure it out off the top of your head, and the percentage you just can't do without researching, because the LLM can't do it effectively or the stuff it can do is too crappy to use as part of something else.

I think if the assumption is that you should just "git gud" and not take advantage of that grey zone where you can sooort of figure things out by asking an AI instead of going down a Google rabbit hole, then the performative AI hate is setting itself up for defeat, because there's a whole range of skill levels where that is actually helpful for some stuff.

If you want to deny that there's a difference between that and just making code soup by asking a language model to build you entire pieces of software... well, then you're going to be obviously wrong and a bunch of AI bros are going to point at the obvious way you're wrong and use that to pretend you're wrong about the whole thing.

This is basic online disinformation playbook stuff and I may suck at coding, but I know a thing or two about that. People with progressive ideas should get good at beating those one of these days, because that's a bad outcome.

[–] [email protected] 1 points 1 day ago (1 children)

People seem to disagree, but I like this. This is AI code used responsibly. You're using it to do more without outsourcing all your work to it, and you're still actively trying to learn as you go. You may not be "good at coding" right now, but with that mindset you'll progress fast.

[–] [email protected] 1 points 1 day ago

I think the effects of it are... a bit more nuanced than that, perhaps?

I can definitely tell there are places where I'm plugging knowledge gaps fast. I just didn't know how to do a thing, I did it AI-assisted once or twice, and now I don't need the AI assistance anymore because I understand how it works. Cool, that. And I wouldn't have learned it from traditional sources, because asking in public support areas would have led to being told I suck and should read the documentation, and/or to a 10-video YouTube series where you get to watch some guy type for seven hours.

But there are also places where AI assistance is never going to fill in the blanks for me, you know? Larger trends, good habits, technical details or best practices that just aren't going to come up from keeping around a smart autocorrect that can explain why something was wrong.

Honestly, in those spaces the biggest barrier is still what it always was: I don't necessarily want to "progress" in those areas, because I don't need to and it's not my job. I can automate a couple of things I didn't know how to automate before, and that's alright. For the rest, I will probably live with the software someone else has made, when it exists.

The problem is hubris, right? I know what I don't know and which parts I care to learn. That's fine. Coding-assistant LLMs are a valid tool for someone like that to slightly expand their reach, and I presume there are a lot of people like that. It's the random entrepreneurs who've been sold by big corpos on not needing a real programmer to build their billion-dollar app anymore who are going to crash and burn, and they may take some of the software industry down with them.

[–] [email protected] 8 points 2 days ago (1 children)

It catches things like spelling errors in variable names, does good autocomplete, and it’s useful to have it look through a file before committing it and creating a pull request.

It’s very useful for throwaway work like writing scripts and automations.

It’s useful, but not the 10x multiplier all the CEOs claim it is.

[–] [email protected] 2 points 2 days ago (2 children)

Fully agreed. Everybody is betting it'll get there eventually and jockeying for position ahead of the pack, but at the moment there isn't any guarantee that it'll get to where the corpos are assuming it already is.

Which is not the same as not having better autocomplete/spellcheck/"hey, how do I format this specific thing" tools.

[–] [email protected] 1 points 1 day ago (1 children)

I think the main barriers are context length (useful context, that is: GPT-4o has a "128k context" but it's mostly sensitive to the beginning and end of the window and blurry in the middle, which is consistent with other LLMs) and the training data simply not existing. How many large-scale, well-written, well-maintained projects are really out there? Orders of magnitude fewer than there are examples of "how to split a string in bash" or "how to set up validation in Spring Boot". We might "get there", but it'll take a whole lot of well-written projects first, written by real humans, maybe with the help of AI here and there. Unless, that is, we build it with the ability to somehow learn and understand faster than humans do.

[–] [email protected] 1 points 1 day ago

I don't know, some of these guys have access to a LOT of code, and even more debate about what a good codebase entails.

I think the other issue is more relevant. Even 128K tokens is not enough for something really big, and the memory and processing costs for that do skyrocket. People are trying to work around it with draft models and summarization models, trying to pick out the relevant parts of a codebase in one pass and then base the code generation on just that, and... I don't think that's going to work reliably at scale. The more chances you give a language model to lose its goddamn mind and start making crap up unsupervised, the more work it takes to shape what it spits out into something reasonable.

[–] [email protected] 4 points 2 days ago

Yeah, it’s still super useful.

I think the execs want to see dev salaries go to zero, but these tools make more sense as an accelerator, like giving an accountant Excel.

I get a bit more done faster, that’s a solid value proposition.

[–] vivendi 5 points 2 days ago (1 children)

It's not much use with a professional codebase as of now, and I say this as a big proponent of learning FOSS AI to stay ahead of the corpocunts

[–] [email protected] 5 points 2 days ago

Yeah, the AI corpos are putting a lot of effort into parsing big contexts right now. I suspect because they think (probably correctly) that coding is one of the few areas where they could get paid if their AIs didn't have the memory of a goldfish.

And absolutely agreed that making sure the FOSS alternatives keep pace is going to be important. I'm less concerned about hating the entire concept than I am about making sure they don't figure out a way to keep every marginally useful application behind a corporate ecosystem walled garden exclusively.

We've been relatively lucky in that the combination of PR brownie points and general crappiness of the commercial products has kept an incentive to provide a degree of access, but I have zero question that the moment one of these things actually makes money they'll enshittify the freely available alternatives they control and clamp down as much as possible.

[–] [email protected] 9 points 2 days ago

I use it sometimes, usually just to create boilerplate. For actual functionality it's hit or miss, and often it ends up taking more time to fix than it would have taken to write myself.

[–] [email protected] 0 points 1 day ago

I used Claude 3.7 with Roo Code to convert my ESLint configs to flat format and upgrade from v7 to v9, and it did it perfectly.

[–] [email protected] 4 points 2 days ago (2 children)

Having done it a few times, I wouldn't say it's accurate that this was a "mechanical" upgrade. They even have a migration tool, which you'd think could fully do the upgrade, but out of the probably 4-5 projects I've upgraded, the migration tool always produced a config that errored and needed several obscure manual changes to get working. All that to say, it seems like a particularly bad candidate for LLMs.

[–] [email protected] 1 points 2 days ago (1 children)

No, still "perfect" for llms. There's nuance, seeing patterns being used, it should be able to handle it perfectly. Enough people on stack overflow asked enough questions, if AI is like Google and Microsoft claim it is, it should have handled it

[–] [email protected] 1 points 2 days ago

I searched this issue and didn't find anything very helpful. The new config format can be written in many slightly different ways, and there are a lot of variables in how your plugins and presets can be set up. It made perfect sense to me that the LLM couldn't do this upgrade for OP, since one tiny mistake means it won't work at all, usually with a weird error.

[–] [email protected] 0 points 2 days ago* (last edited 2 days ago) (1 children)

Then I am quite confused about what LLMs are supposed to help me with. I am not a programmer, and I am certainly not a TypeScript programmer. This is why I postponed my ESLint upgrade for half a year: I don't have a lot of experience with TypeScript, besides one project in my college webdev class.

So if I can sit down for a couple of hours and port my rather simple ESLint config, which is arguably the most mechanical task I have seen in my limited programming experience, while the LLM can't produce anything close to correct, then I am rather confused about what "real programmers" would use it for...

People here say boilerplate code, but honestly I don't recall the last time I needed to write a lot of boilerplate.

I have also tried to use LLMs to debug SELinux and Docker containers on my homelab; unfortunately, they were absolutely useless at that as well.

[–] [email protected] 3 points 2 days ago* (last edited 2 days ago) (1 children)

With all due respect, how can you weigh in on programming so confidently when you admit to not being a programmer?

People tend to either despise or evangelize LLMs. To me, GitHub Copilot has a decent amount of utility. I only use the autocomplete feature, which does things like save me from typing the 2-5 predictable lines of code that devs type all the time. Instead of typing it all, I press tab. It's just a time saver. I have never used it like "write me a script or a function that does X" the way some people do. I'm not interested in that; it seems like a sad crutch whose output I'd need to customize so much anyway that I may as well skip the step.

Having said that, I'm noticing that Copilot's autocomplete seems to be getting worse over time. I'm not sure why it's worsening, but if it ever stops feeling worth it I'll drop it, no harm no foul. The binary thinkers tend to assume you're either a good dev who despises all forms of AI or an idiot who tries to have a robot write all your code for you. As a dev for the past 20 years, I see no reason to choose between those two opposites. It can be useful in some contexts.

PS. Did you try the ESLint 8 -> 9 migration tool? If your config was simple enough, it likely would've done all or almost all of the work for you... It didn't fully work for me; I had to resolve several errors, because I tend to add custom plugins, presets, and rules that differ across projects.
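If I remember right, running it is just a one-liner against your legacy config:

```
npx @eslint/migrate-config .eslintrc.json
```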

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago) (1 children)

Sorry, the language of my original post might seem confrontational, but that was not my intention; I'm trying to find value in LLMs, since people are so excited about them.

I am not a professional programmer, nor am I programming any industrial-sized project at the moment. I am a computer scientist, and my current research project does not involve much programming. But I do teach programming to undergrad and master's students, so I want to understand what the good use cases for this technology are, and when I can expect it to be helpful.

Indeed, I am frustrated by this technology, and that might have shifted my language further than I intended. Everyone is promoting this as a magically helpful tool for CS and math, yet I fail to see any good applications for either in my work, despite going back to it every couple of months or so.


I did try @eslint/migrate-config; unfortunately, it added a good amount of bloat and ended up not working.

So I just gave up and read the docs.

[–] [email protected] 2 points 2 days ago

Gotcha. No worries. I figured you were coming in good faith but wasn't certain. Who is pushing LLMs for programming that hard? In my bubble, which often includes Lemmy, most people HATE them for all uses. I get that tech bros and LinkedIn crazies probably push this tech for coding a lot, but outside of that, most devs I know IRL are either lukewarm on or dislike LLMs for dev work.