I'm a dev with 10+ (cumulative) years of experience. While I've never used GitHub Copilot specifically, I've been using LLMs (as well as AI image generators) on a daily basis, mostly for non-dev things, such as analyzing my human-written poetry to get insights for my own writing. I've done the same for code I wrote, asking LLMs to "analyze and comment" on it, for the sake of insights. There were moments when I asked them for code snippets, and almost every snippet they generated actually worked or needed only a few fixes.
They're getting good at this, but not good enough to really replace my own coding and analysis. Instead, they're getting genuinely better at poetry (maybe because their training data is mostly books and literary works) and at sentiment analysis. I use many LLMs simultaneously in order to compare them.
(One of them even refused to comment on a snippet using the `explode` function: "Sorry, can't comment on texts alluding to dangerous practices such as involving explosives." I mean, WHAT?!?!) As you can see, I've tried almost all of them. In summary, while it's good to have such tools, they should never replace human intelligence... or, at least, they shouldn't...
Problem is, dev companies generally focus on "efficiency" over "efficacy", demanding the shortest deadlines while expecting near-perfection. Understandable demands, but humans are humans, not robots. We need time to deliver, and we need to walk carefully through every step before finally deploying something (especially big things), or it turns into XGH programming (eXtreme Go Horse). And machines can't do that so well, yet. For now, LLM-driven development is XGH: really fast, but far from coherent about the big picture (be it a platform, a module, a website, etc.).
That's not a problem, and it's not Claude's main problem either.
Claude's main problem is that it's frequently down, unreliable, and extremely buggy. Overall I think it might be better than ChatGPT and Copilot, but it's simply so unstable that it becomes unusable.
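If you want to keep using a flaky endpoint anyway, a generic client-side workaround (just a standard pattern, not anything vendor-specific) is retrying with exponential backoff plus jitter:

```python
import random
import time

def call_with_retries(fn, attempts=5, base_delay=1.0):
    """Call fn(), retrying on failure with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** attempt + random.random())

# Usage: wrap any unreliable API call, e.g.
# result = call_with_retries(lambda: client.chat.completions.create(...))
```

It doesn't fix the instability, but it at least turns intermittent outages into slower responses instead of hard failures.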