I use an LLM for stuff that I find tedious and can easily verify to be correct, e.g. creating arguments for a script using argparse, or turning a table from a PDF into a Python list. My experiments trying to get any level of reliability on more complex tasks have been infuriating failures. They invent parameters and functionality that don't exist, swear blind that something is true but can't provide accurate references (or provide references that directly contradict what they just said), and so on.
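For example, the kind of argparse boilerplate I mean looks roughly like this (the script purpose and flags here are just made-up examples, not from any real project):

```python
import argparse

# Tedious-but-verifiable boilerplate: easy to eyeball for correctness
# once generated, since it's just declarative flag definitions.
parser = argparse.ArgumentParser(description="Convert a data file.")
parser.add_argument("input", help="path to the input file")
parser.add_argument("-o", "--output", default="out.csv",
                    help="where to write the result (default: out.csv)")
parser.add_argument("-v", "--verbose", action="store_true",
                    help="print progress information")
args = parser.parse_args()
```

Whether the generated flags actually do what the help text claims takes two seconds to check by running the script with `--help`, which is exactly why this kind of task works and more complex ones don't.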