He's right that it's probably harder for AI to understand. But wrong in every other way possible. Human understanding should trump AI, at least while it's as unreliable as it currently is.
Maybe one day AI will know how to not bullshit, and everyone will use it, and then we'll start writing documentation specifically for AI. But that's a long way off.
Having AI not bullshit will require an entirely different set of algorithms than LLMs, or ML in general. ML by design approximates answers, and you don't use it for anything that's deterministic and has a single correct answer. So, in that regard, we're basically at square zero.
You can keep slapping checks on top of the random text prediction it gives you, but if you had a way of verifying whether an answer is really true for every case imaginable, then you could probably just use that verifier to generate the reply directly, and it can't be something that's also ML/random.
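To make that concrete, here's a minimal toy sketch of the "generate, then check" loop being described. `fake_llm`, `verify`, and `answer_with_checks` are made-up stand-ins for illustration, not any real API:

```python
import random

def fake_llm(question: tuple[int, int]) -> int:
    """Stand-in for a model: non-deterministic, sometimes wrong."""
    a, b = question
    return a + b + random.choice([-1, 0, 0, 1])  # guesses near the truth

def verify(question: tuple[int, int], answer: int) -> bool:
    """Deterministic check -- but it already knows how to compute a + b."""
    a, b = question
    return answer == a + b

def answer_with_checks(question: tuple[int, int], max_attempts: int = 10) -> int:
    """Slap a checker on top of the generator and retry until it passes."""
    for _ in range(max_attempts):
        candidate = fake_llm(question)
        if verify(question, candidate):  # reject bullshit after the fact
            return candidate
    raise RuntimeError("no verified answer within the attempt budget")

print(answer_with_checks((2, 3)))  # 5, after zero or more rejected guesses
```

The loop only filters guesses; all of the actual correctness lives in `verify`, which could have produced the answer on its own. That's the point: a checker that covers every case *is* the deterministic solution.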
You can't confidently say that, because nobody knows how to solve the bullshitting issue. Whatever does might end up being very similar to current LLMs.