this post was submitted on 04 Aug 2024
Programming
That, and LLMs confidently making up "facts". And since LLMs are the kind of AI with the most direct exposure to users, this is what happens.
I believe there are uses for LLMs beyond being "fact bots". I see them more as a "universal text processor": you already have a text, and you want it rewritten in a different style or language. Or you want to extract pieces of information from a text into something machine-readable. Or maybe convert instructions in natural language into machine instructions.
All the facts are already at hand; the model just converts the given information into something else.
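The "extract into something machine-readable" part can be sketched like this. Everything here is hypothetical: `call_llm` stands in for whatever model API you use, and `fake_reply` stubs the model's answer. The point is that you never trust the model's text directly; you parse and validate it before anything downstream sees it:

```python
import json

def extract_fields(text: str, call_llm) -> dict:
    """Ask an LLM (hypothetical call_llm callable) to turn free text into
    machine-readable JSON, then validate the result instead of trusting it."""
    prompt = (
        'Extract the sender name and the date from the text below. '
        'Reply with JSON only, keys: "name", "date".\n\n' + text
    )
    raw = call_llm(prompt)
    data = json.loads(raw)          # fails loudly on non-JSON output
    for key in ("name", "date"):
        if key not in data:
            raise ValueError(f"missing key: {key}")
    return data

# Stubbed model reply for illustration; a real API call would go here.
fake_reply = lambda prompt: '{"name": "Alice", "date": "2024-08-04"}'
print(extract_fields("Alice wrote on 2024-08-04 ...", fake_reply))
```

If the model rambles or drops a field, `json.loads` or the key check raises, instead of garbage silently flowing into the rest of the system.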
At work, we recently talked about AI. One use case mentioned (suggested by an AI consulting firm, not by us or specifically for us) was meeting summaries and extracting TODOs from them.
My stance is that AI could be useful for summaries that show which topics were talked about. But I would never trust it to extract all the significant points, TODOs, or agreements. You still need humans for that, with explicit agreement on and confirmation of the list in or after the meeting.
It can also help to transcribe meetings. It could even translate them. Those things can be useful. But summarization should never be considered factual extraction of the significant points. Especially in a business context, or anything else where you actually care about being able to trust information.
I wouldn't [fully] trust it with transforming facts either. It can work where you can spot inaccuracies (long text, lots of context), or where you don't care about them.
Natural language instructions to machine instructions? I'd certainly be careful with that, and would want to both constrain the context and test that it works well enough for the use case.
I'm imagining it to be quite limited. Mostly for talking with appliances in a way that's more advanced than today. Instructions like "gradually dim down the lights in the living room until bedtime", or "dim down the lights in the living room when we watch a movie on TV".
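One way to keep that limited in practice is to validate whatever structured command comes out before it ever reaches a device. This is just a sketch; the action/room names and the dict shape are made up for illustration, and it doesn't matter whether an LLM or anything else produced the command:

```python
# Whatever produces the structured command (an LLM or otherwise),
# only a small, validated set of commands ever reaches the appliance.
ALLOWED_ACTIONS = {"dim", "brighten", "off"}
ALLOWED_ROOMS = {"living room", "bedroom", "kitchen"}

def validate_command(cmd: dict) -> dict:
    """Gate a structured lighting command against a fixed whitelist."""
    if cmd.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {cmd.get('action')}")
    if cmd.get("room") not in ALLOWED_ROOMS:
        raise ValueError(f"unknown room: {cmd.get('room')}")
    level = cmd.get("level", 0)
    if not 0 <= level <= 100:
        raise ValueError(f"level out of range: {level}")
    return cmd

# E.g. a model turned "dim the living room lights for the movie" into this
# dict; we still gate it before sending anything to the device.
safe = validate_command({"action": "dim", "room": "living room", "level": 20})
```

Inaccurate output then fails closed (an error, or a clarifying question back to the user) instead of doing something unexpected to your lights.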
We had plenty of more deterministic tools for parsing human-readable text into machine-readable form long before LLMs.
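For the limited command vocabulary above, a fixed grammar is one such deterministic tool. A toy version (the phrasings and the default level are invented for the example):

```python
import re

# A deterministic alternative: a fixed grammar accepts only known
# phrasings, and anything else is rejected instead of guessed at.
PATTERN = re.compile(
    r"(?:please\s+)?dim (?:down )?the lights in the (?P<room>[a-z ]+?)"
    r"(?: to (?P<level>\d{1,3})%)?$"
)

def parse(command: str):
    m = PATTERN.match(command.lower().strip())
    if not m:
        return None  # reject rather than hallucinate an interpretation
    level = int(m.group("level")) if m.group("level") else 50
    return {"action": "dim", "room": m.group("room"), "level": level}

print(parse("Dim the lights in the living room to 40%"))
# -> {'action': 'dim', 'room': 'living room', 'level': 40}
```

Less flexible than an LLM, but its failure mode is "I didn't understand that", never a confidently wrong command.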