this post was submitted on 27 Jan 2024
123 points (90.7% liked)
Arch Linux
I have had some luck asking it follow-up questions to explain what each line does. LLMs are decent at that and might even discover bugs.
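As a rough sketch of what that follow-up looks like if you script it (the command and model name here are just placeholders, and I'm assuming the official OpenAI Python client):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

suggested_command = "sudo pacman -Syu"  # placeholder: whatever command it gave you

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                f"You suggested this command:\n{suggested_command}\n"
                "Explain what each part does and point out anything risky."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```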
You could also copy the conversation and paste it into another instance. It's much easier to critique than to create, and that holds true for AI as well, so the second instance can give feedback like "I would have suggested x" or "be careful with commands like y".
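The second-instance trick can be scripted the same way. Something like this (again just a sketch; the transcript is made up and the model name is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Made-up transcript of the first conversation (placeholder content)
transcript = (
    "User: How do I update my Arch system?\n"
    "Assistant: Run: sudo pacman -Syu"
)

review = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Here is a conversation between a user and another assistant:\n\n"
                f"{transcript}\n\n"
                "Critique the suggested commands. Would you have suggested "
                "something different? Flag anything dangerous."
            ),
        },
    ],
)
print(review.choices[0].message.content)
```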
This feels like a lot of hoops to jump through to avoid reading a wiki page thoroughly. But if you want to use GPT, this may work.
I’ve also tried that, but with mixed results. Generally speaking, GPT is too proud to admit its mistakes. Occasionally I’ve managed to successfully point out a mistake, but usually it just thinks I’m trying to gaslight it.
Asking follow-up questions works really well as long as you avoid turning it into a debate. When I notice that GPT is contradicting itself, I just keep that information to myself and make a mental note not to trust it. Trying to argue with something like GPT is usually just an exercise in futility.
When you have some background knowledge in the topic you’re discussing, you can usually tell when GPT is going totally off the rails. However, you can’t dive into every topic out there, so using GPT as a shortcut is very tempting. That’s when you end up playing with fire, because you can’t really tell if GPT is pulling random nonsense out of its ass or if what it’s saying is actually based on something real.