this post was submitted on 15 Sep 2024
895 points (98.3% liked)
Technology
you are viewing a single comment's thread
Wow, the text generator that doesn't actually understand what it's "writing" is making mistakes? Who could have seen that coming?
I once asked one to write a basic 50-line Python program (just to flesh things out), and it made so many basic errors, the kind any first-year CS student would catch. Nobody should trust LLMs with anything related to security, FFS.
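(For a sense of what I mean, here's a hypothetical illustration, not the actual output it gave me: the classic mutable-default-argument bug, the sort of thing an intro course drills into you.)

```python
# Hypothetical example of a "first-year" level bug, NOT the code the LLM
# actually produced: a mutable default argument shared across calls.
def add_item(item, items=[]):
    items.append(item)          # the same list object is reused on every call
    return items

print(add_item("a"))  # ['a']
print(add_item("b"))  # ['a', 'b']  <- surprising if you expected a fresh list

# The usual fix: default to None and create a new list inside the function.
def add_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```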
I wish we could say the students will figure it out, but I've had interns ask for help and then I've watched them try to solve problems by repeatedly asking ChatGPT. It's the scariest thing - "Ok, let's try to think about this problem for a moment before we - ok, you're asking ChatGPT to think for a moment. FFS."
Altering the prompt will certainly give a different output, though. Ok, maybe "think about this problem for a moment" is a weird prompt; I see how it actually doesn't make much sense.
However, including something along the lines of "think through the problem step-by-step" in the prompt really makes a difference, in my experience. The LLM is then more likely to include explicit "reasoning" sections, which tends to produce output that's more correct or of higher quality.
This, to me, seems like a simple precursor to the way a model like the new o1 from OpenAI (partly) works: it "thinks" about the prompt behind the scenes, presenting to the user only the final output plus a generated summary (hidden by default) of the raw "thinking".
Of course, it's unnecessary - maybe even stupid - to include nonsense or small talk in LLM prompts (unless it has proven to actually enhance the output you want), but since (some) LLMs happen to be lazy by design, telling them what to do (like reasoning) can definitely make a big difference.
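A rough sketch of what that prompt tweak looks like in practice, using the OpenAI Python client as an example (the model name, question, and exact wording are just placeholders, not a recommendation):

```python
# Sketch of the "think step-by-step" prompt tweak, assuming the OpenAI
# Python client and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
question = "Is 2**31 - 1 prime? Answer yes or no."

# Plain prompt: the model tends to jump straight to an answer.
plain = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)

# Same question, but explicitly asking for step-by-step reasoning first.
step_by_step = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": question
        + " Think through the problem step by step before giving a final answer.",
    }],
)

print(plain.choices[0].message.content)
print(step_by_step.choices[0].message.content)
```

The second call typically spends tokens on visible reasoning before committing to an answer, which is exactly the behaviour described above.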
And that's why I'm the one who fixes the PC when it breaks... because even good programmers may consider the PC a magic box if they've never turned a screwdriver in their lives...