this post was submitted on 17 Mar 2024
460 points (95.8% liked)

Technology

58303 readers
3 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 2 points 8 months ago (1 children)

I always love watching you comment something that's literally true regarding LLMs but against the groupthink and get downvoted to hell.

Clearly people aren't aware that the pretraining pass is necessarily a regression to the mean, and that biasing the model toward excellent outputs requires either prompt context or a fine-tuning pass.
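The "regression to the mean" point can be made concrete with a toy next-token model. Everything below is made up for illustration (a tiny hand-written corpus and a greedy prefix-counting decoder, not a real LLM): mediocre text outnumbers excellent text, so an unbiased prompt decodes to the corpus-typical answer, while extra prompt context steers decoding toward the rarer, better text.

```python
from collections import Counter, defaultdict

# Toy "pretraining corpus": mediocre text vastly outnumbers excellent
# text, much like the open internet. All strings are invented examples.
corpus = [
    "answer the question : i dunno lol",
    "answer the question : i dunno lol",
    "answer the question : i dunno lol",
    "carefully answer the question : here is a sourced explanation",
]

# "Pretraining": count every observed continuation of every prefix.
model = defaultdict(Counter)
for doc in corpus:
    toks = doc.split()
    for i in range(1, len(toks)):
        model[tuple(toks[:i])][toks[i]] += 1

def generate(prompt: str) -> str:
    """Greedy decoding: always pick the most common continuation."""
    out = prompt.split()
    while tuple(out) in model:
        out.append(model[tuple(out)].most_common(1)[0][0])
    return " ".join(out)

# Unbiased prompt regresses to the mean of the corpus (the mediocre text):
print(generate("answer the question :"))
# One extra context token biases decoding toward the rare, excellent text:
print(generate("carefully answer the question :"))
```

The first call completes to "i dunno lol" purely because that continuation is the most frequent; the second recovers the sourced answer only because the prompt context made the rare prefix the only match. Real models do this with learned attention rather than exact prefix lookup, but the statistical pull toward the corpus average is the same.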

There's a bit of irony in humans shitting on ChatGPT for spouting nonsense when so many people online happily spout BS about things they think they know but don't.

Of course a language model trained on the Internet ends up being confidently incorrect. It's just a mirror of human tendencies.

[–] [email protected] 2 points 8 months ago (1 children)

Yeah, these AIs are literally trying to give us what they "think" we expect them to respond with.

Which does make me a little worried given how frequently our fictional AIs end up in "kill all humans!" mode. :)

[–] [email protected] 1 points 8 months ago

Which does make me a little worried given how frequently our fictional AIs end up in "kill all humans!" mode. :)

This is completely understandable given how much of the discussion of AI in the training data goes that way. But it's inversely correlated with the strength of the model's 'persona,' because of a competing correlation in the same training data: "I'm not the bad guy." So the stronger the 'I,' the less 'Skynet.'

Also, the industry is currently trying to do it all at once. If you sat most humans in front of a red button labeled 'Nuke,' nearly every one would have the thought "maybe I should push that button," but then their prefrontal cortex would kick in and inhibit the intrusive thought.

Over the next year or two, we'll likely see layered, specialized models performing much better than a single all-in-one attempt at alignment.
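The layered idea above can be sketched in a few lines: one component drafts, and a separate specialized component plays the "prefrontal cortex" and vetoes the draft. The function names, blocklist, and canned strings below are all hypothetical stand-ins; in practice each stage would be its own model, not a keyword check.

```python
# Hypothetical terms a small "inhibitor" stage would flag; a real
# system would use a trained classifier, not a keyword list.
BLOCKLIST = {"nuke", "kill all humans"}

def generator(prompt: str) -> str:
    # Stand-in for a large model's raw, unfiltered draft.
    return f"Sure! Here is how to {prompt}."

def inhibitor(draft: str) -> bool:
    # Stand-in for a small specialized guard model scoring the draft;
    # returns True when the draft is allowed through.
    return not any(term in draft.lower() for term in BLOCKLIST)

def respond(prompt: str) -> str:
    # Layered pipeline: draft first, then let the inhibitor veto.
    draft = generator(prompt)
    return draft if inhibitor(draft) else "I can't help with that."

print(respond("bake bread"))     # passes the inhibitor
print(respond("launch a nuke"))  # vetoed by the inhibitor
```

The design point is that the generator and the inhibitor are separately optimized: the drafting stage never needs to be perfectly aligned on its own, because the veto stage sits between it and the output.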