this post was submitted on 03 Mar 2025
761 points (99.4% liked)

But the explanation and Ramirez’s promise to educate himself on the use of AI weren’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.

Falling victim to this a year or more after the first guy made headlines for the same thing is just stupidity.

[–] [email protected] 5 points 19 hours ago (2 children)

It can and will lie. It has admitted to doing so after I probed it long enough about the things it was telling me.

[–] [email protected] 16 points 17 hours ago* (last edited 17 hours ago) (2 children)

Lying requires intent. Currently popular LLMs build responses one token at a time: when a model starts writing a sentence, it doesn't know how that sentence will end, and therefore can't have an opinion about its truth value. (I'd go further and claim it can't really "have an opinion" about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after that output has been generated, while producing the next token.
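To make the token-by-token point concrete, here's a rough Python sketch of what an autoregressive sampling loop does. This isn't a real model or any particular library's API; the vocabulary and scoring function are made up purely for illustration:

```python
# Toy sketch of autoregressive decoding: the "model" only ever answers
# "which token comes next?" given the tokens so far. It never plans the
# whole sentence, and there is no separate step that checks truth.
import random

VOCAB = ["the", "court", "held", "that", "those", "cases", "are", "real", "fabricated", "."]

def next_token_probs(context):
    # Stand-in for a neural net forward pass: assigns a probability to
    # every vocabulary token given the context. Entirely made up here.
    weights = [1.0 + (hash((tuple(context), tok)) % 5) for tok in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt, max_new_tokens=12):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        # One token is sampled at a time; whatever comes later in the
        # sentence simply doesn't exist yet at this point.
        tokens.append(random.choices(VOCAB, weights=probs)[0])
        if tokens[-1] == ".":
            break
    return " ".join(tokens)

print(generate(["the", "court", "held", "that"]))
```

The sampling step has no notion of "true" or "false"; it only has relative likelihoods for the next token.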

"Admitting" that it's lying only proves that it has been exposed to "admission" as a pattern in its training data.

[–] [email protected] 14 points 17 hours ago

I strongly worry that humans really weren't ready for this "good enough" product to be their first "real" interaction with something that can easily pass for an AGI to anyone without near-philosophical knowledge of the difference between an AGI and an LLM.

It's obscenely hard to keep in mind the fact that it's a very good pattern-matching auto-correct when you're several comments deep into a genuinely, actually, no-lie, completely pointless debate against spooky math.

[–] [email protected] 2 points 14 hours ago

You can't ask it about itself, because it has no internal model of self and just bases any answer on data in its training set.