this post was submitted on 22 Aug 2023
765 points (95.7% liked)

Technology

60123 readers
2810 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
 

OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling's Harry Potter series::A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

The analogy talks about mixing samples of music together to make new music, but that's not what is happening in real life.

The computers learn human language from the source material, but they are not referencing the source material when creating responses. They create new, original responses which do not appear in any of the source material.

[–] [email protected] 5 points 1 year ago (1 children)

"Learn" is debatable in this usage. The model is trained on data, which produces a set of values that, when applied, generate output resembling human speech. It's just doing math, though. It's not like how a human learns; it doesn't care about context or meaning or anything else.
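A toy example of the "set of values" idea (my own illustration, vastly simpler than an LLM): a bigram model reduces its training text to a table of transition counts, then generates by sampling from those numbers alone, never consulting the original text again.

```python
# Hypothetical sketch: a tiny bigram "language model".
# Training produces only a table of numbers; generation uses that table,
# not the source text. Real LLMs differ enormously in scale and method.
import random
from collections import defaultdict


def train(corpus):
    # Count word -> next-word transitions; this table of counts IS the model.
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts


def generate(model, start, length, rng=None):
    # Sample a continuation from the stored counts, one word at a time.
    rng = rng or random.Random(0)
    out = [start]
    for _ in range(length - 1):
        nxt = model.get(out[-1])
        if not nxt:
            break
        words, weights = zip(*nxt.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)


model = train("the cat sat on the mat and the cat ran")
print(generate(model, "the", 5))
```

The generated sequence may never appear verbatim in the corpus, which is the distinction the comment above is drawing, even though "learning" here is nothing but counting and sampling.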

[–] [email protected] 0 points 1 year ago

Okay, but in the context of this conversation about copyright, I don't think the learning part is as important as the reproduction part.