this post was submitted on 22 Jul 2023
169 points (92.0% liked)
Technology
I'm conflicted on a lot of this. At the end of the day, it seems like these LLMs are simulating human behavior to an extent: exposure to content, then generating similar content from it. Could Sarah Silverman be sued by comedians who influenced her comedy style and routines? Generally, no. I do understand the risk of letting these 'AI' run rampant and displace a huge portion of the creative space, which is bad, but where should the line be drawn? Is it only the fact that they were trained on material they don't own that people are challenging? What recourse will they have when an LLM is trained on wholly owned IP?
She’s suing for copyright infringement, basically, not the LLM emulating her style.
The LLMs were trained on books by her and many, many others that were never bought, because unauthorized copies had been uploaded to the web (it happens to every popular book).
Honestly, I don’t know if she has a case. Going after the people who illegally uploaded her book would be the proper route, but that’s nearly always impossible.
Long and short, LLMs benefited from illegal copies.
I see a lot of people claim the training data included copyrighted works, particularly books, because the model can provide a summary of one. But it can provide a summary of visual media too, and no one is claiming it’s sitting there watching films.
If the argument is that it has quite detailed knowledge of the book, that’s not convincing either. All it needs is a summary; it can fill in the blanks and get close enough that we can’t tell the difference. Nothing is original.
Your example is faulty. If you upload an illegal copy of a book and I read it, then tell people all about it, I am not committing copyright infringement.
How did you read it?
Did you access it where it was illegally posted online?
And in so doing, copy it locally in order to read it?
Guess what? According to copyright laws in the US, you just committed copyright infringement.
There are two separate claims.
One, that training itself is infringement, will hopefully be found to be without merit; otherwise it's a slippery slope to the death of fair use.
The other, that OpenAI committed copyright infringement by downloading pirated books, isn't special to AI at all. It doesn't matter how they used the material. If they can be found to have downloaded it - even if they never so much as opened the file - they are liable for statutory damages that can be as high as $150,000 per work if they knew in advance that they were pirating it, and not less than $200 per work even if they didn't.
This is the result of years of lobbying by the various digital rights owners over the past few decades. It's a very broad scope of law and OpenAI should rightfully be concerned if they didn't actually purchase the copyrighted material they used to train.
You can learn from and share the knowledge in a book I might illegally upload, but if you are caught having made a copy of the pirated textbook I uploaded, you are liable for damages completely separate from anything you did with the knowledge in it.