this post was submitted on 10 Jul 2023
88 points (100.0% liked)

Technology


In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted materials they’ve been using.

top 50 comments
[–] [email protected] 24 points 1 year ago

Don't threaten me with a good time.

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago) (4 children)

If I do a book report based on a book that I picked up from the library, am I violating copyright? If I write a movie review for a newspaper that tells the plot of the film, am I violating copyright? Now, if the information that they have used is locked behind paywalls and obtained illegally, then sure, fire ze missiles, but if it is readily accessible and not being reprinted wholesale by the AI, then it doesn't seem that different from any of the other millions of ways we use data in every day life. Just because a machine learned it instead of a human, I don't believe that it makes it inherently wrong. I am very open to discussion on this, and if anyone has a counter-argument, I'd love to hear it, because this is a new field of technology that we should all talk about and learn to understand better.

Edit: I asked GPT-4 what it thought about this, and here is what it said:

As an AI developed by OpenAI, I don't access any copyrighted databases, subscription sites, confidential information, or proprietary databases for my learning or response generation. I was trained on a diverse range of internet text, which includes publicly available data in multiple languages and formats. The training also involves learning from large-scale datasets that contain parts of many kinds of texts (books, websites, scientific articles, etc.). However, I don't know specifics about which documents were in my training set or have access to any specific documents or sources.

Your question seems to pertain to how copyright law applies to AI. This is a complex and evolving area of law and policy, largely because AI systems like me do not "read" or "use" material in the same way a human would. A human might read a book and then consciously recall information from it when writing or speaking. In contrast, AI models do not have consciousness or awareness, they analyze statistical patterns in the data they were trained on and use these patterns to generate responses.

When a human uses copyrighted material to create something new, the law often considers this to be a derivative work. It is typically necessary to obtain permission from the copyright holder to create derivative works. However, AI doesn't use or recall copyrighted material directly; instead, it learns patterns from a huge amount of text data to predict or generate text based on a given prompt.

Dissemination of copyrighted material, such as through forum posts or Q&A platforms, without proper licensing or within the bounds of fair use, can be a violation of copyright law. AI doesn't 'disseminate' copyrighted material; it generates responses based on patterns it has learned.

That said, the legal interpretation of how copyright law applies to machine learning and AI is still evolving, and there can be a lot of complexities and nuances. Therefore, the information here should not be taken as legal advice.

As of my last update in September 2021, this was the case. However, I recommend checking the most recent legal standpoints and opinions, as this area is still evolving rapidly.

[–] [email protected] 8 points 1 year ago (1 children)

LLMs are not book reports. They are not synthesizing information. They're just pulling words based on probability distributions. Those probability distributions are based entirely on what training data has been fed into them.

You can see what this really means in action when you call on them to spit out paragraphs on topics they haven't ingested enough sources for. Their distributions are sparse, and they'll spit out entire chunks of text that are pulled directly from those sources, without citation.
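The "pulling words from probability distributions" point can be made concrete with a toy sketch. This is a deliberately tiny bigram model, not how a real transformer works, but it shows the mechanism the comment describes: when the training data behind a context is sparse, sampling has nowhere to go but back through the source text verbatim.

```python
import random
from collections import defaultdict

# Toy bigram "language model": next-word probabilities come entirely
# from counts over the training text.
def train(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, n=8, rng=random.Random(0)):
    out = [start]
    for _ in range(n):
        successors = counts.get(out[-1])
        if not successors:  # no training data for this context
            break
        words = list(successors)
        weights = [successors[w] for w in words]
        out.append(rng.choices(words, weights)[0])
    return " ".join(out)

# With only one source sentence, the distribution is so sparse that
# generation replays the training text word for word:
model = train("the quick brown fox jumps over the lazy dog")
print(generate(model, "quick"))
```

Every bigram the model can emit already appears in its training data; scale the same idea up and you get the uncited chunk-regurgitation described above when a topic's distribution is sparse.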

If you write a book report that just reprinted significant swaths of the book, that would be plagiarism, and yes, would 100% be called copyright infringement.

Importantly, though, the copyright infringement for these models does not come at the point where it spits out passages from a copyrighted work. It occurs at the point where the work is copied and used for purposes that fall outside what the work is licensed for. And most people have not licensed their words for billion dollar companies to use them in for-profit products.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (2 children)

@chemical_cutthroat

If I do a book report based on a book that I picked up from the library, am I violating copyright? If I write a movie review for a newspaper that tells the plot of the film, am I violating copyright?

The first conceptual mistake in this analogy is assuming the LLM is "writing". When a person or any sentient being writes, they are doing intellectual work, which is why the example book report and movie review would not be accused of plagiarism. Plagiarism, very roughly, is stealing someone else's output; when that output is also legally owned, we move into copyright infringement territory.

LLMs produce text based on statistical probability, meaning they quite literally ape/replicate the aesthetic form of a known genre of textual output, and in these cases those outputs have the legal status of intellectual property. So yes, an LLM-generated book report or movie review looks the way it does because it copies, with no creative intent, previous works of the genre. It's the same way YouTube video essays get taken down if they're just a collection of movie clips strung together to sound like a full dialogue. Of course, in that YouTube example, if you can argue the video is a creative work where an artist forms a new piece out of a collage of previous media, the rights owners of those movie clips might lose their claim against the video. You can't make that defence with OpenAI.

@stopthatgirl7

[–] [email protected] 4 points 1 year ago (8 children)

I am very open to discussion on this, and if anyone has a counter-argument, I'd love to hear it, because this is a new field of technology that we should all talk about and learn to understand better.

That's very cool and all but while we have this debate there are artists getting ripped off.

[–] [email protected] 2 points 1 year ago (1 children)

It is an area that will require us to think carefully about the ethics of the situation. Humans create works for humans. Has this really changed? Now consumption happens through a machine learning interface. I agree with your reasoning, but there is an elephant in the room that this line of reasoning does not address.

Things get very murky for me when we ask an AI system to generate content in someone else's style, or when the AI distorts someone's views in its responses. Can I get an AI to eventually write another book in Terry Pratchett's style? Would his estate be entitled to some form of compensation? And that is an easier case compared to living authors and writers. We already see the way image-generating AI programs copy artists. Now we are getting the same for language and more.

It will certainly be an interesting space to follow in the next few years as we develop new ethics around this.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (2 children)

@mack123

Can I get an AI to eventually write another book in Terry Pratchett's style? Would his estate be entitled to some form of compensation?

No, that's fair use under parody. Weird Al isn't compensating other artists for parody, so the creators of OpenAI shouldn't either, just because their bot can make something that sounds like Pratchett or anyone else. I wrote a short story a while back that my friend said sounded like Douglas Adams writing dystopian fiction. Would I owe the Adams estate if I published it? The same goes for photography and art. If I take a picture of a pastel wall that happens to have an awkward person standing in front of it, do I owe Wes Anderson compensation? This is how we have to look at it. What's good for the goose must be good for the gander. I can't justify punishing AI research and learning for doing the same things that humans already do.

[–] [email protected] 6 points 1 year ago

As a European, I'm OK with that. If they pull out, I'll just stick to using local LLMs.

[–] [email protected] 6 points 1 year ago

Copyright enforcement is for thee, not for me.

[–] [email protected] 3 points 1 year ago

You can read the actual proposal here - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

The stuff in the article isn't a problem IMO, but the main issue is the huge amount of bureaucracy for smaller companies and initiatives.

Almost everything counts as "AI":

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (4 children)

The EU's stance is concerning. The coming laws would benefit unlawful AI devs backed by dictatorships. (Edit: they'll do whatever they want to research and build more powerful AIs while devs in the EU struggle under heavy restrictions.) Currently, big tech is still learning how to build strong AIs, and handing dictatorships a huge advantage like this is dangerous.

[–] [email protected] 3 points 1 year ago (1 children)

Read the whole thing. The reason OpenAI is opposing the law is not necessarily copyright infringement.

One provision in the current draft requires creators of foundation models to disclose details about their system’s design (including “computing power required, training time, and other relevant information related to the size and power of the model”)

This is the more likely problem.

[–] [email protected] 2 points 1 year ago (1 children)

Given their name is "OpenAI" and they were founded on the idea of being transparent about those exact things, I'm even less impressed that that's what they're upset about. They keep saying they're "protecting" us by not releasing this information, which just isn't true. They're protecting their profits and valuation.
