this post was submitted on 27 May 2025
519 points (98.3% liked)
Technology
If abiding by the law destroys your business, then you are a criminal. Simple as.
But the law is largely the reverse. It only denies use of copyrighted works in certain ways. Using things "without permission" forms the bedrock on which artistic expression and free speech are built.
AI training isn't only for mega-corporations. Setting up barriers like these only benefits the ultra-wealthy and will end with corporations gaining a monopoly on a public technology by making it prohibitively expensive and cumbersome for regular folks. What the people writing this article want would mean the end of open access to competitive, corporate-independent tools and would jeopardize research, reviews, reverse engineering, and even indexing information. They want you to believe that analyzing things without permission somehow goes against copyright, when in reality fair use is part of copyright law, and it's the reason our discourse isn't wholly controlled by mega-corporations and the rich.
I recommend reading this article by Kit Walsh and this one by Tory Noble, staff attorneys at the EFF; this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries; and these two by Cory Doctorow.
Ok, but is training an AI so it can plagiarize, often verbatim or with extreme visual accuracy, fair use? I see the first two articles argue that it is, but they don't mention the many cases where the crawlers and scrapers ignored rules set up to tell them to piss off. That would certainly invalidate several claims of fair use.
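(The "rules" in question are presumably directives like robots.txt. Purely for illustration, here's a minimal sketch of the check a well-behaved crawler is supposed to run before fetching a page; the bot name and URLs are made up:)

```python
# Hypothetical example: how a compliant crawler consults robots.txt before scraping.
# The complaint above is that many AI scrapers skip exactly this step.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()  # fetch and parse the site's stated crawling rules

# can_fetch() returns False when the site has told this user agent to stay away.
if rp.can_fetch("ExampleAIBot", "https://example.com/articles/some-post"):
    print("robots.txt permits fetching this page")
else:
    print("robots.txt disallows this; a well-behaved crawler stops here")
```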
Instead of charging for everything they scrape, the law should force them to release all their data and training sets for free. "But they spent money and time and resources!" So did everyone who created the stuff they're using for their training, so they can fuck off.
The article by Tory also says these things:
I'd wager 99.9% of the art and content created by AI could go straight to the trashcan and nobody would miss it. Comparing AI to the internet is like comparing writing to doing drugs.
You can also plagiarize with a computer using copy & paste. That doesn't change the fact that computers have legitimate non-infringing use cases.
I agree
But 99.9% of the internet is stuff that no one would miss. Things don't have to have value to you to be worth having around. That trash could serve as inspiration for your 0.1%, or garner feedback that helps people improve.
The apparent main use for AI thus far is spam and scams, which is what I was thinking of when dismissing most content made with it. While the internet was already chock full of that before AI, its availability is increasing those problems tenfold.
Yes, people use it for other things, like "art", but most people using it for "art" are trying to make a quick buck ASAP before customers get too smart to fall for it. Writers already had a hard time getting by; now they have to deal with a never-ending deluge of AI books, plus the risk of a legally-distinct-enough copy of their own work showing up the next day.
Put another way, the major use of AI thus far is "I want to make money without effort."
It definitely seems that way depending on what media you choose to consume. You should try to balance the doomer scroll with actual research and open source news.
I'm basing it mostly on personal and family experience. My mom often ends up watching AI-made videos (stuff that's just an AI narrator over an AI image slideshow), my RPG group has poked fun at the number of AI books Amazon keeps suggesting to them, and anyone using Instagram will, sooner or later, see ads of famous people endorsing bogus products or sites via the magic of AI.
So you don't interact with AI stuff outside of that? Have you seen any cool research papers or messed with any local models recently? Getting a bit of experience with the stuff can help you better inform people and see through the more bogus headlines.
I don't really disagree with your other two points, but
They sure do, but that is not one of them. That's de facto copyright infringement or plagiarism, especially if you then turn around and sell that product.
The key point being made is that if you are committing de facto copyright infringement or plagiarism by creating a copy, it shouldn't matter whether that copy was made through copy & paste, by re-compressing the same image, or by using an AI model. The product here is the copy & paste operation, the image editor, or the AI model, not the (copyrighted) image itself. You can still sell computers with copy & paste (despite some attempts from large copyright holders via DRM), and you can still sell image editors.
However, unlike copy & paste and the image editor, the AI model can memorize and emit training data even when the input doesn't imply the copyrighted work. (This excludes the case where the image itself was provided, or a highly detailed description of the work was given; there it would clearly be the user who is at fault and intending for this to happen.)
At the same time, it should be noted that exact replication of training data isn't desirable in any case, and online services for image generation could include an image similarity check against the training data; many probably do this already.
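As a rough sketch of what such a similarity check could look like (this assumes perceptual hashing via the third-party Pillow and imagehash packages, a hypothetical training_images/ directory, and an arbitrary 5-bit threshold; a real service would need something far more robust):

```python
# Illustrative sketch only: flag a generated image whose perceptual hash is
# nearly identical to some image in the training set.
from pathlib import Path

import imagehash           # third-party: pip install ImageHash
from PIL import Image      # third-party: pip install Pillow

def build_index(training_dir: str) -> list[imagehash.ImageHash]:
    """Precompute perceptual hashes for every PNG in the training set."""
    return [imagehash.phash(Image.open(p)) for p in Path(training_dir).glob("*.png")]

def too_similar(generated_path: str, index: list[imagehash.ImageHash],
                max_distance: int = 5) -> bool:
    """Return True if the generated image is within max_distance bits
    (Hamming distance) of any training-set hash."""
    gen_hash = imagehash.phash(Image.open(generated_path))
    return any(gen_hash - train_hash <= max_distance for train_hash in index)

# Usage: reject or regenerate outputs that trip the check.
# index = build_index("training_images/")
# if too_similar("output.png", index):
#     print("Too close to a training image; regenerate instead of serving it.")
```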