this post was submitted on 22 Dec 2024
1470 points (97.6% liked)


It's all made from our data, anyway, so it should be ours to use as we want

(page 3) 50 comments
[–] [email protected] 5 points 22 hours ago (1 child)

Delete them. Wipe their databases. Make the companies start from scratch with new, ethically acquired training data.

[–] [email protected] 3 points 20 hours ago (1 child)

Mmm, yes, so all that electricity is pure waste.

[–] [email protected] 1 point 18 hours ago (2 children)

Genuine question: does anyone know how much of the electricity is used for training the model versus using it to generate responses?

load more comments (2 replies)
[–] [email protected] 2 points 22 hours ago* (last edited 22 hours ago)

Only if they were trained on public material.

[–] [email protected] 2 points 23 hours ago (1 child)

Are you threatening me with a good time?

First of all, whether these LLMs are "illegally trained" is still a matter before the courts. When an LLM is trained it doesn't literally copy the training data, so it's unclear whether copyright is even relevant.

Secondly, I don't think that making these models "public domain" would have the negative effects that people angry about AI expect. When a company runs a closed model as an internal service, like ChatGPT, the model is never available for download in the first place; whether it's public domain doesn't matter, because you can't get a copy of it. When a company releases an open-weight model for public use, on the other hand, it usually encumbers the weights with a license that makes them harder for competitors to monetize or build on. Making those public domain would greatly increase their utility. It might make future releases less likely, but in the meantime it would greatly enhance AI development.
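
To make the "doesn't literally copy the training data" point concrete, here's a toy sketch (my own illustration, nothing like a production LLM): "training" a character-level bigram model reduces the text to transition statistics, and those numbers, not the text, are all the model keeps.

```python
# Toy illustration only: a character-level bigram "model" is just
# transition probabilities derived from the text, not the text itself.
# Real LLMs learn billions of weights by gradient descent, but the same
# principle holds: parameters are statistics of the data, not a copy.
import random
from collections import defaultdict

training_text = "the cat sat on the mat. the dog sat on the log."

# "Training": count character transitions, then normalise to probabilities.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(training_text, training_text[1:]):
    counts[a][b] += 1

model = {
    char: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
    for char, nexts in counts.items()
}

# "Inference": walk the probability table to generate new text.
def generate(start: str = "t", length: int = 40) -> str:
    out = [start]
    for _ in range(length):
        nexts = model.get(out[-1])
        if not nexts:
            break
        chars, probs = zip(*nexts.items())
        out.append(random.choices(chars, weights=probs)[0])
    return "".join(out)

print(model["t"])   # just numbers, e.g. {'h': 0.5, ...}
print(generate())   # statistically similar text, not a stored copy
```

With a corpus this tiny, the model can easily regurgitate long spans of the input verbatim, which is essentially the overfitting problem raised further down the thread; at LLM scale the statistics are spread across vastly more data, so verbatim reproduction is the exception rather than the rule.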

[–] [email protected] 2 points 22 hours ago (3 children)

The LLM does reproduce copyrighted data though.

[–] [email protected] 1 point 16 hours ago* (last edited 16 hours ago)

Not 1:1; even overfitted images still show considerable differences from their originals. If you chose the word "reproduce" to make that point, that's why the OP clarified that the model isn't literally copying training data, since the actual data being stored in the model would be a different story. Because these models are (in simplified form) a bunch of really complex math that produces material, it's a mathematical inevitability that they will sometimes produce copyrighted material, even through calculations that weren't shaped by overfitting, just as infinite monkeys on infinite typewriters will eventually reproduce every piece of copyrighted text (the back-of-envelope sketch below puts numbers on that).

But then I would point you to the camera on your phone. If you take a picture of copyrighted material with it, you're still infringing. Yet the camera wasn't created with the intention of appropriating whatever the lens captures, which is why we don't blame the camera; we blame the person who used it for that purpose. Likewise, AI users have an ethical obligation not to steer the AI towards generating infringing material.
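
The infinite-monkeys comparison above is easy to make quantitative. A rough back-of-envelope (my own toy numbers, assuming a uniform 27-symbol typewriter): any fixed string gets a nonzero probability, so a random generator will eventually emit it, just astronomically rarely.

```python
# Back-of-envelope for the infinite-monkeys point (toy numbers):
# a uniform sampler over 27 symbols (26 letters + space) gives every
# fixed string a nonzero probability, so it *will* eventually appear.
phrase = "to be or not to be"   # 18 characters
alphabet_size = 27

p_one_try = (1 / alphabet_size) ** len(phrase)
expected_tries = alphabet_size ** len(phrase)

print(f"P(phrase in one attempt) = {p_one_try:.2e}")       # ~1.7e-26
print(f"Expected attempts        = {expected_tries:.2e}")   # ~5.8e+25
```

A real LLM is nothing like a uniform sampler, of course: training concentrates probability mass on patterns from the data, and overfitting concentrates it on specific memorised spans, which is why near-copies surface far more often than this bound suggests.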

[–] [email protected] 3 points 22 hours ago
[–] [email protected] 1 point 20 hours ago

*it can produce data identical to data that has been copyrighted before

[–] [email protected] 1 point 21 hours ago
[–] [email protected] 1 point 22 hours ago

Doesn't seem like this helps out all the writers and artists that the LLMs stole from.

load more comments