If the image generator in question produced a piece of content that looks similar enough to a copyrighted work, odds are that it was trained on that work, or that both were produced from the same body of source material.
That suggests the issue might lie elsewhere: not in the output of the image generators, but in the works being fed into the generator when "training" it. (IMO the word "training" is rather misleading here.)
Refer to the link in the OP regarding photography. All this discussion about "intent" (whatever that means) and authorship has already been addressed by legal systems, a long time ago.
What does "meaningfully" mean in this context? It's common for people using Stable Diffusion and similar models to generate a bunch of images and throw most of them away, or to pick an output and "fix" it by hand. I'd argue that both are already meaningful enough.
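For a sense of what that generate-then-curate workflow looks like in practice, here is a minimal sketch, assuming the Hugging Face diffusers library, a CUDA GPU, and the runwayml/stable-diffusion-v1-5 checkpoint; the prompt and file names are just placeholders for the part where a human decides what to keep:

```python
# Minimal sketch of the "generate a batch, keep only a few" workflow.
# Assumes the Hugging Face `diffusers` library and a CUDA GPU;
# the checkpoint name and prompt below are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse at dusk, oil painting"

# Generate several candidates from the same prompt...
images = pipe(prompt, num_images_per_prompt=8).images

# ...then comes the human curation step: most candidates get trashed,
# a few get kept (and possibly touched up by hand afterwards).
for i, img in enumerate(images):
    img.save(f"candidate_{i}.png")  # review these and delete the rejects
```

The point is that the selection and retouching happen outside the model entirely, which is exactly the human contribution being argued over.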
If anything, this only highlights how the copyright laws were already broken, even before image gen...