this post was submitted on 25 Jun 2023
95 points (100.0% liked)

Technology

This is just one action in a coming conflict. It will be interesting to see how this shakes out. Does the record industry win, and digital likenesses become outlawed, even taboo? Or do voice, appearance, etc. just become another set of rights that musicians have to negotiate during a record deal?

[–] [email protected] 16 points 2 years ago (2 children)

A lot of the AI stuff is a Pandora's box situation. The box is already open, and there's no closing it. AI art, AI music, and AI movies will become increasingly high quality and widespread.

The biggest thing we still have a chance to influence is whether this is something individuals have access to, or whether it becomes another field dominated by the same tech giants that already own everything. An example is people being against Stable Diffusion because it was trained by individuals on internet images, but being okay with a company like Adobe doing the same thing because Adobe snuck a line into its ToS saying it can train AI on anything uploaded to Creative Cloud.

[–] [email protected] 10 points 2 years ago* (last edited 2 years ago) (1 children)

whether it’s something that individuals have access to

No, we don't get to influence that anymore. That's the box having been opened.

Here's a leaked Google internal memo telling them as much: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

tl;dr: The open source community has accomplished more in the month since Meta's AI weights were released than everything we have, and it shows no signs of slowing down. We have no secret sauce and no way to prevent anyone from setting up their own; the open source community already has near-GPT equivalents running on old laptops and is targeting models that run directly on phones, making our expensive monolithic AI solutions obsolete.

Edit:

In addition, these corporations only have AI in the first place because they scraped data from regular people and the open source community. Individuals should not feel obligated to honor any rule or directive saying these technologies may be owned and operated only by big players.

[–] [email protected] 3 points 2 years ago

The only advantage corporations could have had would have come from having the money to throw at extremely high quality training data. The fact that they cheaped out and just used whatever they could find on the internet (or paid a vendor, who just used AI to generate AI training data) has definitely contributed to the lack of any differentiating advantage.

[–] [email protected] 0 points 2 years ago (1 children)

Saying that Stable Diffusion was trained by "individuals" is a bit of a stretch. It cost over half a million dollars' worth of compute to train, and Stability AI is still a company at the end of the day. If that counts as trained by individuals, then so do Midjourney and DALL-E.

[–] [email protected] 1 points 2 years ago

The original Stable Diffusion wasn't trained by individuals, but the current progression of the software is clearly community driven: all sorts of new techniques and add-ons, huge volumes of community-trained checkpoints and LoRAs, and of course the interfaces themselves, like automatic1111 and vladmandic.

And it's something you can run yourself offline with a halfway decent graphics card.