Nobody complained about copyright when Microsoft had the only image AI in the game; only when the open-source Stable Diffusion came out did they start screeching about how AI was "stealing their jobs".
Fuck off. The tech got popular and the public got educated on what makes it work.
So years of Microsoft advertising DALL-E did nothing to educate the public about how AI works, but they're suddenly all experts the week after Stable Diffusion comes out?
No, they didn't, because I'd literally never heard of it until your comment. And I understand that my experience is anecdotal, but I guarantee I'm not the only one, or even one of only a couple thousand. You severely overestimate how knowledgeable the general public is about AI. Most haven't even heard of ChatGPT, and that's in the news, let alone expecting everyone to be interested enough to actually educate themselves on it.
Like, you're the only person in this thread who's even mentioned Microsoft's version, yet you think "the public" knows about it?
Uh, no, people definitely did. Mostly the people who actually knew how this shit worked. But even laypeople complained when it was just DALL-E and Midjourney.
What are you talking about? When MS had the only image AI in the game, it was garbage and couldn't do anything useful. Of course no one was threatened.
But after researchers got their hands on NVIDIA 3000-series cards and finally had access to the hardware, more advanced research papers started spilling out, which caused this crazy leap in AI tech. Now image/audio AI is advanced enough to be useful, hence the threats.
And yet it was still doing exactly the same thing that people are now calling "unethical".
Just goes to show that they don't actually care how "unethical" it is until it poses a threat to their income. It's about money, not principle.
This is such a ridiculous argument it’s not even funny. You have absolutely no evidence to back up your deranged claim. Take your victim complex somewhere else.
I saw people complaining about it from 8:00 on day one, but go off, I guess.