Showerthoughts · posted 19 Oct 2024 · 96 points (86.9% liked)
A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. A showerthought should offer a unique perspective on an ordinary part of life.
Rules
- All posts must be showerthoughts
- The entire showerthought must be in the title
- Avoid politics
- 3.1) NEW RULE as of 5 Nov 2024, trying it out
- 3.2) Political posts often end up being circle jerks (not offering unique perspective) or enflaming (too much work for mods).
- 3.3) Try c/politicaldiscussion, volunteer as a mod here, or start your own community.
- Posts must be original/unique
- Adhere to Lemmy's Code of Conduct
founded 1 year ago
MODERATORS
If AI is that good, it's not 'slop', is it? I see this argument all the time. Apparently AI is both awful slop, devoid of merit, and also indistinguishable from human-made content and a threat to us all. Pick a side.
It’s indistinguishable from human slop, that’s for sure.
Well, not all LLMs are created equal. Some are decent, some are slop, and some are nightmare machines.
Sure, but there's never a qualifier in these arguments. It's just 'hur dur AI bad', which is lazy and disingenuous.
AI is generally bad because it tends to steal content from human creators and is largely being pushed because corporations want another excuse to throw more workers on the street in favor of machines (while simultaneously raising their prices).
There are some AI uses that are good though, such as AI voice generation to help those who can't speak communicate with the world without sounding like a robot.
Again, this is an argument I see a lot that's simply not true. AI is not stealing anything. Theft is a specific legal term: if I steal your TV, I have your TV and you don't. If AI is trained on some content, that content still exists. Whatever training takes place steals nothing.
Your point is a valid one, but this is not unique to AI and is the inevitable result of the onward march of technology. The very thing we're using to communicate right now, the Internet, is responsible for billions of job losses. That's not a valid reason to get rid of it. Instead of blaming AI for putting people out of work, we should be pressuring governments to implement things like UBI to provide people with a basic living wage. That way people need not fear the impact the advance of technology will have on their ability to feed and house themselves.
These are great examples.
Not to mention, there’s also a lot of human slop.
I trained an LLM on nothing but Hitler speeches and Nazi propaganda, then asked it to write a speech as if Hitler were the 2016 president. You'd be shocked at the results.
That's the problem with imaginary enemies: they have to be both ridiculously incompetent and on the verge of controlling the whole world. Sounds familiar, doesn't it?
The argument being made is: "AI is currently slop, but there is a reasonable expectation that it will be pushed until it is indistinguishable from human work, and will therefore devalue human work."
I don't like AI because it's just another way that "corporate gonna corporate", and it never ends up working out for mere mortals' benefit. Also, misinformation is already so prevalent and it's going to continue to get worse (we have seen this already; Trump abuses it continually).
Again, if the work is 'indistinguishable', then I don't see how AI art 'devalues' human work any more than work done by another human does. This just sounds like old-fashioned competition, which has existed as long as art itself.
Corporations abusing technology to the detriment of people is nothing new, unfortunately, and isn't unique to AI (see email, computers, clocking-in machines, monitoring software, etc.). That speaks to a need for better corporate oversight and better worker rights.
This is a good point, but again, AI is hardly the first time technology has been used to spread lies and misinformation. This highlights a fundamental problem with our media and a need to teach better critical thinking in schools, etc.
They're all valid concerns, but in my opinion they suggest AI is being used as an enabler, not that the problems in question are the sole product of it. Sadly, if we stopped using anything and everything that was misused for nefarious means, we'd go back to the Stone Age.