this post was submitted on 18 May 2025
151 points (94.7% liked)

Ask Lemmy


A Fediverse community for open-ended, thought-provoking questions.



Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 3) 46 comments
[–] [email protected] 7 points 16 hours ago (1 children)

I think the AI that helps us find/diagnose/treat diseases is great, and the model should be open to all in the medical field (opening it to everyone, I feel, would be easily abused by scammers and cause a lot of unnecessary harm; essentially, if you can't validate what it finds, you shouldn't be using it).

I'm not a fan of these next-gen IRC chatbots that have companies hammering sites all over the web to siphon up data they shouldn't be allowed to. And then pushing these bots into EVERYTHING! And like I saw a few mention, if their bots have been trained on unauthorized data sets, they should be forced to open-source their models for the good of the people (since that is the BS reason OpenAI has been bending and breaking the rules).

[–] [email protected] 3 points 16 hours ago

That's what I'd like to see more of, too -- use it to cure fucking cancer already. Make it free to legit medical institutions and train doctors how to use it. I feel like we're sitting on a goldmine, and all we're doing with it is stealing other people's intellectual property and making porn and shitty music.

[–] [email protected] 7 points 17 hours ago (5 children)

AI models produced from copyrighted training data should need a license from the copyright holder to train using their data. This means most of the wild-west land grab that is going on would not be legal. In general I'm not a huge fan of the current state of copyright at all, but that would put AI on an even business footing with everything else.

I've got no idea how to fix the screeds of slop that are polluting search of all kinds now. These sorts of problems (along the lines of email spam) seem to be absurdly hard to fix outside of walled gardens.

[–] [email protected] 2 points 16 hours ago

They use DRM for music; use it for AI, but flip it so the person owns their own voice, art, and data.

[–] [email protected] 5 points 15 hours ago

I think Meta and others went open with their models as firewall protection against legal action due to their blatant stealing of people's work to train with. If the models had stayed commercial and controlled within the company, they could be (probably still wouldn't be, but could be) forced to shut down or start over properly. But it's far too late now, since they're everywhere there's a GPU running, even if models don't progress past their current state.

That being said, not much is getting done about the safety factors. Yes, they are only LLMs and not AGI, but there's commonality in regards to not being sure what's going on inside the box and whether it's really doing what it's told to do. Now is the time for boundaries to be set and research to be done, because once something happens (LLM or AGI), it's too late. So what do I want to see happen? Heavy regulation and transparency on the leading edge of development. And stop the madness of more compute being the only solution, with its environmental effects. It might be the only solution, but companies are going that way because it's the easiest way to throw money at a problem and reap profits, which is all they care about.

[–] [email protected] 6 points 16 hours ago* (last edited 16 hours ago) (1 children)

My biggest issue with AI is that I think it's going to allow a massive wealth transfer from laborers to capital owners.

I think AI will allow many jobs to become easier and more productive, and even eliminate some jobs. I don't think this is a bad thing - that's what technology is. It should be a good thing, in fact, because it will increase the overall productivity of society. The problem is that, generally, when new technology increases worker productivity, most of the benefits go to capital owners rather than said workers, even when their work contributed to the technological improvements either directly or indirectly.

What's worse, in the case of AI specifically, its functionality relies on being trained on enormous amounts of content that was not produced by the owners of the AI. AI companies are, in a sense, harvesting society's collective knowledge for free to sell it back to us.

IMO AI development should continue, but be owned collectively and developed in a way that genuinely benefits society. Not sure exactly what that would look like. Maybe a sort of light universal basic income where all citizens own stock in publicly run companies that provide AI and receive dividends. Or profits are used for social services. Or maybe it provides AI services for free but is publicly run and fulfills prosocial goals. But I definitely don't think it's something that should be primarily driven by private, for-profit companies.

[–] [email protected] 3 points 16 hours ago

It's always kinda shocking to me when the detractor talking points match the AI corpo hype blow by blow.

I need to see a lot more evidence of jobs becoming easier, more productive or entirely redundant.

[–] [email protected] 5 points 16 hours ago* (last edited 16 hours ago)

What I want from AI companies is really simple.

We have a thing called intellectual property in the United States of America. If I decided to make a Jellyfin instance that I charged access to, containing material I didn't own, somehow advertising this service on the stock market as a publicly traded company, you can bet your ass that I'd have a one-way ticket to a defendant's seat in court.

AI companies, on the other hand, operate entirely on data they don't own and don't pay licensing for ANY of the materials used to train their neural networks. In their eyes, any image, video (TV show/movie), or book that happens to be posted on the Internet is fair game. This isn't how intellectual property works for individuals, so why exactly would a publicly traded company have an exception to this rule?

I work a lot in the world of FOSS and have a firm understanding that just because code is there doesn't make it yours. This is why we have the GPL for licensing. In fact, I'll take it a step further and say that the entirety of AI is one giant licensing nightmare, especially coding AI that isn't actually attributing license details for the code it's sampling from. (Sampling code being notably different from, say, learning from it. Learning implies self-agency, not corporate ownership.)

It feels to me that the AI bubble has largely been about pushing AI so hard and fast that people were investing in something with a dubious legal state in the US. Nobody stopped to ask whether the data that Facebook (for example; they aren't alone in this) had on their website was actually theirs to own, and what the repercussions for these types of decisions are.

You'll also note that Tech and Social Media companies are quick to take ownership of data when it benefits them (artists works, intellectual property that isn't theirs, random user posts about topics) and quick to deny ownership when it becomes legally burdensome (CSAM, illicit drug deals, etc.) to a degree that no individual would be granted. Hell, I'm not even sure a "small" tech startup would be granted this level of double-speak and hypocrisy.

With this in mind, I am simply asking that AI companies pay for the data that they're using to train AI. Additionally, laws must be in place that allow for the auditing of all materials used to train an AI, with the legal intent of verifying that all parties are paid accordingly. This is how every other business works. If this were somehow granted an exception, wouldn't it be braindead easy to run every "service" through an AI layer in order to bypass any and all copyright laws?

Otherwise, if Facebook and others want to claim that data hosted on their website is theirs to own and train off of -- well, great, but there should be no exceptions to this, and they should not be allowed to host materials they then have no ownership over. So pictures of IP they don't own, or materials they want to claim they have no ownership over, must be removed from the platform. I would much prefer the first of these two options, however.

edit: I should note that AI for educational purposes could be granted an exception to this under fair use (for university) but would still be required to cite all sources used to produce the works in question (which is normal in academics in the first place), and would also come with some strict stipulations on using this AI as a "product" (it would basically be moot, much like some research papers). This is basically the furthest I'm willing to give these companies.

[–] [email protected] 5 points 16 hours ago

Wishful thinking? Models trained on illegal data get confiscated, the companies dissolved, the CEOs and board members made liable for the damages.

Then a reframing of these BS devices from "AI" to what they actually do: brew up statistical probability amalgamations of their training data, and then use them accordingly. They aren't worthless or useless; they are just being shoved into functions they cannot perform in the name of cost cutting. A toy sketch of what that "amalgamation" means is below.
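For anyone who hasn't seen it spelled out, here's a minimal, made-up illustration (a toy bigram model, nothing like a production system) of what "statistical probability amalgamation" means: generation is just sampling from next-word frequencies observed in the training text. Real LLMs are vastly more sophisticated, but the core move -- predicting the next token from probabilities learned over training data -- is the same.

```python
# Toy bigram "language model": it literally just replays next-word
# statistics from its training text. The corpus and names here are
# made up for illustration; this is not any real company's system.
import random
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the dog sat on the rug"
words = training_text.split()

# Count how often each word follows each other word in the training data.
next_counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    next_counts[prev][nxt] += 1

def generate(start, length=8):
    """Sample a continuation, drawing each next word in proportion to
    how often it followed the current word in the training data."""
    out = [start]
    for _ in range(length):
        counts = next_counts.get(out[-1])
        if not counts:
            break  # dead end: the model knows nothing it wasn't trained on
        choices, weights = zip(*counts.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

Scale that idea up by billions of parameters and the output gets fluent, but the dependence on the training data never goes away.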

[–] [email protected] 2 points 13 hours ago

License its usage.

[–] [email protected] 5 points 16 hours ago

Firings and jail time.

In lieu of that, high fines and firings.

[–] [email protected] 5 points 17 hours ago

I would likely have different thoughts on it if I (and others) were able to consent to my data being used to train it, or consent to even having it, rather than it just showing up in an unwanted update.

[–] [email protected] 4 points 16 hours ago

I want everyone to realize that the only reason AI seems intelligent is because it speaks English.

[–] [email protected] 4 points 16 hours ago

I'd like to see it used for medicine.

[–] [email protected] 3 points 15 hours ago (1 children)

I'm not super bothered by the copyright issue - the copyright system is barely serving people these days anyway. Blow it up.

I'm deeply troubled by the obscene power use. It might be worth it if it were a good tool. But it's not.

I haven't gone out of my way to use AI anything, but it's been stuffed into everything. And it's truly bad at its job. AI is like a precocious 8-year-old, butting into every conversation. And it gives the right answer at about the rate an 8-year-old does. When I do a web search, I then need to do another one to check the AI's answer, or scroll down a page to get past the AI answers to real sources. When someone uses it to summarize a meeting, I then need to read through that summary to make sure the notes are accurate. And it doesn't know to ask when it doesn't understand something, like a proper secretary would. When I go looking for reference images, I have to check to make sure they're real and not hallucinations.

It gets in my way and slows me down. It needed at least another decade of development before being deployed at all, never mind at the scale it has been, and it needs to be opt-in, not crammed into everything. And until it can be relied on, it shouldn't be allowed to suck down as much electricity as it does.

[–] [email protected] 3 points 16 hours ago* (last edited 16 hours ago)

Ideally the whole house of cards crumbles and AI goes the way of 3D TVs, for now. The world as it is now is not ready for AGI. We would quickly end up in an "I Have No Mouth, and I Must Scream" scenario.

Otherwise, what everyone else has posted are good starting points. I would just add that any data centers used for AI have to be powered 100% by renewable energy.

[–] [email protected] 2 points 16 hours ago

I'm not against AI itself - it's the hype and misinformation that frustrate me. LLMs aren't true AI - or rather not AGI, as the meaning of "AI" has drifted - but they've been branded that way to fuel tech and stock market bubbles. While LLMs can be useful, they're still early-stage software, causing harm through misinformation and widespread copyright issues. They're being misapplied to tasks like search, leading to poor results and damaging the reputation of AI.

Real AI lies in far more advanced neural networks, which are still a long way off. I wish tech companies would stop misleading the public, but the bubble will burst eventually - though not before doing considerable harm.

[–] [email protected] 0 points 10 hours ago

Shut it off until they figure out how to use a reasonable amount of energy and develop serious rules around it.

[–] [email protected] 2 points 16 hours ago

I think two main things need to happen: increased transparency from AI companies, and limits on use of training data.

In regards to transparency, a lot of current AI companies hide information about how their models are designed, produced, weighted, and used. This causes, in my opinion, many of the worst effects of current AI. Lack of transparency around training methods means we don't know how much power AI training uses. Lack of transparency in training data makes it easier for the companies to hide their piracy. Lack of transparency in weighting and use means that many of the big AI companies can abuse their position to push agendas, such as Elon Musk's manipulation of Grok or the CCP's use of DeepSeek. Hell, if issues like these were more visible, it's entirely possible AI companies wouldn't have as much investment, and thus power, as they do now.

In terms of limits on training data, I think a lot of the backlash is over-exaggerated. AI basically takes sources and averages them; while there is little creativity, the work is derivative and bland, not a direct copy. That said, if the works used for training were pirated, as many were, there obviously needs to be action taken. Similarly, there needs to be some way for artists to protect or sell their work. From my understanding, they technically have the legal means to do so, but as it stands, enforcement is effectively impossible and non-existent.

[–] [email protected] 0 points 16 hours ago (1 children)

Asteroid. There's no good way out of this.

[–] [email protected] 1 points 16 hours ago (1 children)

If you think death is the answer, the polite thing is not to force everyone to go along with you.

[–] [email protected] 1 points 13 hours ago

I can live with being impolite, but I couldn't live with supporting a technology that's going to enshittify and hurt so many.
