this post was submitted on 18 May 2025
136 points (94.2% liked)

Ask Lemmy

A Fediverse community for open-ended, thought-provoking questions

Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 2) 50 comments
[–] [email protected] 35 points 13 hours ago (4 children)

They have to pay for every piece of copyrighted material used in the model whenever the AI is queried.

They are only allowed to use data that people opt into providing.

[–] [email protected] 1 points 5 hours ago

This definitely relates to moral concerns. Are there other examples like this of a company that is allowed to profit off of other people’s content without paying or citing them?

[–] [email protected] 10 points 11 hours ago (3 children)

There's no way that's even feasible. Instead, AI models trained on publicly available data should be considered part of the public domain. So any images that anyone can go and look at without a barrier in the way would be fair game, but the model would be owned by the public.

[–] [email protected] 7 points 12 hours ago (2 children)

What about models folks run at home?

[–] [email protected] 1 points 5 hours ago

I think if you’re not making money off the model and its content, then you’re good.

[–] [email protected] 13 points 12 hours ago

Careful, that might require a nuanced discussion that reveals the inherent evil of capitalism and neoliberalism. Better off just ensuring that wealthy corporations can monopolize the technology and abuse artists by paying them next-to-nothing for their stolen work rather than nothing at all.

[–] [email protected] 4 points 8 hours ago

Just mass public hangings of tech bros.

[–] [email protected] 58 points 15 hours ago (17 children)

I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That's what I want.
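To ground the "glorified Markov chain" jab: a Markov chain text generator picks each next word purely from counts of what followed the previous words in its training text. A minimal sketch in Python (the corpus and order are illustrative):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words observed right after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=20):
    """Walk the chain, sampling each next word from the observed counts."""
    out = list(random.choice(list(chain)))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model predicts the next word and the next word predicts the model"
print(generate(build_chain(corpus)))
```

An LLM differs enormously in scale and learns soft, high-dimensional statistics rather than literal counts, which is exactly the disagreement the comment invites.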

[–] [email protected] 18 points 12 hours ago (1 children)

There are too many solid reasons to be upset with, well, not AI per se, but the companies that implement, market, and control the AI ecosystem and conversation, to go into in a single post. Suffice it to say, I think AI is an existential threat to humanity, mainly because of who's controlling it and who's not.

We have no regulation on AI. We have no respect for artists, writers, musicians, actors, and workers in general coming from these AI-peddling companies. We only see more and more surveillance and control over multiple aspects of our lives being consolidated around these AI companies, and even worse, we get nothing in exchange except the promise of increased productivity and quality, and that promise is a lie. AI currently gives you the wrong answer, or some half-truth, or some abomination of someone else's artwork, really really fast... that is all it does, at least for the public sector currently.

For the private sector, at best it alienates people as chatbots, and at worst it is being utilized to infer data for the surveillance of people. The tools of technology at large are being used to suppress and obfuscate speech by whoever wields them, and AI is one tool among many at the disposal of these tech giants.

AI is exacerbating a knowledge crisis that was already in full swing, as both educators and students become less curious about subjects that don't directly relate to making profits or consolidating power. And because knowledge is seen solely as a way to gather more resources and power and to survive in an increasingly hostile socioeconomic climate, people will always reach for the lowest-hanging fruit to get to that goal, rather than actually learning how to solve a problem that hasn't been solved before, or deeply understanding a problem that has, or just knowing something relatively useless because it's interesting to them.

There are too many good reasons AI is fucking shit up, and in all honesty, what people in general tout about AI is just a hype cycle that will not end well for the majority of us. At the very least, we should be upset and angry about it.

Here are further resources if you didn't get enough ranting.

lemmy.world's fuck_ai community

System Crash Podcast

Tech Won't Save Us Podcast

Better Offline Podcast

[–] [email protected] 22 points 13 hours ago (1 children)

Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.

[–] [email protected] 1 points 5 hours ago

Unfortunately, right now the world is pouring its greatest research effort into AI.

I feel like the only thing that the world universally bans is nuclear weapons. AI would have to become so dangerous that the world decides to leave it in the lab, but you can easily make an LLM at home. You can’t just make nuclear power in your room.
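The "at home" point is easy to demonstrate: small open-weight models run on ordinary hardware. A hedged sketch using the Hugging Face transformers library (assumes `transformers` and a backend such as PyTorch are installed; the model name is just a small, freely downloadable example):

```python
from transformers import pipeline  # pip install transformers torch

# gpt2 is a small open-weight model (~500 MB) that downloads on first use.
generator = pipeline("text-generation", model="gpt2")
result = generator("Banning AI at home would be hard because", max_new_tokens=40)
print(result[0]["generated_text"])
```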

How do you get your wish?

[–] [email protected] 42 points 14 hours ago (4 children)

I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.

I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.

Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

Every step of any deductive process needs to be citable and traceable.
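One hypothetical way to picture the "citable and traceable" requirement above: every claim in a response carries its sources, and an answer containing any unsourced claim is rejected before it reaches the user. A sketch (the schema is invented for illustration, not any vendor's API):

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    sources: list[str] = field(default_factory=list)  # URLs or document IDs

@dataclass
class Answer:
    claims: list[Claim]

    def validate(self):
        """Reject the whole answer if any deductive step lacks a citation."""
        unsourced = [c.text for c in self.claims if not c.sources]
        if unsourced:
            raise ValueError(f"uncited claims: {unsourced}")
        return self

answer = Answer([
    Claim("Water boils at 100 °C at sea level", ["https://example.org/boiling"]),
])
answer.validate()  # raises if any claim has no source
```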

[–] [email protected] 1 points 6 hours ago

This is awesome! The citing and tracing are already improving. I feel like zero hallucinations is going to take a while, though.

How does it all get enforced? FTC? How does this become reality?

[–] [email protected] 13 points 13 hours ago (1 children)

Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

Their creators can't even keep them from deliberately lying.

[–] [email protected] 12 points 14 hours ago

Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.
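The accounting described here is straightforward to state: amortize the one-time training energy over the queries a model is expected to serve, then add each query's marginal energy. A sketch with placeholder numbers, not measurements:

```python
def energy_per_query(training_kwh, expected_queries, inference_kwh):
    """Energy attributable to one query: amortized training + marginal inference."""
    return training_kwh / expected_queries + inference_kwh

# Hypothetical figures for illustration only.
print(energy_per_query(
    training_kwh=10_000_000,        # one-time training cost
    expected_queries=1_000_000_000,  # lifetime query volume
    inference_kwh=0.003,             # marginal cost of one query
))  # -> 0.013 kWh per query
```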

[–] [email protected] 6 points 13 hours ago (1 children)

... I want clear evidence that the LLM ... will never hallucinate or make something up.

Nothing else you listed matters: That one reduces to "Ban all Generative AI". Actually worse than that, it's "Ban all machine learning models".

[–] [email protected] 4 points 8 hours ago* (last edited 8 hours ago) (1 children)

If "they have to use good data and actually fact check what they say to people" kills "all machine leaning models" then it's a death they deserve.

The fact is that you can do the above; it's just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to answer far fewer questions) than pretending to be the "answer to everything machine."
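A toy version of "use good data and actually validate it": only answer when the question is covered by a curated, trusted source, and refuse otherwise. Everything below, including the tiny knowledge base, is illustrative:

```python
# A hand-validated knowledge base stands in for "data from trusted sources".
TRUSTED_FACTS = {
    "capital of france": ("Paris", "https://example.org/atlas"),
}

def answer(question):
    """Reply only when the question is covered by validated data; never guess."""
    hit = TRUSTED_FACTS.get(question.strip().lower().rstrip("?"))
    if hit is None:
        return "I don't know."  # refusal instead of hallucination
    fact, source = hit
    return f"{fact} (source: {source})"

print(answer("Capital of France?"))  # Paris (source: ...)
print(answer("Meaning of life?"))    # I don't know.
```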

[–] [email protected] 0 points 4 hours ago

Ban it until the hard problem of consciousness is solved.

[–] [email protected] 10 points 11 hours ago (3 children)

I’d like for it to be forgotten, because it’s not AI.

[–] [email protected] 7 points 9 hours ago

It's AI insofar as any ML is AI.

[–] [email protected] 4 points 10 hours ago

Thank you.

It has to come from the C suite to be "AI". Otherwise it's just sparkling ML.

[–] [email protected] 13 points 13 hours ago

Stop selling it at a loss.

When each ugly picture costs $1.75, and every needless summary or expansion costs 59 cents, nobody's going to want it.

[–] [email protected] 19 points 14 hours ago (2 children)

Part of what makes me so annoyed is that there's no realistic scenario I can think of that would feel like a good outcome.

Emphasis on realistic, before anyone describes some insane turn of events.

[–] [email protected] 15 points 14 hours ago

Training data needs to be 100% traceable and licensed appropriately.

Energy usage involved in training and running the model needs to be 100% traceable and some minimum % of renewable (if not 100%).

Any model whose training includes data in the public domain should itself become public domain.

And while we're at it, we should look into deliberately taking more time at lower clock speeds to reduce or eliminate the water used for cooling these facilities.
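One hypothetical shape for the per-item traceability and licensing asked for above: a manifest record for every piece of training data, with field names invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingRecord:
    source_url: str  # where the item was obtained
    license: str     # SPDX identifier or explicit grant
    sha256: str      # content hash, so the exact bytes are auditable
    consent: bool    # whether the rights holder opted in

record = TrainingRecord(
    source_url="https://example.org/essay.txt",
    license="CC-BY-4.0",
    sha256="0" * 64,  # placeholder hash
    consent=True,
)
assert record.consent and record.license != "UNKNOWN"
```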

[–] [email protected] 11 points 14 hours ago (1 children)

If we're talking realm of pure fantasy: destroy it.

I want you to understand this is not my sentiment toward AI as a whole: I understand why the idea is appealing, how it could be useful, and why in some ways it may seem inevitable.

But a lot of sci-fi doesn't really address the run-up to AI; in fact, a lot of it just assumes there'll be an awakening one day. What we have right now is an unholy, squawking abomination that has been marketed to nefarious ends and never should have been trusted as far as it has. Think real hard about how it's corporations, not academia, pushing the development.

Put it out of its misery.

[–] [email protected] 8 points 14 hours ago (1 children)

How do you "destroy it"? I mean, you can download an open source model to your computer right now in like five minutes. It's not Skynet, you can't just physically blow it up.

[–] [email protected] 8 points 12 hours ago

OP asked what people wanted to happen, and even offered "destroy gen AI" as an option. I get that it isn't realistically feasible, but it's certainly within the realm of options provided for the discussion. No need to police their pie-in-the-sky dream. I'm sure they realize it's not realistic.

[–] [email protected] 9 points 14 hours ago (1 children)

I want all of the CEOs and executives that are forcing shitty AI into everything to get pancreatic cancer and die painfully in a short period of time.

Then I want all AI that is offered commercially or in commercial products to be required to verify their training data, and to be severely punished for misusing private and personal data. Copyright violations need to be punished severely, and using copyrighted works for AI training counts.

AI needs to be limited to optional products trained with properly sourced data if it is going to be used commercially. Individual implementations and use for science is perfectly fine as long as the source data is either in the public domain or from an ethically collected data set.

[–] [email protected] 8 points 14 hours ago (1 children)

I want OpenAI to collapse.

[–] [email protected] 8 points 14 hours ago (2 children)

I'm not anti-AI, but I wish the people who are would describe what they're upset about a bit more eloquently and decipherably. The environmental impact I completely agree with: making every Google search run a half-cooked beta LLM isn't the best use of the world's resources. But every time someone gets on their soapbox in the comments, it's like they don't know the first thing about the math behind it. Just figure out what you're mad about before you start an argument. It comes across as childish to me.

[–] [email protected] 8 points 14 hours ago* (last edited 14 hours ago)

But every time someone gets on their soapbox in the comments it’s like they don’t even know the first thing about the math behind it. Like just figure out what you’re mad about before you start an argument.

The math around it is unimportant, frankly. The issue with AI isn't about GANs alone; it's about the licensing of the materials used to train them and whether the companies that used those materials had proper ownership rights. Again, like the post I made, there's an easy argument that OpenAI and others never licensed the material they used to train the AI, making the whole model poisoned by copyright theft.

There are plenty of uses of such networks that are not problematic: bespoke solutions for predicting the outcomes of certain equations, or data-science uses that involve rough predictions on publicly sourced (or privately owned) statistics. The problem is that these are not the uses we call "AI" today; we're actually sleeping on much better uses of neural networks by focusing on pie-in-the-sky AGI nonsense pushed by companies peddling highly malicious, copyright-infringing products to make a quick buck on the stock market.
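To make that contrast concrete, here is the unglamorous kind of ML the comment points to: fitting a model to data you own in order to predict a numeric outcome. A sketch using scikit-learn on synthetic data (all values illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic, privately generated data: no scraped or copyrighted inputs involved.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))                   # e.g. temperature and load
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.1, 200)

model = LinearRegression().fit(X, y)
print(model.coef_)                  # recovers roughly [3.0, -1.5]
print(model.predict([[5.0, 2.0]]))  # point prediction for new inputs
```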

[–] [email protected] 6 points 13 hours ago (2 children)

It feels like we're being delivered the sort of stuff we'd consider flim-flam if a human did it, but we lap it up because the machine did it.

"Sure, boss, let me write this code (wrong) or outline this article (in a way that loses key meaning)!" If you hired a human who acted like that, we'd have them on an improvement plan in days and sacked in weeks.

[–] [email protected] 7 points 13 hours ago

Honestly, at this point I'd settle for just "AI cannot be bundled with anything else."

Neither my cell phone nor TV nor thermostat should ever have a built-in LLM "feature" that sends data to an unknown black box on somebody else's server.

(I'm all down for killing with fire and debt any model built on stolen inputs, too. OpenAI should be put in a hole so deep that they're neighbors with Napster.)

[–] [email protected] 8 points 14 hours ago (2 children)

AI models produced from copyrighted training data should need a license from the copyright holder to train using their data. This means most of the wild west land grab that is going on will not be legal. In general I'm not a huge fan of the current state of copyright at all, but that would put it on an even business footing with everything else.

I've got no idea how to fix the screeds of slop that is polluting search of all kinds now. These sorts of problems ( along the lines of email spam ) seem to be absurdly hard to fix outside of walled gardens.

[–] [email protected] 10 points 14 hours ago (3 children)

See, I'm troubled by that one because it sounds good on paper, but in practice that means that Google and Meta, who can certainly build licenses into their EULAs trivially, would become the only government-sanctioned entities who can train AI. Established corpos were actively lobbying for similar measures early on.

And of course good luck getting China to give a crap, which in that scenario would be a better outcome, maybe.

Like you, I think copyright is broken past all functionality at this point. I would very much welcome an entire reconceptualization of it to support not just specific AI regulation but regulation of big data, fair use and user generated content. We need a completely different framework at this point.

[–] [email protected] 7 points 14 hours ago (1 children)

I think the AI that helps us find/diagnose/treat diseases is great, and the model should be open to all in the medical field (opening it to everyone, I feel, would be easily abused by scammers and cause a lot of unnecessary harm; essentially, if you can't validate what it finds, you shouldn't be using it).

I'm not a fan of these next-gen IRC chatbots that have companies hammering sites all over the web to siphon up data they shouldn't be allowed to. And then pushing these bots into EVERYTHING! And like I saw a few others mention: if their bots have been trained on unauthorized data sets, they should be forced to open-source their models for the good of the people (since that is the BS reason OpenAI has been bending and breaking the rules).

[–] [email protected] 5 points 13 hours ago

I think Meta and others went open with their models as firewall protection against legal action over their blatant stealing of people's work to train with. If the models had stayed commercial and controlled within the company, they could be (probably still wouldn't be, but could be) forced to shut down or start over properly. But it's far too late now, since the models are everywhere there's a GPU running, even if they don't progress past their current state.

That being said, not much is getting done about the safety factors. Yes, they are only LLMs and not AGI, but there's commonality in not being sure what's going on inside the box and whether it's really doing what it's told. Now is the time for boundaries and research, because once something happens (LLM or AGI) it's too late. So what do I want to see happen? Heavy regulation and transparency on the leading edge of development. And stop the madness of more compute being the only solution, with its environmental effects. It might be the only solution, but companies are going that way because it's the easiest way to throw money at a problem and reap profits, which is all they care about.

[–] [email protected] 2 points 11 hours ago

License its usage.
