this post was submitted on 18 May 2025
89 points (95.9% liked)

Ask Lemmy


Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

top 50 comments
[–] [email protected] 12 points 1 hour ago

Idrc about ai or whatever you want to call it. Make it all open source. Make everything an ai produces public domain. Instantly kill every billionaire who's said the phrase "ai" and redistribute their wealth.

[–] [email protected] 7 points 48 minutes ago

I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.

[–] [email protected] 6 points 54 minutes ago* (last edited 53 minutes ago)

More regulation, supervised development, laws limiting training data to be consensual.

[–] [email protected] 4 points 1 hour ago

2 chicks at the same time.

[–] [email protected] 22 points 2 hours ago

TBH, it's mostly the corporate control and misinformation/hype that's the problem. And the fact that they can require substantial energy use and are used for such trivial shit. And that that use is actively degrading people's capacity for critical thinking.

ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights.

So yeah, uh... Eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where they're only viable for non-trivial use cases.

[–] [email protected] 8 points 2 hours ago

I am largely concerned that the development and evolution of generative AI is driven by hype/consumer interests instead of academia. Companies will prioritize opportunities to profit from consumers enjoying the novelty and use the tech to increase vendor lock-in.

I would much rather see the field advanced by scientific and academic interests. Let's focus on solving problems that help everyone instead of temporarily boosting profit margins.

I believe this is similar to how CPU R&D changed course dramatically in the 90s due to the sudden popularity of PCs. We could have enjoyed 64-bit processors and SMT a decade earlier.

[–] [email protected] 5 points 1 hour ago

I want lawmakers to require proof that an AI is adhering to all laws, putting the burden of proof on the AI makers and users. And I want it to be possible to analyze all of an AI's actions on this question in court cases.

This would hopefully lead to the development of better AIs that are more transparent, and that are able to adhere to laws at all, because the current ones lack this ability.

[–] [email protected] 3 points 1 hour ago

Just mass public hangings of tech bros.

[–] [email protected] 14 points 2 hours ago

Rename it to LLMs, because that's what it is. When the hype label is gone, it won't get shoved everywhere for shits and giggles, and it will be used for stuff it's actually useful for.

[–] [email protected] 27 points 5 hours ago

Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license data (text, images, video, audio) for their models, or use material in the public domain. With that in mind, in return I'd love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.

I don't love the environmental effects but I think the carbon output of OpenAI is probably less than TikTok, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I'm not sure it's enough to really matter. I think we need bigger "what if" scenarios to handle that.

[–] [email protected] 0 points 1 hour ago

Shut it off until they figure out how to use a reasonable amount of energy and develop serious rules around it

[–] [email protected] 29 points 6 hours ago (2 children)

They have to pay for every piece of copyrighted material used in the model whenever the AI is queried.

They are only allowed to use data that people opt into providing.

[–] [email protected] 8 points 4 hours ago (2 children)

There's no way that's even feasible. Instead, AI models trained on publicly available data should be considered part of the public domain. So, any images that anyone can go and look at without a barrier in the way would be fair game, but the model would be owned by the public.

[–] [email protected] 1 points 25 minutes ago

It's only not feasible because it would kill AIs.

Large models have to steal everything from everyone to be baseline viable

[–] [email protected] 5 points 2 hours ago

There's no way that's even feasible.

It's totally feasible, just very expensive.

Either copyright doesn't exist in its current form or AI companies don't.

[–] [email protected] 4 points 5 hours ago (1 children)

What about models folks run at home?

[–] [email protected] 11 points 4 hours ago

Careful, that might require a nuanced discussion that reveals the inherent evil of capitalism and neoliberalism. Better off just ensuring that wealthy corporations can monopolize the technology and abuse artists by paying them next-to-nothing for their stolen work rather than nothing at all.

[–] [email protected] 16 points 5 hours ago

There are too many solid reasons to be upset with, well, not AI per se, but the companies that implement, market, and control the AI ecosystem and conversation, to go into in a single post. Suffice it to say, I think AI is an existential threat to humanity, mainly because of who's controlling it and who's not.

We have no regulation on AI. We have no respect for artists, writers, musicians, actors, and workers in general coming from these AI-peddling companies. We only see more and more surveillance and control over multiple aspects of our lives being consolidated around these AI companies. And even worse, we get nothing in exchange except the promise of increased productivity and quality, and that promise is a lie. AI currently gives you the wrong answer, or some half-truth, or some abomination of someone else's artwork, really really fast... that is all it does, at least for the public sector currently.

For the private sector at best it alienates people as chatbots, and at worst is being utilized to infer data for surveillance of people. The tools of technology at large are being used to suppress and obfuscate speech by whoever uses it, and AI is one tool amongst many at the disposal of these tech giants.

AI is exacerbating a knowledge crisis that was already in full swing, as both educators and students become less curious about subjects that don't inherently relate to making profits or consolidating power. And because knowledge is seen solely as a way to gather more resources/power and survive in an increasingly hostile socioeconomic climate, people will always reach for the lowest-hanging fruit to get to that goal, rather than actually knowing how to solve a problem that hasn't been solved before, or inherently understanding a problem that has been solved before, or just knowing something relatively useless because it's interesting to them.

There are too many good reasons AI is fucking shit up, and in all honesty, what people in general tout about AI is definitely just a hype cycle that will not end well for the majority of us. At the very least, we should be upset and angry about it.

Here are further resources if you didn't get enough ranting.

lemmy.world's fuck_ai community

System Crash Podcast

Tech Won't Save Us Podcast

Better Offline Podcast

[–] [email protected] 9 points 4 hours ago (3 children)

I’d like for it to be forgotten, because it’s not AI.

[–] [email protected] 6 points 2 hours ago

It is. Just not AGI.

[–] [email protected] 5 points 2 hours ago

It's AI in so far as any ML is AI.

[–] [email protected] 4 points 3 hours ago

Thank you.

It has to come from the C suite to be "AI". Otherwise it's just sparkling ML.

[–] [email protected] 80 points 7 hours ago (6 children)

If we're going pie in the sky I would want to see any models built on work they didn't obtain permission for to be shut down.

Failing that, any models built on stolen work should be released to the public for free.

[–] [email protected] 29 points 7 hours ago

This is the best solution. Also, any use of AI should have to be stated and watermarked. If they used someone's art, that artist has to be listed as a contributor and you have to get permission. Just like they do for every film, they have to give credit. This includes music, voice and visual art. I don't care if they learned it from 10,000 people, list them.

[–] [email protected] 21 points 6 hours ago

Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.

[–] [email protected] 34 points 7 hours ago (4 children)

I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.

I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.

Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

Every step of any deductive process needs to be citable and traceable.

[–] [email protected] 10 points 6 hours ago (1 children)

Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

Their creators can't even keep them from deliberately lying.

[–] [email protected] 11 points 7 hours ago

Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.

[–] [email protected] 40 points 8 hours ago (12 children)

I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That's what I want.
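For reference, a word-level Markov chain really is this simple: it just samples the next word from observed next-word frequencies, with no model of meaning. A rough illustrative sketch (not how actual LLMs work, but it's the kind of statistical mimicry the comparison is pointing at):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each `order`-word prefix to the list of words seen after it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=10, seed=42):
    # Start from a random observed prefix, then repeatedly sample a successor.
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break  # dead end: this prefix never appeared mid-text
        out.append(rng.choice(followers))
    return " ".join(out)
```

No grammar, no facts, no intent; just next-token statistics over whatever text it was fed.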

[–] [email protected] 11 points 6 hours ago

Stop selling it at a loss.

When each ugly picture costs $1.75, and every needless summary or expansion costs 59 cents, nobody's going to want it.

[–] [email protected] 2 points 4 hours ago

License its usage.

[–] [email protected] 14 points 7 hours ago

Training data needs to be 100% traceable and licensed appropriately.

Energy usage involved in training and running the model needs to be 100% traceable and some minimum % of renewable (if not 100%).

Any model whose training includes data in the public domain should itself become public domain.

And while we're at it, we should look into deliberately taking more time at lower clock speeds to try to reduce or eliminate the water used to cool these facilities.

[–] [email protected] 18 points 7 hours ago

Part of what makes me so annoyed is that there's no realistic scenario I can think of that would feel like a good outcome.

Emphasis on realistic, before anyone describes some insane turn of events.

[–] [email protected] 11 points 7 hours ago (2 children)

If we're talking realm of pure fantasy: destroy it.

I want you to understand this is not my sentiment toward AI as a whole. I understand why the idea is appealing, how it could be useful, and in some ways it may seem inevitable.

But a lot of sci-fi doesn't really address the run up to AI, in fact a lot of it just kind of assumes there'll be an awakening one day. What we have right now is an unholy, squawking abomination that has been marketed to nefarious ends and never should have been trusted as far as it has. Think real hard about how corporations are pushing the development and not academia.

Put it out of its misery.

[–] [email protected] 8 points 7 hours ago (1 children)

I want all of the CEOs and executives that are forcing shitty AI into everything to get pancreatic cancer and die painfully in a short period of time.

Then I want all AI that is offered commercially or in commercial products to be required to verify their training data, and to be severely punished for misusing private and personal data. Copyright violations need to be punished severely, and using copyrighted works for AI training counts.

AI needs to be limited to optional products trained with properly sourced data if it is going to be used commercially. Individual implementations and use for science is perfectly fine as long as the source data is either in the public domain or from an ethically collected data set.
