this post was submitted on 18 May 2025
158 points (94.9% liked)

Ask Lemmy


Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

top 50 comments
[–] [email protected] 8 points 1 hour ago

Long, long before this AI craze began, I was warning people as a young 20-something political activist that we needed to push for Universal Basic Income, because the inevitable march of technology would mean that labor itself would become irrelevant in time, and that we needed to hash out a system to maintain the dignity of every person now rather than wait until the system is stressed beyond its ability to cope with massive layoffs and entire industries taken over by automation/AI. When the ability of the average person to sell their labor becomes fundamentally compromised, capitalism will collapse in on itself. I'm neither pro- nor anti-capitalist, but people have to acknowledge that nearly all of Western society is based on capitalism, and if capitalism collapses, then society itself is in jeopardy.

I was called alarmist and told that such a thing was a long way away, that we didn't need "socialism" in this country, and that it was more important to maintain the senseless drudgery of the 40-hour work week for the sake of keeping people occupied with work, but not necessarily fulfilled, because the alternative would not make the line go up.

Now, over a decade later, generative AI has completely infiltrated almost all creative spaces, nobody except tech bros and C-suite executives is excited about that, and we still don't have a safety net in place.

Understand this: I do not hate the idea of AI. I was a huge advocate of AI, as a matter of fact. I was confident that the gradual progression and improvement of technology would be the catalyst that could free us from the shackles of the 9-to-5 career. When I was a teenager, there was this little program you could run on your computer called Folding@Home. It was basically a number-crunching engine that used your GPU to fold proteins, and the data was sent to researchers studying various diseases. It was a way for my online friends and me to flex how good our PC specs were by the number of folds we could complete in a given time frame, and we got to contribute to a good cause at the same time. These days, they use AI for that sort of thing, and that's fucking awesome. That's what I hope to see AI do more of: take the rote, laborious, time-consuming tasks that would take one or more human beings a lifetime to accomplish using conventional tools, and have the machine assist in compiling and sifting through the data to find the most important aspects. I want to see more of that.

I think there's a meme floating around that really sums it up for me. Paraphrasing, but it goes: "I thought that AI would do the dishes and fold my laundry so I could have more time for art and writing, but instead AI is doing all my art and writing so I have time to fold clothes and wash dishes."

I think generative AI is both flawed and damaging, and it gives AI as a whole a bad reputation because generative AI is what the consumer gets to see, and not the AI that is being used as a tool to help people make their lives easier.

Speaking of that, I also take issue with the fact that we are more productive than ever before, and AI will only continue to improve that productivity margin, but workers and laborers across the country will never see a dime of compensation for it. People might be able to do the work of two or even three people with the help of AI assistants, but they certainly will never get the salary of three people, and it means that two of those three people probably don't have a job anymore if demand doesn't increase proportionally.

I want to see regulations on AI. Will this slow down the development and advancement of AI? Almost certainly, but we've already seen the chaos that unfettered AI can cause to entire industries. It's a small price to pay to ask that AI companies prove that they are being ethical, that their work will not damage the livelihood of other people, and that their success will not be built on the backs of other creative endeavors.

[–] [email protected] 1 points 1 hour ago

I think it's important to figure out what you mean by AI.

I'm thinking a majority of people here are talking about LLMs, BUT there are other AIs that have been quietly worked on that are finally making huge strides.

AI that can produce songs (Suno) and replicate voices. AI that can reproduce a face from one picture (there's a couple of GitHub repos out there). When it comes to the above, we are dealing with copyright-infringement AI, specifically designed and trained on other people's work. If we really do have laws coming into place that will deregulate AI, then I say we go all in: open source everything (or as much as possible), make it so it's trained on all company-specific info, and let anyone run it. I have a feeling we can't put the genie back in the bottle.

If we're talking pie-in-the-sky solutions, I would like a new iteration of the web. One that specifically makes it difficult or outright impossible to pull into AI. Something like onion routing, where it only accepts real nodes/people when ingesting data.

[–] RandomVideos 1 points 1 hour ago* (last edited 1 hour ago)

It would be amazing if chat and text generation suddenly disappeared, but that's not going to happen.

It would be cool to make it illegal not to mark AI-generated images or text, and to keep them from being forced on us.

[–] [email protected] 6 points 2 hours ago

My issue is that the C-levels and executives see it as a way of eliminating one of their biggest costs: labour.

They want their educated labour reduced by three quarters. They want me doing the jobs of 4 people with the help of AI, and they want to pay me less than they already are.

What I would like is a universal basic income paid for by taxing the shit out of the rich.

[–] [email protected] 14 points 4 hours ago

I'm not against it as a technology. I use it for my personal use, as a toy, to have some fun or whatever.

But what I despise is the forced introduction into everything: AI-written articles and forced AI assistants in many unrelated apps. That's what I want to disappear: how they force it into so many places.

[–] [email protected] 6 points 3 hours ago

Not destroying but being real about it.

It's flawed as hell and feels like a hype to save big tech companies, while the end user gets a shitty product. But companies keep shoving it into apps and everything, even when it degrades the user experience (like Duolingo).

Also, yes, there need to be laws for that. I mean, if I download something illegally, I'll be put behind bars and can kiss my life goodbye. If a megacorp does the same to train their LLM, "it's for the greater good." That's bullshit.

[–] [email protected] 0 points 1 hour ago* (last edited 1 hour ago)

Our current 'AI' is not AI. It is not.

It is a corporate entity to shirk labor costs and lie to the public.

It is an algorithm designed to lie and the shills who made it are soulless liars, too.

It only exists for corporations and people to cut corners and think they did it right because of the lies.

And again, it is NOT artificial intelligence by the standard I hold to myself.

And it pisses me off to no fucking end.

I personally would love an AI personal assistant that wasn't tied to a corporation listening to every fkin thing I say or do. I would absolutely love it.

I'm a huge sci-fi fan, so sure, I fear it to a degree. But, if I'm being honest, AI would be amazing if it could analyze how I learned math wrong as a kid and provide ways to fix it. It would be amazing if it could help me routinely create schedules for exercise and food, and grocery lists with steps to cook, and show how all of those combine to affect my body. It would be fantastic if it could point me to novels and have a critical debate about their inner workings, with a setting for being a contrarian or not, so I can seek to deeply understand the novels.

That sounds like what our current state of AI offers, right? No. The current state is a lying machine. It cannot have critical thought. Sure, it can give me a schedule of food/exercise, but it might tell me I need to lift 400 lbs and eat a thousand turkeys to meet a goal of being 0.02 grams heavy. It might tell me 5+7 equals 547,032.

It doesn't know what the fuck it's talking about!

Like, ultimately, I want a machine friend who pushes me to better myself and helps me understand my own shortcomings.

I don't want a lying brick bullshit machine that gives me all the answers, but they're all wrong because it's just a guesswork framework built on "what's the next best word?"

Edit: and don't even get me fucking started on the shady practices of stealing art. Those bastards trained it on people's hard work and are selling it as their own. And it can't even do it right, yet people are still buying it and using it at every turn. I don't want to see another shitty doodle with 8 fingers and overly contrasted bullshit in an ad or in a video game. I don't want to ever hear that fucking computer voice on YouTube again. I stopped using shortform videos because of how fucking annoying that voice is. It's low effort nonsense and infuriates the hell out of me.

[–] [email protected] 12 points 5 hours ago* (last edited 5 hours ago) (4 children)

First of all, stop calling it AI. It is just large language models for the most part.

Second: an immediate carbon tax on the energy consumption of datacenters, in line with the current damage expectations for emissions. That would be around $400/tCO2, iirc.

Third: make it obligatory by law to provide disclaimers about what it is actually doing. So if someone asks "Is my partner cheating on me?", the first message should be: "This tool does not understand what is real and what is false. It has no actual knowledge of anything, in particular not of your personal situation. This tool just puts words together that seem likely to belong together. It cannot give any personal advice and cannot be used for any knowledge gain. This tool is solely to be used for entertainment purposes. If you use the answers of this tool in any dangerous way, such as for designing machinery, operating machinery, or making financial decisions, you are liable for it yourself."
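As a sketch of how a mandate like this could work mechanically: wrap whatever generation function a service uses so every reply leads with the required disclaimer. Everything here (`with_disclaimer`, `fake_model`) is a hypothetical stand-in, not any real API:

```python
DISCLAIMER = (
    "This tool does not understand what is real and what is false. "
    "It just puts words together that seem likely to belong together. "
    "For entertainment purposes only."
)

def with_disclaimer(generate):
    """Wrap a (hypothetical) text-generation function so every
    reply is prefixed with the mandated disclaimer."""
    def wrapped(prompt):
        return f"{DISCLAIMER}\n\n{generate(prompt)}"
    return wrapped

# A dummy model that just echoes the prompt, standing in for a real one.
fake_model = with_disclaimer(lambda prompt: f"Echo: {prompt}")
reply = fake_model("Is my partner cheating on me?")
```

The point of the decorator shape is that the disclaimer cannot be forgotten on any one code path; it is applied wherever the model is called.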

[–] [email protected] 5 points 4 hours ago

It absolutely can be used for knowledge gain; it just depends what you're trying to learn. For example, they excel at teaching languages. I speak 3 languages: my mother tongue Persian, English for business/most things, and Spanish because of where I live now. Using an LLM to teach myself French has been easier than Duolingo ever managed.

[–] [email protected] 5 points 4 hours ago* (last edited 4 hours ago)

Agreed, LLMs for mass consumption should come with some disclaimer.

[–] [email protected] 1 points 3 hours ago

"This tool is exclusively built to respond to your chats the way a person would. This includes claiming it knows things regardless of whether it actually does. Its knowledge is limited to its 'training' process."

[–] [email protected] 0 points 3 hours ago (1 children)

First of all stop calling it AI. It is just large language models for the most part.

Leave it to the anti-AI people to show their misunderstandings fast and early. LLMs are AIs; they're just not general AIs.

[–] RandomVideos 1 points 1 hour ago

The pathfinding systems of most games with enemies are also AI. It's a generic term.

When someone thinks of the word "AI", they probably think of a sentient computer, not a lot of math. That's causing confusion about how it works and what it can do.
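To illustrate the point that game "AI" is often just plain search, here is a minimal breadth-first pathfinding sketch on a grid. (Real games more commonly use A*, but the idea is the same: no sentience, just bookkeeping.)

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search on a grid of 0 (open) and 1 (wall).
    Returns the shortest list of (row, col) steps, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while queue:
        cur = queue.popleft()
        if cur == goal:
            # Walk the chain of predecessors back to the start.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                queue.append((nr, nc))
    return None  # goal unreachable

# A 3x3 map with a wall down the middle column.
grid = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
path = bfs_path(grid, (0, 0), (0, 2))  # route around the wall
```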

[–] [email protected] 12 points 5 hours ago* (last edited 5 hours ago)

For it to go away just like Web 3.0 and NFTs did. Stop cramming it up our asses in every website and application. Make it opt-in instead of, maybe if you're lucky, opt-out. And also, stop burning down the planet with datacenter power and water usage. That's all.

Edit: Oh yeah, and get sued into oblivion for stealing every copyrighted work known to man. That too.

Edit 2: And the tech press should be ashamed of how much they've been fawning over these slop generators. They gladly parrot press releases, claim it's the next big thing, and generally just suckle at the teat of AI companies.

[–] [email protected] 1 points 2 hours ago

I'm beyond the idea that there could or would be any worldwide movement against AI, or much of anything, if we're comparing it to healthcare, welfare, and education reform. People are tuned out and numb.

[–] [email protected] 18 points 5 hours ago

I do not need AI and I do not want AI; I want to see it regulated to the point that it becomes severely unprofitable. The world is burning and we are heading face-first toward a climate catastrophe (if we're not already there). We DON'T need machines to mass-produce slop.

[–] [email protected] 19 points 6 hours ago* (last edited 5 hours ago) (1 children)

What do I really want?

Stop fucking jamming it up the arse of everything imaginable. If you asked for a genie wish: make it illegal to be anything but opt-in.

[–] [email protected] 5 points 6 hours ago

I think it's just a matter of time before it starts being removed from places where it just isn't useful. For now companies are just throwing it at everything to see what sticks. WhatsApp and JustEat added AI features and I have no idea why or how it could be used for those services and I can't imagine people using them.

[–] [email protected] 3 points 4 hours ago

I'm not a fan of AI because I think the premise of analyzing and absorbing work without consent from its creators is, at its core, bullshit.

I also think that AI is another step into government spying in a more efficient manner.

Since AI learns from human content without consent, I think the government should figure out how to socialize the profits. (Probably will never happen.)

Also, they should regulate how data is stored and ensure videos are clearly labeled if made with AI.

They also have to be careful to protect victims from revenge porn or generated content, and make sure people are held accountable.

[–] [email protected] 8 points 5 hours ago

Gen AI should be an optional tool to help us improve our work and life, not an unavoidable subscription service that makes it all worse and makes us dumber in the process.

[–] [email protected] 10 points 6 hours ago (1 children)

Destroy capitalism. That's the issue here. All AI fears stem from that.

[–] [email protected] 6 points 5 hours ago (1 children)
  • Trained on stolen ideas: ✅
  • replacing humans who have little to no safety net while enriching an owner class: ✅
  • disregard for resource allocation, use, and pollution in the pursuit of profit: ✅
  • being forced into everything so as to become unavoidable and foster dependence: ✅

Hey wow look at that, capitalism is the fucking problem again!

God we are such pathetic gamblemonkeys, we cannot get it together.

[–] [email protected] 2 points 4 hours ago (1 children)

There is no such thing as stolen ideas

[–] [email protected] 0 points 4 hours ago

Maybe, maybe not, but call it stolen creations instead; the point is still valid.

[–] [email protected] 10 points 7 hours ago

(Ignoring all the stolen work to train the models for a minute)

It's got its uses and potential: things like translations, writing prompts, or use as a research tool.

But all these products force it into places that clearly do not need it, solving problems that could be solved by two or three steps of logic.

The failed attempts at replacing jobs, screening resumes, or monitoring employees are terrible.

Lastly, the AI relationships are not good.

[–] [email protected] 7 points 6 hours ago

Ruin the marketing. I want them to stop using the catch-all term AI and use the appropriate terminology: narrow AI. It needs input, so let's stop making up fantasies about AI; it's bullshit, in truth.

[–] [email protected] 13 points 8 hours ago

Make AIs open source by law.

[–] [email protected] 9 points 7 hours ago

The most popular models used online need to include citations for everything. They can be used to automate some white-collar/knowledge work, but need to be scrutinized heavily by independent thinkers when used to try to predict trends and future events.

As always, schools need to be better at teaching critical thinking, epistemology, and emotional intelligence way earlier than we currently do, and AI shows that rote subject matter is a dated way to learn.

When artists create art, there should be some standardized seal, signature, or verification that the artist did not use AI or used it only supplementally on the side. This would work on the honor system and just constitute a scandal if the artist is eventually outed as having faked their craft. (Think finding out the handmade furniture you bought was actually made in a Vietnamese factory. The seller should merely have their reputation tarnished.)

Overall I see AI as the next step in search-engine synthesis; info just needs to be properly credited to the original researchers and verified against other sources by the user. No different from Google or Wikipedia.

[–] [email protected] 18 points 9 hours ago (1 children)

Regulate its energy consumption and emissions. As a whole, the entire AI industry. Any energy or emissions in effort to develop, train, or operate AI should be limited.

If AI is here to stay, we must regulate what slice of the planet we're willing to give it. I mean, AI is cool and all, and it's been really fascinating watching how quickly these algorithms have progressed. Not to oversimplify it, but a complex Markov chain isn't really worth the energy consumption that it currently requires.
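As an aside on the "Markov chain" comparison above: a classic Markov-chain text generator just samples the next word from counts of which words followed the current one in its training text. A toy sketch (illustrative only; modern LLMs are far more complex, which is exactly the comment's point about their energy cost):

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, every word observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length, seed=0):
    """Walk the chain: repeatedly pick a random observed successor.
    Repeated successors are naturally picked more often."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        successors = follows.get(out[-1])
        if not successors:
            break  # dead end: this word was never followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
sample = generate(model, "the", 5)
```

Every generated word is guaranteed to be a word-pair seen in training; the chain has no notion of meaning, only of adjacency.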

A strict regulation now, would be a leg up in preventing any rogue AI, or runaway algorithms that would just consume energy to the detriment of life. We need a hand on the plug. Capitalism can't be trusted to self regulate. Just look at the energy grabs all the big AI companies have been doing already (xAI's datacenter, Amazon and Google's investments into nuclear). It's going to get worse. They'll just keep feeding it more and more energy. Gutting the planet to feed the machine, so people can generate sexy cat girlfriends and cheat in their essays.

We should be funding efforts to use AI more for medical research: protein folding, developing new medicines, predicting weather, communicating with nature, exploring space. We're thinking too small. AI needs to make us better. With how much energy we throw at it, we should be seeing something positive out of that investment.

[–] [email protected] 2 points 4 hours ago

These companies investing in nuclear is the only good thing about it. Nuclear power is our best, cleanest option to supplement renewables like solar and wind, and it has the ability to pick up the slack when the variable power generation doesn't meet the variable demand. If we can trick those mega-companies into lobbying the government to allow nuclear fuel recycling, we'll be all set to ditch fossil fuels fairly quickly. (provided they also lobby to streamline the permitting process and reverse the DOGE gutting of the government agency that provides all of the startup loans used for nuclear power plants.)

[–] [email protected] 27 points 10 hours ago (1 children)

The technology side of generative AI is fine. It's interesting and promising technology.

The business side sucks, and the AI companies are just the latest continuation of the tech grift: trying to squeeze as much money as possible from the latest hyped tech, laws and social or environmental impact be damned.

We need legislation to catch up. We also need society to be able to catch up. We can't let the AI bros continue to foist more "helpful tools" on us, grab the money, and then just watch as it turns out to be damaging in unpredictable ways.

[–] [email protected] 10 points 8 hours ago

Shutting these "AIs" down. The ones out for the public don't help anyone. They do more damage than they're worth.

[–] [email protected] 12 points 10 hours ago* (last edited 10 hours ago) (1 children)

I'm perfectly ok with AI, I think it should be used for the advancement of humanity. However, 90% of popular AI is unethical BS that serves the 1%. But to detect spoiled food or cancer cells? Yes please!

It needs extensive regulation, but doing so requires tech-literate politicians who actually care about their constituents. I'd say that'll happen when pigs fly, but police choppers exist, so idk.

[–] [email protected] 22 points 12 hours ago (1 children)

Idrc about AI or whatever you want to call it. Make it all open source. Make everything an AI produces public domain. Instantly kill every billionaire who's said the phrase "AI" and redistribute their wealth.

[–] [email protected] 1 points 3 hours ago

Ya know what? Forget the AI criteria; let's just have this for all billionaires.

[–] [email protected] 18 points 12 hours ago (1 children)

I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.

[–] [email protected] 29 points 14 hours ago

TBH, it's mostly the corporate control and misinformation/hype that's the problem. And the fact that they can require substantial energy use and are used for such trivial shit. And that that use is actively degrading people's capacity for critical thinking.

ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights.

So yeah, uh... eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where it's only viable for non-trivial use cases.

[–] [email protected] 9 points 11 hours ago

I don't dislike AI; I dislike capitalism. Blaming the technology is like blaming the symptom instead of the disease. AI just happens to be the perfect tool to accelerate it.
