Make AIs open source by law.
(Ignoring all the stolen work to train the models for a minute)
It's got its uses and potential: things like translation, writing prompts, or research assistance.
But then there are all the products that force it into places that clearly do not need it, solving problems that could be solved by two or three steps of logic.
The failed attempts at replacing jobs, screening resumes, or monitoring employees are terrible.
Lastly, the AI relationships are not good.
Shut these "AI"s down. The ones out there for the public don't help anyone. They do more damage than they're worth.
The most popular models used online need to include citations for everything. AI can be used to automate some white-collar/knowledge work, but it needs to be scrutinized heavily by independent thinkers when it's used to try to predict trends and future events.
As always, schools need to be better at teaching critical thinking, epistemology, and emotional intelligence way earlier than we currently do, and AI shows that rote subject matter is a dated way to learn.
When artists create art, there should be some standardized seal, signature, or verification that the artist did not use AI or used it only supplementally on the side. This would work on the honor system and just constitute a scandal if the artist is eventually outed as having faked their craft. (Think finding out the handmade furniture you bought was actually made in a Vietnamese factory. The seller should merely have their reputation tarnished.)
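For illustration only, here is a minimal sketch of what a machine-checkable version of such a seal could look like, using an ordinary Ed25519 digital signature. This is a hypothetical design, not an existing standard; the function names and file name are made up, and it assumes the third-party Python `cryptography` package:

```python
# Hypothetical sketch of an "artist's seal" as a digital signature.
# The signature only proves WHO made the "no AI was used" claim;
# whether the claim is true still rests on the honor system described above.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def seal_artwork(private_key: Ed25519PrivateKey, artwork: bytes) -> bytes:
    """Sign a hash of the finished work as the artist's attestation."""
    return private_key.sign(hashlib.sha256(artwork).digest())


def verify_seal(public_key: Ed25519PublicKey, artwork: bytes, seal: bytes) -> bool:
    """Check a seal against the artist's published public key."""
    try:
        public_key.verify(seal, hashlib.sha256(artwork).digest())
        return True
    except InvalidSignature:
        return False


# Usage (hypothetical file name):
if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    with open("painting.png", "rb") as f:
        artwork = f.read()
    seal = seal_artwork(key, artwork)
    assert verify_seal(key.public_key(), artwork, seal)
```

A scheme like this would still work exactly as the comment proposes: the signature binds the "no AI" claim to the artist's identity, and reputation (the scandal if they're outed) does the actual enforcement.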
Overall, I see AI as the next step in search engine synthesis; the info just needs to be properly credited to the original researchers and verified against other sources by the user. No different from Google or Wikipedia.
Regulate its energy consumption and emissions, across the entire AI industry as a whole. Any energy used or emissions produced to develop, train, or operate AI should be limited.
If AI is here to stay, we must regulate what slice of the planet we're willing to give it. I mean, AI is cool and all, and it's been really fascinating watching how quickly these algorithms have progressed. Not to oversimplify it, but a complex Markov chain isn't really worth the energy consumption that it currently requires.
Strict regulation now would be a leg up in preventing any rogue AI or runaway algorithms that would just consume energy to the detriment of life. We need a hand on the plug. Capitalism can't be trusted to self-regulate. Just look at the energy grabs all the big AI companies have been doing already (xAI's datacenter, Amazon's and Google's investments in nuclear). It's going to get worse. They'll just keep feeding it more and more energy, gutting the planet to feed the machine, so people can generate sexy cat girlfriends and cheat on their essays.
We should be funding efforts to use AI more for medical research: protein folding, developing new medicines, predicting weather, communicating with nature, exploring space. We're thinking too small. AI needs to make us better. With how much energy we throw at it, we should be seeing something positive out of that investment.
The technology side of generative AI is fine. It's interesting and promising technology.
The business side sucks, and the AI companies are just the latest continuation of the tech grift: trying to squeeze as much money as possible out of the latest hyped tech, laws and social or environmental impact be damned.
We need legislation to catch up. We also need society to be able to catch up. We can't let the AI bros continue to foist more "helpful tools" on us, grab the money, and then just watch as it turns out to be damaging in unpredictable ways.
I agree, but I’d take it a step further and say we need legislation to far surpass the current conditions. For instance, I think it should be governments leading the charge in this field, as a matter of societal progress and national security.
Most importantly, I wish countries would start giving a damn about the extreme power consumption caused by AI and regulate the hell out of it. Why should we have to lower our monitors' refresh rates while a ton of energy goes to useless AI agents we should be getting rid of instead?
I'm perfectly ok with AI, I think it should be used for the advancement of humanity. However, 90% of popular AI is unethical BS that serves the 1%. But to detect spoiled food or cancer cells? Yes please!
It needs extensive regulation, but doing so requires tech literate politicians who actually care about their constituents. I'd say that'll happen when pigs fly, but police choppers exist so idk
I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.
Have you tried filtering, translating, or summarizing your inbox through AI? /s
I don't dislike AI, I dislike capitalism. Blaming the technology is like blaming the symptom instead of the disease. AI just happens to be the perfect tool to accelerate it.
Idrc about AI or whatever you want to call it. Make it all open source. Make everything an AI produces public domain. Instantly kill every billionaire who's said the phrase "AI" and redistribute their wealth.
More regulation, supervised development, and laws limiting training data to what people have consented to provide.
TBH, it's mostly the corporate control and misinformation/hype that's the problem. And the fact that these models can require substantial energy use while being used for such trivial shit. And that that use is actively degrading people's capacity for critical thinking.
ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights.
So yeah, uh... Eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where use is only viable for non-trivial use cases.
Ban it until the hard problem of consciousness is solved.
I am largely concerned that the development and evolution of generative AI is driven by hype/consumer interests instead of academia. Companies will prioritize opportunities to profit from consumers enjoying the novelty and use the tech to increase vendor lock-in.
I would much rather see the field advanced by scientific and academic interests. Let's focus on solving problems that help everyone instead of temporarily boosting profit margins.
I believe this is similar to how CPU R&D changed course dramatically in the 90s due to the sudden popularity of PCs. We could have enjoyed 64-bit processors and SMT a decade earlier.
Rename it to LLMs, because that's what it is. When the hype label is gone, it won't get shoved in everywhere for shits and giggles, and it can be used for the stuff it's actually useful for.
I want lawmakers to require proof that an AI is adhering to all laws, putting the burden of proof on the AI makers and users, and to require that an AI's actions can be analyzed on this question in court cases.
This would hopefully lead to the development of better AIs that are more transparent and actually able to adhere to laws, because the current ones lack this ability.
Just mass public hangings of tech bros.
2 chicks at the same time.
Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license data (text, images, video, audio) for their models, or use material in the public domain. With that in mind, in return I'd love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.
I don't love the environmental effects, but I think the carbon output of OpenAI is probably less than TikTok's, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I'm not sure it's enough to really matter. I think we need bigger "what if" scenarios to handle that.
If we're going pie in the sky I would want to see any models built on work they didn't obtain permission for to be shut down.
Failing that, any models built on stolen work should be released to the public for free.
Definitely need copyright laws. What if everything had to be watermarked in some way, and it were illegal to use AI-generated content for commercial use unless permitted by the creators?
They should have to pay for every piece of copyrighted material used in the entire model whenever the AI is queried.
They are only allowed to use data that people opt into providing.
I would make a case for the creation of datasets by an international institution like UNESCO. The data used would be representative of world culture, and creation of the datasets would have to be sponsored by whoever wants to build models from them, so that licensing fees can be paid to creators. If you wanted to make your mark on global culture, you would have an incentive to offer training data to UNESCO.
I know, that would be idealistic and fair to everyone. No way this would fly in our age.
This definitely relates to moral concerns. Are there other examples like this of a company that is allowed to profit off of other people’s content without paying or citing them?
There are too many solid reasons to be upset with, well, not AI per se, but the companies that implement, market, and control the AI ecosystem and conversation, to go into in a single post. Suffice it to say I think AI is an existential threat to humanity, mainly because of who's controlling it and who's not.
We have no regulation on AI. We have no respect for artists, writers, musicians, actors, and workers in general coming from these AI-peddling companies. We only see more and more surveillance and control over multiple aspects of our lives being consolidated around these AI companies, and even worse, we get nothing in exchange except the promise of increased productivity and quality, and that promise is a lie. AI currently gives you the wrong answer, some half-truth, or some abomination of someone else's artwork really, really fast... that is all it does, at least for the public sector currently.
In the private sector, at best it alienates people as chatbots, and at worst it is being used to infer data for surveillance of people. The tools of technology at large are being used to suppress and obfuscate speech by whoever wields them, and AI is one tool amongst many at the disposal of these tech giants.
AI is exacerbating a knowledge crisis that was already in full swing, as both educators and students become less curious about subjects that don't inherently relate to making profits or consolidating power. And because knowledge is seen solely as a way to gather more resources/power and survive in an increasingly hostile socioeconomic climate, people will always reach for the lowest-hanging fruit to get to that goal, rather than actually learning how to solve a problem that hasn't been solved before, truly understanding a problem that has, or just knowing something relatively useless because it's interesting to them.
There are too many good reasons AI is fucking shit up, and in all honesty, what people in general tout about AI is definitely just a hype cycle that will not end well for the majority of us. At the very least, we should be upset and angry about it.
Here are further resources if you didn't get enough ranting.
I love the passion! Was this always our fate? Can we not adapt like we have so many times in human history?