this post was submitted on 18 May 2025
126 points (94.4% liked)

Ask Lemmy


Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement new laws? Hope that all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

[–] [email protected] 40 points 13 hours ago (4 children)

I want real, legally binding regulation that's completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how it is regulated.

I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.

Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

Every step of any deductive process needs to be citable and traceable.
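
For illustration only, here is a rough sketch (in Python) of what a "citable and traceable" answer could look like as a data structure. Every name and value below is invented for the example; no current LLM API returns anything like this.

```python
# Hypothetical sketch only -- no real LLM API looks like this today.
# It shows the *shape* of an answer in which every claim is tied to a source.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    source_url: str      # where the claim can be verified
    confidence: float    # model-reported confidence, 0.0 to 1.0

@dataclass
class TraceableAnswer:
    question: str
    claims: list[Claim] = field(default_factory=list)
    abstained: bool = False   # True means "I don't know" rather than a guess

def render(answer: TraceableAnswer) -> str:
    if answer.abstained or not answer.claims:
        return "I don't know."
    return "\n".join(f"{c.text} [{c.source_url}] (confidence {c.confidence:.2f})"
                     for c in answer.claims)

# Example of an answer that would satisfy the "citable and traceable" demand.
print(render(TraceableAnswer(
    question="When was the first transistor demonstrated?",
    claims=[Claim("The first working transistor was demonstrated at Bell Labs in 1947.",
                  "https://en.wikipedia.org/wiki/Transistor", 0.97)],
)))
```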

[–] [email protected] 1 points 4 hours ago

This is awesome! The citing and tracing are already improving. I feel like "no hallucinations" is gonna be a while, though.

How does it all get enforced? FTC? How does this become reality?

[–] [email protected] 13 points 12 hours ago (1 children)

> Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

Their creators can't even keep them from deliberately lying.

[–] [email protected] 12 points 12 hours ago

Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.
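
For a sense of what that reporting would let people do, here is a back-of-the-envelope calculation; all of the numbers are invented placeholders, not real figures for any model.

```python
# Back-of-the-envelope arithmetic this kind of reporting would enable.
# Every number here is invented for the example -- not a real measurement.
training_energy_kwh = 50_000_000        # assumed one-time training energy
lifetime_queries = 10_000_000_000       # assumed number of queries served
energy_per_query_kwh = 0.003            # assumed incremental energy per query

amortized_training_kwh = training_energy_kwh / lifetime_queries
total_per_query_kwh = energy_per_query_kwh + amortized_training_kwh

print(f"Amortized training energy per query: {amortized_training_kwh * 1000:.1f} Wh")
print(f"Total energy per query:              {total_per_query_kwh * 1000:.1f} Wh")
```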

[–] [email protected] 5 points 12 hours ago (1 children)

> ... I want clear evidence that the LLM ... will never hallucinate or make something up.

Nothing else you listed matters: that one requirement reduces to "ban all generative AI". Actually, it's worse than that: it's "ban all machine learning models".

[–] [email protected] 4 points 6 hours ago* (last edited 6 hours ago) (1 children)

If "they have to use good data and actually fact check what they say to people" kills "all machine leaning models" then it's a death they deserve.

The fact is that you can do the above; it's just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to answer far fewer questions) than pretending to be the "answer to everything machine."
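
As a toy sketch of that trade-off: answer only when a vetted source backs the claim, otherwise admit ignorance. Everything here, including the tiny "trusted sources" table and the keyword matching, is a stand-in for real curation and validation, not how any production system works.

```python
# Toy illustration of "answer only from vetted data, otherwise admit ignorance".
# The source table and the keyword matching are stand-ins, not a real system.
TRUSTED_SOURCES = {
    "boiling point of water": ("100 °C at 1 atm",
                               "CRC Handbook of Chemistry and Physics"),
    "speed of light": ("299,792,458 m/s", "SI definition of the metre"),
}

def answer(question: str) -> str:
    q = question.lower()
    for topic, (fact, source) in TRUSTED_SOURCES.items():
        if topic in q:
            return f"{fact} (source: {source})"
    # Far fewer questions get answered, but nothing is made up.
    return "I don't have a vetted source for that, so I won't guess."

print(answer("What is the boiling point of water?"))
print(answer("Who will win the next election?"))
```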

[–] [email protected] 2 points 55 minutes ago

The way generative AI works means that no matter how good the data is, it's still gonna bullshit and lie; it won't "know" whether it knows something or not. It's a chaotic process, and no ML algorithm has ever produced 100% correct results.
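
A tiny illustration of why, with made-up logits: the final step of generation is sampling from a probability distribution over tokens, and sampling always returns an answer, whether or not the model has any grounding for it.

```python
# The last step of generation is sampling from a probability distribution over
# tokens. That distribution always produces *something* -- there is no built-in
# "I genuinely don't know" outcome.
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["Paris", "London", "Berlin", "Madrid"]
logits = [2.1, 1.9, 1.7, 1.2]   # invented values, not from a real model

probs = softmax(logits)
choice = random.choices(tokens, weights=probs, k=1)[0]
print({t: round(p, 3) for t, p in zip(tokens, probs)}, "->", choice)
```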