Sadly, you cannot. If you have a platform that's open for everyone to participate in, that includes bad actors.
You could attempt to mitigate this by filling communities with bots that just create LLM content, so that when they scrape the data they can't tell whether it's human or not. That would hurt their dataset.
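For illustration, a minimal sketch of what that kind of bot-flooding could look like, using a small local model through Hugging Face's `transformers` pipeline; the model name, prompts, and the posting step are placeholders, not anything from this thread:

```python
# Sketch: generate filler posts with a small local model to flood a community.
# "gpt2" and the prompts below are stand-ins; a real bot would also have to
# submit the output through the platform's API, which is omitted here.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)

prompts = [
    "My favourite thing about self-hosting is",
    "The weirdest bug I ever fixed was",
]

for prompt in prompts:
    out = generator(prompt, max_new_tokens=60, num_return_sequences=1)
    fake_post = out[0]["generated_text"]
    print(fake_post)  # a real bot would post this to the community instead
```

Scrapers pulling the community's text would then pick up these synthetic posts alongside the human-written ones.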
It would only be a matter of time before they could distinguish between good and bad data; there are already AI models that can do just that. I'd like to do something like that on GitHub though :P
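As a rough sketch of that detection side, assuming an off-the-shelf classifier; the model name below is one publicly available GPT-2 output detector and is used purely as an example, with no guarantee of accuracy on modern LLM output:

```python
# Sketch: classify text as human-written vs. machine-generated using an
# off-the-shelf detector model from the Hugging Face hub (example only).
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

samples = [
    "Spent the weekend repotting tomatoes and arguing with the cat.",
    "As an AI language model, I can provide a comprehensive overview of tomatoes.",
]

for text in samples:
    result = detector(text)[0]  # dict with 'label' and 'score' keys
    print(f"{result['label']}\t{result['score']:.2f}\t{text[:50]}")
```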
It's kind of moot. If you have the capability to distinguish good training data from bad, you no longer need the training data: a model that can judge quality that well has little left to learn from it.
And quite frankly, we would be at general-AI levels of technology at that point. It'll come eventually, but not for a while; a good long while.