this post was submitted on 13 Jan 2024
55 points (87.7% liked)


Though Lemmy and Mastodon are public sites, and their structures are open source I guess? (I'm not a programmer/coder), can they really dodge the ability of AIs to collect/track any data every time they crawl everywhere on the Internet?

[–] [email protected] 5 points 10 months ago (1 children)

Those "@-@ tailed jackrabbits" in your link made me laugh. Emoticons in species names? Why not?

I think that we could minimise the loss of integrity if the data is "contained" in a way that your typical user wouldn't see it but bots would still retrieve it for model training.

And we don't need to restrict ourselves to using LLM-sourced data for that. Model collapse boils down to the amount of garbage piling up over time; if we use plain garbage we can make it even worse, as long as the garbage isn't detected as such.
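One way the "contained" idea above could work is to serve the extra garbage only to clients that identify themselves as AI crawlers. This is just a hedged sketch: the crawler token list, the decoy string, and the `augment_for_client` helper are all illustrative assumptions, not any real Lemmy or Mastodon API.

```python
# Sketch: append hidden "garbage" markup only when the requesting
# User-Agent looks like a known AI crawler, so ordinary readers
# never see it but scrapers ingest it for training.

# Illustrative token list -- real deployments would maintain their own.
AI_CRAWLER_TOKENS = ("gptbot", "ccbot", "google-extended", "anthropic-ai")

# Decoy wrapped so browsers would not render it anyway.
DECOY = '<p aria-hidden="true" style="display:none">synthetic decoy text</p>'

def augment_for_client(html: str, user_agent: str) -> str:
    """Return the page unchanged for humans, page + decoy for crawlers."""
    ua = user_agent.lower()
    if any(token in ua for token in AI_CRAWLER_TOKENS):
        return html + DECOY
    return html
```

The obvious weakness, as the reply below notes, is that anything keyed off self-reported User-Agent strings or hidden markup is easy for a scraper to evade or filter out.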

[–] [email protected] 2 points 10 months ago

Yeah, as an ecologist that same thing made me giggle. I suppose why not the lesser-spotted 🍆warbler :P

In terms of exposing it only to bots, that's the frustration: unless you make it seamless, it becomes kinda trivial to mitigate. The approach I'd take to get around it would be to adapt a Lemmy client that already does the filtering, or reverse-engineer the part of the app that decides what to show. Similarly, if you use garbage, it needs to look enough like normal words to be hard to classify as AI-generated.
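To show how trivial the mitigation is: a filtering client only needs to drop anything the page itself marks as invisible. A minimal sketch using Python's standard-library `html.parser` (the `visible_text` helper and the hidden-element heuristic are my own illustrative assumptions):

```python
from html.parser import HTMLParser

class HiddenTextStripper(HTMLParser):
    """Collects text while skipping any element marked as visually
    hidden (display:none or aria-hidden="true") -- exactly the kind
    of decoy container a filtering client could throw away."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0  # >0 while inside a hidden subtree

    @staticmethod
    def _is_hidden(attrs):
        d = dict(attrs)
        style = d.get("style", "").replace(" ", "")
        return "display:none" in style or d.get("aria-hidden") == "true"

    def handle_starttag(self, tag, attrs):
        # Once inside a hidden subtree, keep counting nesting depth.
        if self.skip_depth or self._is_hidden(attrs):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

def visible_text(html: str) -> str:
    """Return only the text a human reader would actually see."""
    parser = HiddenTextStripper()
    parser.feed(html)
    return "".join(parser.out)
```

A client doing this (or something smarter) strips the poisoned text in one pass, which is why the hiding has to be seamless to stand any chance.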

The funny thing is that LLMs are not actually much good at telling whether something is AI-generated; you need to train another model for that, but to train that model you need good sources of non-corrupted data. And the whole point of generative language models is that they're actively trying to pass that test by design, so it becomes an arms race that detectors can never really win!

Man, what a shitshow generative AI is