[–] [email protected] 26 points 1 year ago (14 children)

One thing I've started to think about for some reason is the problem of using AI to detect child porn. In order to create such a model, you need actual child porn to train it on, which raises a lot of ethical questions.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (1 children)

You absolutely do not need real CSAM in the dataset for an AI to detect it.

It's pretty genius actually: just like you can make the AI create an image with prompts, you can get prompts from an existing image.

An AI detecting CSAM would have to be trained on nudity and on children separately. If an image-to-prompts conversion results in "children" AND "nudity", it is very likely the image was of a naked child.
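The "trained on nudity and on children separately" idea is basically zero-shot concept scoring. A minimal sketch of how that could look with an off-the-shelf CLIP model is below; the checkpoint, the concept prompts, and the `concept_scores` helper are illustrative assumptions, not a description of any real filter.

```python
# Sketch of scoring two concepts ("minor" and "nudity") independently with
# a public CLIP checkpoint via Hugging Face transformers. Prompts, model
# choice, and function names are assumptions for illustration only.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def concept_scores(image: Image.Image) -> dict[str, float]:
    """Score the image against each concept pair independently."""
    scores = {}
    for concept, prompts in {
        "minor": ["a photo of a child", "a photo of an adult"],
        "nudity": ["a photo containing nudity", "a photo of a clothed person"],
    }.items():
        inputs = processor(text=prompts, images=image,
                           return_tensors="pt", padding=True)
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
        scores[concept] = probs[0, 0].item()  # probability of the first prompt
    return scores
```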

This has a high false positive rate, because non-sexual nude images of children, which quite a few parents have (like photos of their child in the bath), would also be flagged by this AI. However, the false negative rate is incredibly low.

It therefore suffices for an upload filter for social media but not for reporting to law enforcement.
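In code, that policy distinction could look something like the sketch below: the scores only gate uploads and queue human review, they never trigger an automated report. The 0.5 threshold and the reuse of the `concept_scores` helper from the previous sketch are assumptions for illustration.

```python
# High false positive rate is tolerable for an upload filter (worst case, a
# human reviews a harmless photo), but not for reporting to law enforcement.
def upload_filter_decision(scores: dict[str, float], threshold: float = 0.5) -> str:
    if scores["minor"] > threshold and scores["nudity"] > threshold:
        return "hold_for_human_review"  # block the upload, never auto-report
    return "allow"
```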

[–] [email protected] 3 points 1 year ago

This dude isn't even whining about the false positives; they're complaining that it would require a repository of CP to train the model. Which, yes, some models are certainly being trained with the real deal. But with law enforcement and tech companies already holding massive amounts of CP for legal reasons, why the fuck is there even an issue with having an AI do something with it? We already have to train human mods on what CP looks like; there is no reason it's more moral to put a human through this than a machine.
