Follow-up question -- I'm not OP, but I'm another not-really-new developer (5 years of professional experience) who has 0 experience working with others:
I have trouble figuring out where to land on the spectrum between a "light touch" and "doing a really good job." Tl;dr: how should a contributor gauge whether to make big changes to "do it right" or to do it a little hacky just to get the job done?
For example, I wanted to make a dark mode for a site I use, so I pulled the site's repo down and got into it.
The CSS was a mess. I've done dark modes for a bunch of my own projects, and I basically just assign variables (--foreground-color, --background-color) and then swap their assignments based on the presence or absence of a ".dark-mode" class on the body tag.
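A minimal sketch of what I mean, using the variable and class names from above (the specific hex values are just placeholders):

```css
/* Default (light) theme: each color is defined once as a custom property */
:root {
  --background-color: #ffffff;
  --foreground-color: #222222;
}

/* Dark theme: same variables, swapped values, toggled by a class on <body> */
body.dark-mode {
  --background-color: #1b1b1b;
  --foreground-color: #f0f0f0;
}

/* Everything else only ever references the variables */
body {
  background: var(--background-color);
  color: var(--foreground-color);
}
```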
But the site had like 30 shades of every color, like, imperceptibly different shades of red or green. My guess was the person used a color picker and just eyeballed it.
If the site were mine, I would have normalized them all, but there was such a range -- some shades were more than 10-15% apart -- that I tried to strike a balance in my normalization. I wasn't sure whether the palette came from someone who just doesn't give a crap about color/CSS or from carefully considered color selection.
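To make the "balance" concrete (hypothetical colors, not the site's actual values): collapse only the shades that are visually indistinguishable into one variable, and leave anything clearly distinct alone in case it was deliberate.

```css
/* Before: near-duplicate reds scattered through the stylesheet, e.g.
   .error   { color: #d0342c; }
   .warning { border-color: #d13a31; }
   .badge   { background: #cf332b; }  */

/* After: one variable covers the indistinguishable cluster */
:root {
  --red-error: #d0342c;   /* replaces #d0342c, #d13a31, #cf332b */
  --red-accent: #b01e17;  /* kept separate: noticeably darker, possibly intentional */
}

body.dark-mode {
  --red-error: #ff6b61;   /* lightened for contrast on dark backgrounds */
  --red-accent: #e0554c;
}
```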
My PR wasn't accepted (though the devs had said in Discord that I could/should submit a PR for it). I don't mind that it wasn't accepted, but I just don't know why. I don't want to accidentally step on toes or violate dev culture norms.
I guess my question is: why would anyone continue to "consume" -- or create -- real CSAM? If fake and real are both illegal, but one involves minimal risk and zero children, the only reason to create real CSAM is for the cruelty -- and while I'm sure there's a market for that, it's got to be a much smaller market. My guess is the vast majority of "consumers" of this content would opt for the fake stuff if it took some of the risk off the table.
I can't imagine a world where we didn't ban AI-generated CSAM; like, imagine being a politician and explaining that policy to your constituents. It's just not happening. And I get the core point of that kind of legislation -- the whole concept of CSAM needs the aura of prosecution to keep it from being normalized, and normalization would embolden worse crimes. But imagine if AI made real CSAM too much trouble to produce.
AI-generated CSAM could put real CSAM out of business. If possession of fake CSAM carried a lesser penalty than the real thing, the real stuff would be much harder to share, much less monetize. I don't think we have the data to confirm this, but my guess is that most pedophiles aren't sociopaths and recognize their desires are wrong, and if you gave them a way to deal with it that didn't actually hurt chicken, that would be huge. And you could seriously throw the book at anyone still going after the real thing when AI content exists.
Obviously that was supposed to be children not chicken but my phone preferred chicken and I'm leaving it.