Would definitely recommend furry.engineer - but if that's not your jam, pawb.fun is run by the same team. Reg is open for both with approval required, but the approval is just reading the rules and telling them a little about yourself. If they've approved you for Pawb Social, link that in the box, and they'll probably approve you for the Mastodon instance too.
Funny how this came out just as there's been a renewed push for backdoors in cryptography. They all seem to forget that all it takes for an adversary to get in is finding the backdoor... Sadly this kind of thing is pretty common in the radio sphere - the "basic" encryption (better called a 'privacy code') on DMR radios is often just one of 16 or 256 preset codes, and the next step up is 40-bit ARCFOUR (RC4). For AES you have to pay through the nose for software licences, and most users won't or can't bear the cost. The only good news is that the higher-tier algorithms like TEA2/TEA3 weren't vulnerable - and they're the ones more likely in use by emergency services.
Next up, X rebrands to Wayland.
Well, I guess it's X-rated now. Can't tweet at work any more, it's the law.
I've upvoted this but I'd just like to chuck in that I think Raven makes a lot of sense here. I've had posts deleted or hidden by automod bots on other sites, and even when they're restored they never get as much traction as the posts that were left alone. So there's a lasting effect even if the action can be "reversed" - and I put that in quotes because it's not like you can turn the clock back.
Hard agree on not using shadowbans, keeping users informed, and easy escalation to a human.
My ideal would be some kind of system which looks at the public feed for keywords and raises anything of concern to an admin, and maybe the admin's response goes back in as 'training'. Something more like SpamAssassin's Bayesian ham/spam classifier perhaps.
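Not that I've built one, but here's a toy sketch of the kind of thing I mean - a naive Bayes classifier over word counts that flags posts above some probability threshold for admin review, with the admin's verdict fed back in as training, SpamAssassin-style. The class names, labels, and threshold are all made up for illustration:

```python
import math
from collections import defaultdict

class BayesFlagger:
    """Tiny naive-Bayes flagger: raises posts whose 'concern' probability
    exceeds a threshold so an admin can review them. The admin's verdict
    goes back in via train(), like SpamAssassin's ham/spam learning."""

    def __init__(self, threshold=0.7):  # threshold is an arbitrary choice
        self.threshold = threshold
        self.counts = {"ok": defaultdict(int), "concern": defaultdict(int)}
        self.totals = {"ok": 0, "concern": 0}

    def train(self, text, label):
        """Record an admin-labelled post ('ok' or 'concern')."""
        for word in text.lower().split():
            self.counts[label][word] += 1
        self.totals[label] += 1

    def _log_prob(self, text, label):
        # Laplace-smoothed log-likelihood plus a smoothed class prior
        vocab = set(self.counts["ok"]) | set(self.counts["concern"])
        n = sum(self.counts[label].values())
        lp = math.log((self.totals[label] + 1) / (sum(self.totals.values()) + 2))
        for word in text.lower().split():
            lp += math.log((self.counts[label][word] + 1) / (n + len(vocab) + 1))
        return lp

    def concern_probability(self, text):
        lc = self._log_prob(text, "concern")
        lo = self._log_prob(text, "ok")
        return 1 / (1 + math.exp(lo - lc))

    def review(self, text):
        """True if the post should be raised to an admin."""
        return self.concern_probability(text) >= self.threshold


flagger = BayesFlagger()
flagger.train("buy cheap pills now", "concern")
flagger.train("lovely weather today", "ok")
print(flagger.review("cheap pills"))      # flagged for a human to look at
print(flagger.review("lovely weather"))   # left alone
```

The point being that the model only ever *raises* things - the decision to act stays with a human, and their response is what trains it.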
I don't think automated action without a human in the loop is the right way to go - and I have grave concerns about biases creeping into the model over time. The poster child here is Amazon's ML résumé-screening system, which taught itself discriminatory biases (against women, in that case) from historical hiring data. There's been a lot of good progress improving PoC/BIPOC/BAME/non-white acceptance and it'd be a shame if something like this accidentally undid some of that.