this post was submitted on 19 Jan 2025
Anti-social media
Dedicated to the antisocial behavior of social media corporations, censorship, algorithmic bias, filter bubbles, privacy, and the psychological effects of mainstream social media.
Any rules-based system, whether human-run or algorithmic, can be gamed, because no set of rules can be written without flaws that people will exploit.
Algorithmic systems, however, lack any actual comprehension and are thus far easier to abuse. As an example: back in 2015, when I still had an account at Farcebook, I got a post removed by an automated reviewer and a note placed on my account for "threats of violence". The "threat"? Someone had asked me how to do something and I replied, "I could tell you, but then I'd have to kill you".
No human reading that would see that as a genuine threat.
Unless English was their second language. FB outsources a lot of its moderation to Africa, the Philippines, etc.
I'm pretty sure this was moderated by a machine. This was right around the time they started bragging about their automated moderation. It's possible it was a language-barrier issue, but it didn't feel like one.