this post was submitted on 28 Aug 2023
1746 points (97.9% liked)
Lemmy.World Announcements
This is one of the few situations Reddit handles better, by being a centralized entity with a dedicated workforce filtering out this content. It's a shame it has to be this way, but I understand why it has to be done.
So, Mastodon has this same problem?
Pretty much. I recently had my Mastodon feed spammed with racist, homophobic, and gore-filled posts just because they were posted with a list of unrelated hashtags. You could keep blocking the poster or the instance, but they would pop back up from another instance or with another account. It eventually stopped, but I'm sure it'll happen again. You can apparently filter out certain offensive terms, but I think you have to enter the terms manually.
Twitter had that problem in the beginning, people forget that. I've seen some shitty stuff on Reddit as well and reported it, it's a problem everywhere.
There have been issues on the larger instances with slow or unresponsive moderation, leading to occasional bursts of bot activity.
Pretty sure it does, actually
I don't use it, so I can't answer that.
Yep. It's why I curate my feed very carefully and am very quick with the "block" button.
Someone has never heard of /r/jailbait
That's because Reddit chose to leave it up until the media reported on it, though.
That said, it's really hard to protect against a dedicated, targeted attack. E.g., captchas can make it harder to create accounts, but think about how fast you could create accounts manually if you wanted to. You don't need thousands of accounts to cause mayhem; even a few dozen can cause serious problems. I think a lot of the internet depends on the general goodwill of most users, plus the threat of legal action if attackers get caught (but that basically requires depending on police, and we know police aren't dependable).
One thing Reddit had that I'm not sure Lemmy does (I've never heard it mentioned) is the option to require all posts and comments to be approved by a mod before they're visible. That might even have just been an automod thing combined with how Reddit let mods hide and unhide comments. But even if instances were to use that, it's not fair for volunteer mods to have to deal with it. It's also sooo much work. You can't just approve posts, because attackers will use comments. And you have to approve edits, or attackers will post something innocent and then edit it to be malicious. And even without an edit, they can link to an image and then change the file itself to a different one (checksums could prevent that, but it's more work and it's a constant battle against malice).
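To illustrate the checksum idea: a minimal sketch of how a hypothetical mod tool could detect the swapped-file trick. The function names here are made up for illustration; the approach is just to record a hash of the linked file at approval time and re-check it later. A mismatch means the file behind the link changed after approval.

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

# At approval time: fetch the linked file once and store its digest
# alongside the approved post.
approved_bytes = b"original image contents"   # stand-in for the fetched file
approved_digest = digest(approved_bytes)

# Later: re-fetch the file and compare. If the attacker swapped the file
# behind the same URL, the digest no longer matches.
def still_matches(fetched: bytes, expected: str) -> bool:
    return digest(fetched) == expected
```

Of course, this only flags the swap; something still has to re-fetch and re-check files periodically, which is exactly the extra work the comment above is talking about.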
I mean, that's Reddit prehistory at this point.