this post was submitted on 27 Aug 2023
282 points (97.6% liked)
Rust
Mod actions could be cryptographically signed using private keys, and the public keys of the mods would be part of each community's metadata, updated in a way that establishes a chain of custody so only existing mods can add new mods. Each instance would independently verify that mod actions come from a legitimate mod. (I think I basically just described an implementation of NFTs representing mod privileges, BTW.)
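Roughly, I'm imagining something like this (just a sketch, not a spec: it assumes the ed25519-dalek crate with its rand_core feature, and the action payload is a made-up placeholder):

```rust
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;

fn main() {
    // Each mod holds a private signing key; the matching public key lives in
    // the community's metadata so every instance can verify independently.
    let mod_key: SigningKey = SigningKey::generate(&mut OsRng);
    let published_key: VerifyingKey = mod_key.verifying_key();

    // A mod action is just a serialized record; this payload is a placeholder.
    let action = b"remove_post:community=rust,post_id=12345";
    let signature: Signature = mod_key.sign(action);

    // Any instance that has the community's metadata can check the signature
    // without trusting the instance the action arrived from.
    assert!(published_key.verify(action, &signature).is_ok());
    println!("mod action verified against the published mod key");
}
```

The point is just that verification only needs the public keys already published in the community's metadata, so every instance can do it on its own.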
I'd prefer it to require multiple mods, ideally a majority. That way, one mod can't "go rogue" and add a bunch of alts or whatever, and by the same token a single compromised mod account can't do that kind of damage either.
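Something like this check, say (totally made-up types; it assumes the individual signatures were already verified upstream):

```rust
use std::collections::HashSet;

type ModKeyId = String; // stand-in for a public-key fingerprint

struct ModAction {
    payload: String,
    // IDs of mods whose signatures on `payload` already checked out
    verified_signers: HashSet<ModKeyId>,
}

/// Accept a sensitive action only if more than half of the current mods signed it.
fn has_majority(action: &ModAction, current_mods: &HashSet<ModKeyId>) -> bool {
    let valid = action.verified_signers.intersection(current_mods).count();
    valid * 2 > current_mods.len()
}

fn main() {
    let mods: HashSet<ModKeyId> =
        HashSet::from(["alice".to_string(), "bob".to_string(), "carol".to_string()]);
    let action = ModAction {
        payload: "add_mod:dave".to_string(),
        verified_signers: HashSet::from(["alice".to_string(), "bob".to_string()]),
    };
    assert!(has_majority(&action, &mods)); // 2 of 3 is a majority
    println!("accepted: {}", action.payload);
}
```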
I'm less concerned about mod actions like deleting posts, banning users, etc., since those should always be reversible: most decentralized systems use immutable data, so a mod action is merely data that instructs clients to ignore or prefer certain other data. However, I don't want a situation where the mods become powerless because one of their accounts got compromised.
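To sketch what I mean by mod actions being "just more data" (all names made up; the real thing would carry signatures, timestamps, and so on):

```rust
use std::collections::HashSet;

struct Post {
    id: u64,
    body: String,
}

// Mod actions are tombstone-style records layered over the immutable post log.
enum ModAction {
    RemovePost { post_id: u64 },
    RestorePost { post_id: u64 },
}

/// Replay the mod actions to compute which posts clients should currently hide.
fn hidden_posts(actions: &[ModAction]) -> HashSet<u64> {
    let mut hidden = HashSet::new();
    for action in actions {
        match action {
            ModAction::RemovePost { post_id } => { hidden.insert(*post_id); }
            ModAction::RestorePost { post_id } => { hidden.remove(post_id); }
        }
    }
    hidden
}

fn main() {
    let posts = vec![
        Post { id: 1, body: "fine post".into() },
        Post { id: 2, body: "spam".into() },
    ];
    let actions = vec![ModAction::RemovePost { post_id: 2 }];

    let hidden = hidden_posts(&actions);
    for post in posts.iter().filter(|p| !hidden.contains(&p.id)) {
        println!("{}: {}", post.id, post.body); // only post 1 is shown
    }
}
```

Since nothing is actually destroyed, a removal can always be rolled back by appending a restore action later.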
I'm not concerned here about the rules for how mods are added or removed, just the technical implementation. It's easy enough to require a majority for decisions like that.
There has to be a way to establish with certainty that a user taking mod actions is actually a mod. The fact that you can revert changes in a git repo doesn't make it ok for people to commit without permission, and mod actions are the same. Just allowing unauthorized users to perform mod actions would allow them to fuck up communities faster than the real mods could undo the damage.
Yeah, I agree. All mod actions should be signed with the mod's cryptographic key, and moderator keys should probably be separate from (and stronger than) regular user keys. Mod actions should be rare enough that verifying them isn't a burden.
One thing I'm less sure about: if the data itself is decentralized as well, users could potentially be liable for hosting illegal content. I suppose there could be a system where moderator-removed content gets deleted from all regular users' devices, so maybe that's good enough. But then that makes auditing mod actions difficult, since the original data could be much harder to get.
A lot of these problems aren't really technical, but rather UX when designing for a fully decentralized system.
That sounds like a blockchain that uses signature verification against a previously established and acknowledged set of keys as its consensus mechanism. Pretty reasonable, as far as use cases go.
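To spell that out a bit (purely illustrative; real signatures are elided and the sha2 crate is only used here for the hash linking):

```rust
use sha2::{Digest, Sha256};
use std::collections::BTreeSet;

// One entry in the chain of custody over the community's mod key set.
struct ModSetEntry {
    prev_hash: Vec<u8>,     // hash of the previous entry (empty for genesis)
    mods: BTreeSet<String>, // stand-ins for mod public keys
    authorized_by: String,  // which existing mod signed off on this update
}

fn hash_entry(e: &ModSetEntry) -> Vec<u8> {
    let mut h = Sha256::new();
    h.update(&e.prev_hash);
    h.update(e.authorized_by.as_bytes());
    for m in &e.mods {
        h.update(m.as_bytes());
    }
    h.finalize().to_vec()
}

/// Every update must link to the previous entry's hash and be authorized by
/// someone who was a mod in that previous entry.
fn verify_chain(chain: &[ModSetEntry]) -> bool {
    chain.windows(2).all(|w| {
        let (prev, next) = (&w[0], &w[1]);
        next.prev_hash == hash_entry(prev) && prev.mods.contains(&next.authorized_by)
    })
}

fn main() {
    let genesis = ModSetEntry {
        prev_hash: Vec::new(),
        mods: BTreeSet::from(["alice".to_string()]),
        authorized_by: "alice".to_string(),
    };
    let update = ModSetEntry {
        prev_hash: hash_entry(&genesis),
        mods: BTreeSet::from(["alice".to_string(), "bob".to_string()]),
        authorized_by: "alice".to_string(),
    };
    assert!(verify_chain(&[genesis, update]));
}
```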
However, it doesn't solve the issue of disagreements and community splitting. If one part of the mod team decides to add another mod, but the rest doesn't, what's to prevent that part from splitting off and continuing their own version of the moderation chain? How is abuse of power handled? And in case of a split, how are community members informed?
Don't get me wrong, I'm not saying it's a poor idea, I'm just saying that it won't solve the issues of community splits, and I'm not sure anything ever can.
I wasn't trying to solve that particular problem, on the assumption that it has already been solved and the same solution can be adapted to the implementation I proposed. Someone else who replied to me suggested something like requiring majority approval to add or remove a mod.
Another possibility is for the creator of a community to be a super mod, who can add or remove regular mods, or transfer their super mod status to someone else. That scheme could easily be generalized to allow multiple super mods, or to include a whole hierarchy of mods for large communities.
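As a rough sketch of that (the policy here is just one possible choice, not a spec):

```rust
use std::collections::HashSet;

struct Community {
    super_mod: String,     // stand-in for the super mod's public key
    mods: HashSet<String>, // regular mods
}

impl Community {
    fn new(creator: &str) -> Self {
        Community { super_mod: creator.to_string(), mods: HashSet::new() }
    }

    /// Only the super mod may add (or, analogously, remove) regular mods.
    fn add_mod(&mut self, actor: &str, new_mod: &str) -> Result<(), &'static str> {
        if actor != self.super_mod {
            return Err("only the super mod can add mods");
        }
        self.mods.insert(new_mod.to_string());
        Ok(())
    }

    /// The super mod can hand their status off to someone else.
    fn transfer_super_mod(&mut self, actor: &str, successor: &str) -> Result<(), &'static str> {
        if actor != self.super_mod {
            return Err("only the current super mod can transfer it");
        }
        self.super_mod = successor.to_string();
        Ok(())
    }
}

fn main() {
    let mut c = Community::new("alice");
    c.add_mod("alice", "bob").unwrap();
    assert!(c.add_mod("bob", "mallory").is_err()); // regular mods can't add mods
    c.transfer_super_mod("alice", "bob").unwrap();
}
```

Generalizing to multiple super mods or a deeper hierarchy would just mean replacing that single field with another set (or a tree) of keys.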