this post was submitted on 16 Jun 2023
10 points (100.0% liked)


Anonymity and privacy seem to be at odds with a social platform's ability to moderate content and control spam.

If users have sufficient privacy and anonymity, then they can simply use another identity to come back, or use multiple identities.

Are there ways around this? It seems that any method of ensuring that a banned user stays off the platform would require the platform to know information about the user and their identity.

[–] [email protected] 5 points 1 year ago (2 children)

Anonymity/privacy are not inherently universal. Your true identity can be known to some and unknown to others, in this case masked via an alias.

Thus, I propose a hypothetical arrangement: separating Content Instances and Identity Instances.

Content Instances host the main communities and discussions. There must still be "many" (hundreds, maybe even thousands) of these so that none can wield power over the others.

Within an Identity Instance you are known, or at least verified and vetted. Outside the Identity Instance, a user is known only by their alias from that instance. There should be many more of these than Content Instances, each with a maximum size of around 100 users (see Dunbar's number).

Further, federation should not be open by default. New Identity Instances are quarantined initially: they can subscribe to communities on Content Instances, but their posts and comments are not federated back to the Content Instances.

The goal here is to employ a heavily distributed Divide & Conquer approach to moderation and community management. The users of an Identity Instance are responsible to one another, since any one member's actions may affect the entire instance's users (e.g. through defederation). Even better, if you know each other, you should feel some real social pressure that your actions online will impact your social life IRL.
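
To make this a bit more concrete, here's a rough, purely hypothetical sketch of how the two roles and the quarantine rule could be modeled. None of this is real fediverse software; the class names, methods, and member cap are all made up for illustration.

```python
# Purely illustrative sketch of the proposed Content/Identity Instance split.
# All names and rules here are hypothetical, not any real fediverse API.
from dataclasses import dataclass, field


@dataclass
class IdentityInstance:
    """A small, vetted group; real identities never leave the instance."""
    domain: str
    members: set = field(default_factory=set)
    max_members: int = 100  # roughly Dunbar-scale, per the proposal above

    def add_member(self, real_identity: str) -> bool:
        if len(self.members) >= self.max_members:
            return False  # stay small enough that members feel accountable
        self.members.add(real_identity)
        return True

    def alias_for(self, real_identity: str) -> str:
        # Toy alias scheme: outsiders only ever see alias@domain,
        # never the member's real identity.
        return f"user{hash(real_identity) % 10_000}@{self.domain}"


@dataclass
class ContentInstance:
    """Hosts communities; decides which Identity Instances it trusts."""
    domain: str
    trusted: set = field(default_factory=set)      # posts federate both ways
    quarantined: set = field(default_factory=set)  # read/subscribe only

    def federate(self, identity_domain: str) -> None:
        # Federation is closed by default: new Identity Instances start quarantined.
        self.quarantined.add(identity_domain)

    def accepts_posts_from(self, identity_domain: str) -> bool:
        # Posts and comments only flow back once the Identity Instance is trusted.
        return identity_domain in self.trusted

    def promote(self, identity_domain: str) -> None:
        self.quarantined.discard(identity_domain)
        self.trusted.add(identity_domain)

    def defederate(self, identity_domain: str) -> None:
        # One member misbehaving can cost the whole Identity Instance its access.
        self.trusted.discard(identity_domain)
        self.quarantined.discard(identity_domain)
```

In that sketch, a new Identity Instance starts in quarantined and only reaches trusted via promote(), and a single defederate() call cuts off every member at once, which is exactly where the social pressure is meant to come from.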

But to be honest and pragmatic, I don't think this will form organically, nor do I think it could be enforced. Even in practice it probably wouldn't work. But perhaps it's a nice dream.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I think something like this could happen. Something I kicked around in my local city community was the possibility of our local non-profit ISP (National Capital Freenet in Ottawa, Canada) hosting an instance. In practice, it would likely be an identity instance more than anything else. It would probably require membership, so a) there's a donation required, which is fine (NCF is a good group), and b) they do need your actual identity, because part of their membership involves agreeing to certain conduct.

NCF is something of a relic from an earlier internet age in many respects, but this kind of thing still exists elsewhere. Maybe this is a role other such organizations can take on, both increasing their relevance and adding another layer of accountability on users re: not being shitheads.

Idk, something to think about.

Edit to acknowledge I'm not a member of NCF right now and have no involvement with them. I just think they're neat and this could be a neat thing for them to do for my city's residents.

[–] [email protected] 1 points 1 year ago (1 children)

> New Identity Instances are quarantined initially

What would the process be for an identity instance to become trusted? Like would you need to get approval from multiple other identity instances or something?

[–] [email protected] 2 points 1 year ago

I can't say what it should be. I'd argue that each Content Instance should have its own path to becoming trusted. An example could be demonstrating quality post/comment content during the quarantine period.
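
To illustrate that kind of path (reusing the purely hypothetical model sketched upthread), a Content Instance's rule for lifting quarantine could be as simple as a quality threshold on what a quarantined instance's users contributed during the trial period. The signal names and numbers below are invented for illustration:

```python
# Hypothetical promotion rule: one possible "path to becoming trusted".
def earned_trust(upvotes: int, downvotes: int, mod_removals: int,
                 min_votes: int = 50, min_ratio: float = 0.9) -> bool:
    """Lift quarantine once contributions seen during the quarantine period
    clear a quality bar and nothing had to be removed by moderators."""
    total = upvotes + downvotes
    if total < min_votes or mod_removals > 0:
        return False
    return upvotes / total >= min_ratio
```

Each Content Instance could tune min_votes and min_ratio however it likes, which keeps the "each instance defines its own path" property.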