this post was submitted on 27 Nov 2024
27 points (84.6% liked)

Lemmy.ca Support / Questions

Support / Questions specific to lemmy.ca.

For support / questions related to the lemmy software itself, go to [email protected]

founded 4 years ago

[email protected]

Seems to exist purely to post misinformation, with repeated claims that Russia is innocent and the US caused the Ukraine situation, that they're stopping Ukraine from agreeing to Russia's super amazing peace deals, etc.

This is the sort of garbage one would expect to find on ML or Hex. Is CA intended to be the same kind of low-quality instance?

[–] [email protected] 5 points 1 day ago (1 children)

I do not understand people here defending misinformation/intolerance as if it had merit as discussion. The dichotomy is naivety or complicity.

People spreading misinformation and intolerance are not here for healthy arguments; you just need to check their history to see their dishonesty and ill temper.

Meanwhile, accounts like the one OP highlighted just create trouble for the mods of other instances to solve.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

The problem is, who is the arbiter of that? There are essentially three moderation styles here:

Laissez-faire: let people do whatever they like as long as it doesn't actively hurt anyone. People can govern themselves, and serious incidents are expected to be reported and dealt with. Some jerks will tiptoe around the rules but will eventually get caught. Lemm.ee, lemmy.ca and some others follow this.

Casual enforcement of admin philosophy: most topics outside of politically contentious ones are not strictly monitored. Mods/admins will root out communities, comments and posts that actively go against the narrative, particularly in threads on political topics like Ukraine, Palestine, etc. Lemmy.world and lemmy.ml follow this.

Strict enforcement of admin philosophy: do not tolerate any potentially harmful statements (relative to that instance's narrative or vibe). Any violation is removed, and repeated violations get you banned. This philosophy can be reasonable, as on Beehaw.org, where I think it works very well and makes for a welcoming safe space, because there is no tolerance for bigotry and jerks. It can also be unreasonable, as on lemmygrad.ml, where dissent from the pro-Russian narrative is swiftly dealt with.

If they follow the latter two styles of moderation, admins of other instances should ban users who go against their philosophy from reaching their servers. That's how it is with federation; sometimes different instances have conflicting philosophies (the vegan one, for example). It's up to each admin to decide whether a foreign Fediverse user belongs in their kingdom. The moderation style lemmy.ca has lets it be a good neutral place to discuss various drama and lore from other servers.
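
To make that concrete, here is a minimal, hypothetical sketch of the accept/reject decision an instance makes about incoming federated content. This is not Lemmy's actual code; the `Instance` type and its fields are invented for illustration, but the two blocklists mirror the real levers admins have (defederating a whole server, or banning an individual foreign user locally):

```rust
use std::collections::HashSet;

/// Hypothetical, simplified view of one instance's federation moderation state.
struct Instance {
    blocked_instances: HashSet<String>,   // servers this instance has defederated from
    banned_remote_users: HashSet<String>, // foreign users banned locally, e.g. "user@other.example"
}

impl Instance {
    /// Should incoming federated content from `actor` ("user@server") be accepted?
    fn accepts(&self, actor: &str) -> bool {
        let home_server = actor.split('@').nth(1).unwrap_or("");
        if self.blocked_instances.contains(home_server) {
            return false; // the whole server is defederated
        }
        if self.banned_remote_users.contains(actor) {
            return false; // this specific foreign user is banned here
        }
        // Under laissez-faire, everything else federates in and is handled
        // reactively via reports; stricter styles would add content checks here.
        true
    }
}

fn main() {
    let mut local = Instance {
        blocked_instances: HashSet::new(),
        banned_remote_users: HashSet::new(),
    };
    local.blocked_instances.insert("blocked.example".to_string());
    local.banned_remote_users.insert("troll@other.example".to_string());

    assert!(local.accepts("alice@other.example"));     // ordinary foreign user: accepted
    assert!(!local.accepts("troll@other.example"));    // user-level ban
    assert!(!local.accepts("anyone@blocked.example")); // instance-level defederation
}
```

The point of the sketch is that the decision is always local: each admin applies their own lists, so the same user can be fine on one server and banned on another.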

[–] [email protected] 4 points 1 day ago (1 children)

The problem is, who is the arbiter of that?

Intolerance is well defined in many languages, and, so people do not confuse this with me talking about milk intolerance: hate crime is defined in many legal codes across the globe, including Canada's. There is no need for a philosophical discussion of what "intolerance" is.

There is no need for a linguistics expert to realize that someone's discourse is ill-intentioned when the semantics of "the victim deserves to suffer" are the same as a call to action.

For countries that depend on common law: the account in question was already punished on other instances, creating precedent.

The modus operandi of these kinds of accounts is also well known and documented. And popularity contests should not be the tool that defines what is right on an online platform where there is no real accountability. How many upvotes do you think a single worker in a troll farm can generate in a couple of minutes?

We should not depend on admins' moods (philosophies, as you suggest) for results, but I agree that we should help when and where we can; their volunteer work is invaluable for the health of the instance.

I think the discussions worth having in these kinds of posts are about methods, and about checks and balances to prevent bad decisions from people in power and to ensure that people are treated fairly.

There are many possible methods, and many examples out there.

  • Would Twitter-like community notes solve some of these problems, or create more? Would the Lemmy repo accept such a PR? (See the sketch after this list.)
  • The Twitter vs. Brazil problem: is it worth locking accounts while an investigation is pending? One of them was instigating machete attacks on schools/nurseries. When would such a lock be OK, and when not?
  • How long should people have to complain/report before something (an investigation, a lock, or a conclusion) happens? The account we both mentioned (not in this thread, but in this post) went on for 2 months before being banned; they did not leave on their own. ...
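
On the community-notes idea: the core mechanism behind Twitter/X's Community Notes is "bridging". A note is only shown when raters who usually disagree with each other both find it helpful, which is exactly what blunts lockstep troll-farm voting. Below is a minimal, hypothetical sketch of that idea; it is heavily simplified (the real system infers rater viewpoints via matrix factorization, which is assumed as given here), and nothing like it exists in Lemmy today:

```rust
use std::collections::HashMap;

// Viewpoint clusters, assumed to be inferred elsewhere from rating history.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Cluster {
    A,
    B,
}

struct Rating {
    rater_cluster: Cluster,
    helpful: bool,
}

/// Show a note only if raters from *each* cluster independently rate it
/// helpful more often than not, with a minimum number of raters per cluster.
fn note_is_shown(ratings: &[Rating], min_per_cluster: usize) -> bool {
    let mut helpful: HashMap<Cluster, usize> = HashMap::new();
    let mut total: HashMap<Cluster, usize> = HashMap::new();
    for r in ratings {
        *total.entry(r.rater_cluster).or_default() += 1;
        if r.helpful {
            *helpful.entry(r.rater_cluster).or_default() += 1;
        }
    }
    [Cluster::A, Cluster::B].iter().all(|c| {
        let t = *total.get(c).unwrap_or(&0);
        let h = *helpful.get(c).unwrap_or(&0);
        t >= min_per_cluster && 2 * h > t // strict majority in this cluster
    })
}

fn main() {
    // 100 lockstep "helpful" votes from a single cluster are not enough...
    let lockstep: Vec<Rating> = (0..100)
        .map(|_| Rating { rater_cluster: Cluster::A, helpful: true })
        .collect();
    assert!(!note_is_shown(&lockstep, 5));

    // ...but a handful of agreeing raters from *both* clusters is.
    let bridging: Vec<Rating> = (0..5)
        .flat_map(|_| {
            [
                Rating { rater_cluster: Cluster::A, helpful: true },
                Rating { rater_cluster: Cluster::B, helpful: true },
            ]
        })
        .collect();
    assert!(note_is_shown(&bridging, 5));
}
```

Note how 100 lockstep votes from one cluster count for less than 5 agreeing votes from each of two opposed clusters, the opposite of a plain upvote count.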
[–] [email protected] -1 points 21 hours ago (1 children)

Sure, we should not tolerate intolerance; "No Bigotry" is rule #1 here, so if you see that then please report it. Misinformation, though? That's the main thing OP is talking about, and the few examples they gave are propaganda but not intolerance.

[–] [email protected] 1 points 5 hours ago* (last edited 5 hours ago)

I feel like you are arguing with me about OP's points. I am not sure if it is a Lemmy error, but this is my comment that you first replied to:

I do not understand people here defending misinformation/intolerance as if it had merit as discussion. The dichotomy is naivety or complicity.

People spreading misinformation and intolerance are not here for healthy arguments; you just need to check their history to see their dishonesty and ill temper.

Meanwhile, accounts like the one OP highlighted just create trouble for the mods of other instances to solve.

I don't feel like you are here defending that person's acts or being complicit, nor trying to defend misinformation/intolerance with malicious intent or being disingenuous with semantics. So, for the sake of healthy discussion, I continue.

You don't need to go far into that person's history to see examples of their dishonesty and ill temper, if that is the hill you chose to defend. You might need some special privilege to see their removed content on other instances.

From your message (sorry if I mistook your words the first time), I imagine now that you were not saying intolerance but misinformation, as in:

who is the arbiter of "misinformation"

In that case,

Canada might be a little behind on misinformation laws; it has always been behind when the subject involves technology. But it defines the types very well (MDM, they call them: misinformation, disinformation, and malinformation), qualifies the damage they cause, and runs campaigns to raise awareness and minimize their effects.
https://www.cyber.gc.ca/en/guidance/how-identify-misinformation-disinformation-and-malinformation-itsap00300
https://www.canada.ca/en/campaign/online-disinformation.html

"Misinformation" is serious, causes harm, and should not be used interchangeably with "agreement".

OP complaining about misinformation does not make it any less severe than intolerance when it is used for the same goal: to cause harm.

Even before technology, we had laws and procedures for harmful discourse, be it intolerance or misinformation; technology just makes things different.

That is why I was suggesting a discussion of well-defined and transparent methods to deal with them, methods that should be constantly reviewed and improved.

Edit: bold line