this post was submitted on 17 Jun 2024
3 points (61.5% liked)

Meta (slrpnk.net)

Here we can discuss anything about this Lemmy instance/server itself.

There is clearly a problem: most of the politics and news communities on Lemmy are unpleasant places to take part in discussion. People yell at each other. The tone of disagreements is to state your opinion and insult the other person if they don't share it, or a bunch of people leave quick one-off statements like "well I think it's this way" or "no you're wrong" that add nothing. I've heard more than one person say that they simply don't participate in politics or news communities because of it.

Well, behold:

I have made some technology that attempts a much heavier-handed approach to moderation: it detects assholes, and people who aren't really contributing to the conversation, based on their behavior in other communities, and bans them pre-emptively en masse. In its current form, it bans about half of hexbear and lemmygrad, and almost all of the users on lemmy.world who post a nonstop stream of obnoxiously partisan content. You know the ones.

In practice it's basically a whitelist for posting that's easy to get on: Just don't be a dick.

I'd like to try the experiment of having a ~~political~~ community with this software running the banlist and see how it works in practice, and maybe expand it to a news community run the same way. There's nothing partisan about the filtering. You can have whatever opinion you want; you just can't be unproductive or an asshole about the way you say it. And the bans aren't permanent; they're transient, based on the user's recent behavior.
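To give a sense of the shape without giving anything away: the gate looks roughly like the sketch below, where the scoring function is the part I'm keeping to myself. Every name, window, and threshold here is made up for illustration; none of it is the real implementation.

```python
# Hypothetical sketch of a transient whitelist gate -- not the actual bot.
from datetime import datetime, timedelta

RECENT_WINDOW = timedelta(days=30)   # assumed: only recent behavior counts
ALLOW_THRESHOLD = 0.0                # assumed cutoff score

def score_recent_history(user_id: int, since: datetime) -> float:
    """Placeholder for the undisclosed classifier: scores the
    productivity/civility of a user's comments made since `since`."""
    raise NotImplementedError

def may_post(user_id: int) -> bool:
    """A 'ban' is just a score below threshold right now; it lapses
    automatically as old behavior ages out of the scoring window."""
    since = datetime.utcnow() - RECENT_WINDOW
    return score_recent_history(user_id, since) >= ALLOW_THRESHOLD
```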

(Edit: I think making a general news community might fit better with slrpnk than politics. In thinking about it and talking with people, I think electoral politics just doesn't belong in the slrpnk feed, but general news, with the political bickering that usually comes along with it muted, could be a positive for the instance at the same time as I get to test out my little software project.)

I don't want to explain in too much detail how the tech works, because I think some segment of assholes will want to evade it so they can come into the community and be assholes again. But I'd also want to set up a meta community where anyone who did get banned can ask questions or make complaints about it. (As long as that offering doesn't turn into too much of a shit show, that is.)

Is slrpnk a place where a little experiment like this could find a good home? What does everyone think of the idea?

top 15 comments
[–] [email protected] 10 points 4 months ago (1 children)

We are defederated from Lemmygrad and Hexbear so at least that part of the Lemmyverse wouldn't be able to participate if the community was hosted on SLRPNK.

[–] [email protected] 2 points 4 months ago (1 children)

I know, I was just trying to give a frame of reference for what the level of ban-worthiness would be.

So you're okay if I try this experiment? Looking now at how it might play out, I admit I'm having second thoughts about whether it's even a good fit for this instance. Maybe something like "pleasant news" would be better, where people can post news stories, even about political or geopolitical topics, but the presence of the actors who like to turn the comments into a war zone is reduced to a much lower level. Tell me what you think, though; I also want to think about it a little bit more.

[–] [email protected] 4 points 4 months ago* (last edited 4 months ago) (1 children)

I would like to discuss this with the other admins first. To be honest, I am a bit sceptical: I don't think what you want to do can really be done. Even if you think your banning criteria are unbiased, it is basically impossible to make them so, and as a result you will likely get an endless number of "polite" concern trolls testing how far they can go.

[–] [email protected] 3 points 4 months ago

Yes. That probing on the part of bad actors is part of why I don't want to explain anything about how it works, even though that raises massive transparency questions. I'm happy to point out a message to any particular person who has a question and say "Here is the kind of thing you did that you can't do anymore if you want to post here," but I definitely don't want to draw a little roadmap for how to trick the bot.

Mostly, the process is for the 95% of people the bot is fine with to just talk as they want to, and for anyone in the other 5% to have an avenue to ask reasonable questions; then we run the experiment and see what happens.

And yes, I'll certainly abide by whatever your decision is about whether this is the place to try it out. Making it about news in general (bringing that to slrpnk without the bickering that comes with it whenever anything political comes in) sounds like it might be a real positive for the instance. Making it about politics (as I did in my original pitch), now that I think about it, sounds a little bit wrong. But let me know what you and everyone thinks.

[–] [email protected] 7 points 4 months ago* (last edited 4 months ago) (2 children)

I have concerns about your vision of an ideal community, and I'm cynical about how far technical means can go in achieving that vision, but those concerns are overwhelmed by my support for experimentation. I agree with the prevailing opinion that moderation on Lemmy is hamstrung by a lack of adequate tools. Your project, even if it fails to achieve your vision, could serve as a stepping stone to some future success.

My primary concern is that you may be filtering people into whitelists and blacklists by feeding their comment history with a prompt into a Large Language Model like ChatGPT. If that's the case, it is a deal-breaker. You cannot submit content via an LLM API and also avoid having that text absorbed by the model as training data. Since you would be submitting the comments of other people, this violates the principles of respect and consent.

Many people exited corporate social media for Lemmy to protest this hoovering of their data by 'AI' companies; while some have gone as far as to add an anti-AI clause as a comment footer, it should be assumed that every Lemmy commenter does not consent to their intellectual labor being exploited for the profit of tech capitalists unless they explicitly state otherwise. If SLRPNK endorsed a moderation tool that abused other Lemmy users in this way, we would quickly become a pariah instance.

When it comes to software, I'm a fan of transparency. I hope at some point you're willing to share your code, though I acknowledge your reasons for keeping it obscure. I would advise you to be open at least about the mechanism your filter uses while hiding your parameters if you can, so that you can alleviate any concerns that your code is feeding Lemmy comments to an LLM.

[–] [email protected] 2 points 4 months ago

Perfectly reasonable. It's not feeding any users' comments into a public LLM API like OpenAI's that might use them for training the model in the future. As a matter of fact, it's not communicating with any API or web service at all; it's entirely self-contained on the machine that runs it.

As far as transparency, I completely get it. I would hope that the offer to point out specific reasons to any user who wants to ask why they can't post will help alleviate that, but it won't make it go away completely, especially because, as I said, I expect it will get its decisions wrong some small percentage of the time. I just know there's an arms race between moderation tooling and the people trying to get around it, and I don't want to give the bad actors a leg up in that competition, even though there are very valid reasons for openness in terms of giving people cause to trust that the system is honest.

[–] [email protected] 2 points 4 months ago

Other things that have occurred to me in the meantime:

  1. I'm fine with explaining how it works to one of the slrpnk admins in confidence. We can get on Matrix, I can show the code and some explanation, and depending on how it goes I might even be fine giving access to the same introspection tools I use, to examine in detail going forward why it made some particular decision and whether it's on the right track. The point is not that I'm the only one who's allowed to understand it, just that I don't want it to become common knowledge.
  2. I'm not excited to be a "full time" moderator, for reasons of time investment and responsibility level. Just like with [email protected], I want to be able to create this community because I think it is important, not necessarily to "run it" so to speak. My preferred perfect trajectory in the long run is that it becomes a tool that people can use to automate moderation for their own communities, if it can prove useful, instead of just being used by me to run my own little empire. I just happen to think that this type of bad-actor-resistant political community would be a great thing on its own, as well as a good test of this automated approach to moderation of communities political and otherwise.
[–] [email protected] 5 points 4 months ago (1 children)

First and foremost, I like the spirit, and it sounds fun (I am probably banned tho).

I think banning people before they have the chance to show that they can follow community rules is not the way I would do it, but it's also not something I deeply care about.

> You can have whatever opinion you want

I also think that on slrpnk.net you are restricted by the server rules, so you could be the most pleasant racist there is and would still break the server rules and thus get banned.

[–] [email protected] 2 points 4 months ago

You are not banned. The number of users from slrpnk that are banned is very small.

"Ban" is not quite the right word, since it's always flexible to current behavior. Maybe that is me trying to whitewash my self propaganda about how good an idea it is, but I pictured it more as this model: Whatever user in question has not met the bar of productive discussion to be let in, at the present time.

Maybe the bot should be called elitistbot.

And yes, if you are being racist or something, the bot is not needed; the mods and admins would give you an actual ban of the permanent kind. This is about detecting misbehavior at a more subtle and forgivable level than that, and reacting to it with a more temporary action.
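To make the two tiers concrete, the division of labor looks something like this. This is an illustration of the split, not the bot's actual logic, and the threshold is invented:

```python
# Illustrative only: the real triage logic is undisclosed.
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()
    SOFT_EXCLUDE = auto()   # transient; re-evaluated from recent behavior
    HARD_BAN = auto()       # permanent; rule violations, handled by mods/admins

def triage(violates_server_rules: bool, recent_score: float,
           threshold: float = 0.0) -> Action:
    if violates_server_rules:        # e.g. racism: a real, permanent ban
        return Action.HARD_BAN
    if recent_score < threshold:     # subtle, forgivable misbehavior
        return Action.SOFT_EXCLUDE   # lapses as the user's behavior improves
    return Action.ALLOW
```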

[–] [email protected] 5 points 4 months ago* (last edited 4 months ago) (1 children)

I have some questions:

  1. Will there be discussion before banning or only after banning?

  2. Will the ban system be reviewed regularly and by whom?

  3. Are you open to discussing the technology you claim to have built for this? In my opinion, denying transparency and relying on security by obscurity of a closed-source algorithm makes me question the algorithm and also reminds me of moderation on Meta and YouTube.

  4. Have you attempted this method of tone policing with manual moderation in any communities first? If so, how did it go?

  5. Is this post satire?

[–] [email protected] 3 points 4 months ago (1 children)

My vision is that if some person is unable to post and wants to ask why, I can give them some sort of answer (similar to what I said to Alice in another message here). The ban decision is never permanent, either; it's just based on the user's recent and overall posting history. If you want to be on the whitelist, there's specific guidance on what you "did wrong," so to speak, and if you decide the whole thing is some mod-overreach, one-viewpoint whitewash and you want no part of it, that's okay too. My hope is that it winds up being a pleasant place to discuss politics without being oppressive to anyone's freedom of speech or coming across as arbitrary, but that is why I want to try the experiment. Maybe the bot in practice turns out to be a capricious asshole and people decide that it (and I) are not worth dealing with.

The whole thing is more of a private-club model (we'll let you in, but you have to be nice), different from the current moderation model. The current implementation would want to exclude about 200 users altogether. Most are from lemmy.world or lemmy.ml (and 3 from slrpnk; I haven't investigated what those people did that it didn't like).

Specific answers to your questions:

  1. Only after. The scale means it would be unworkable to try to talk to every single person beforehand. The transparency of talking to people afterwards, if they wanted to post and found out they couldn't, is, I think, an important part.
  2. I think necessarily yes. I envision a community specifically for ban complaints and explanations for people who want them, although maybe that would develop into a big time sink and anger magnet. I would hope that after a while people will trust that it's not just me secretly making a list of people I don't like, or something, and then that type of thing will quiet down; but in the beginning it has to be that way for there to be any level of trust, if I'm trying to keep the algorithm a secret.
  3. It's a fair question. Explaining how the current model works exposes ways to game the system and post obnoxious content without the bot keeping you out. But I like the current model's performance at this difficult task, so I want to keep the way it works secret. I realize that's unsatisfying, of course. I'm not categorically opposed to publishing the full details, even making it open source, so people can have transparency; then, if people put in the effort to dodge around it, we deal with that as it comes.
  4. None.
  5. Not at all.

I thought about calling the bot "unfairbot", just to prime people for the idea that it's going to make unfair decisions sometimes. Part of the idea is that because it's not a person making personal decisions, it can be much more heavy-handed at tone policing than any human moderator could be without being a total raging oppressive jerk.

[–] [email protected] 2 points 4 months ago (1 children)

Can you please comment on:

  1. What programming and/or scripting languages are used in your tool
  2. Whether it uses an LLM
  3. How the algorithm functions from a high level
  4. What user data is stored on your machine
  5. If 4 applies, then any measures taken to secure that data and maintain privacy.

My intention is not to be pedantic, but to learn more about your proposed solution. I do appreciate your thoughtful answers in the comments here.

[–] [email protected] 1 points 4 months ago

I don't want to go into any detail on how it works. Your message did inspire me, though, to offer to explain and demonstrate it for one of the admins so there isn't this air of secrecy. The point is that I don't want the details to be public and make it easier to develop ways around it, not that I'm the only one who is allowed to know what it is doing.

I'll say that it draws all its data from the live database of a normal instance, so it's not fetching or storing any data beyond what every other Lemmy instance keeps anyway. It doesn't retain its own data aside from a little scratch pad of its judgements, and it doesn't feed comment data to any public APIs in a way that would hand users' comments over as training data to God knows who.
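In harmless outline, that architecture (read from the instance's own database, keep only a local scratch pad of judgements) is shaped roughly like the sketch below. The table and column names follow Lemmy's PostgreSQL schema and may differ between versions, and `judge()` stays a stub, since that's the part I'm keeping private:

```python
# Rough outline only: reads comments from the instance's live Postgres DB
# and caches judgements in a local SQLite scratch pad. Lemmy schema names
# (comment.creator_id, comment.content, comment.published) may vary by
# version; judge() is a stub for the part that stays private.
import sqlite3
import psycopg2

def fetch_recent_comments(pg_conn, person_id: int, limit: int = 200) -> list[str]:
    with pg_conn.cursor() as cur:
        cur.execute(
            """SELECT content FROM comment
               WHERE creator_id = %s
               ORDER BY published DESC
               LIMIT %s""",
            (person_id, limit),
        )
        return [row[0] for row in cur.fetchall()]

def judge(comments: list[str]) -> float:
    """Stub: the actual scoring stays private."""
    raise NotImplementedError

def open_scratch(path: str = "judgements.db") -> sqlite3.Connection:
    scratch = sqlite3.connect(path)
    scratch.execute(
        "CREATE TABLE IF NOT EXISTS judgement (person_id INTEGER PRIMARY KEY, score REAL)"
    )
    return scratch

def refresh_judgement(pg_conn, scratch: sqlite3.Connection, person_id: int) -> None:
    score = judge(fetch_recent_comments(pg_conn, person_id))
    scratch.execute(
        "INSERT OR REPLACE INTO judgement (person_id, score) VALUES (?, ?)",
        (person_id, score),
    )
    scratch.commit()
```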

[–] [email protected] 3 points 4 months ago (1 children)

A good number of reddit subs related to politics used very, very heavy moderation to keep bots out. Many required a certain amount of karma, time on reddit, or similar to post in the first place. It did not always work, and it can lead to bubbles; obviously, so can just insulting other users. I would give it a try with some controversial memes, something like "Biden and Trump are the same". That usually gets some really bad discussions.
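Those reddit-style gates were blunt: pure account statistics, with no look at what anyone actually says. Roughly like this, with the thresholds invented for illustration:

```python
# Illustration of a blunt reddit-style gate; thresholds are made up.
from datetime import datetime, timedelta

MIN_KARMA = 100
MIN_ACCOUNT_AGE = timedelta(days=30)

def passes_gate(karma: int, account_created: datetime) -> bool:
    """Enough karma plus an old enough account -- nothing about
    the content of the user's comments is considered."""
    old_enough = datetime.utcnow() - account_created >= MIN_ACCOUNT_AGE
    return karma >= MIN_KARMA and old_enough
```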

Setting up a community to explain bans is not needed. The mods of a community are public, so it is easy to just message them.

[–] [email protected] 2 points 4 months ago

Yes, this is an attempt at something similar. I think the reality is that when things grow beyond a certain size you have to do some automated moderation things or else it gets overwhelming for the mods. This is an attempt at a new model for that, since I think human moderation of everything has a couple of different flaws, and some of the automated things reddit did had glaring flaws.