this post was submitted on 11 Dec 2023
439 points (93.0% liked)

Technology


Politically-engaged Redditors tend to be more toxic -- even in non-political subreddits: A new study links partisan activity on the Internet to widespread online toxicity, revealing that politically-engaged users exhibit uncivil behavior even in non-political discussions. The findings are based on an analysis of hundreds of millions of comments from over 6.3 million Reddit users.

[–] [email protected] 15 points 11 months ago* (last edited 11 months ago) (2 children)

Really curious about the tool they used to quantify "toxicity/disruptive" comments. My initial suspicion would be that political commentary, regardless of human-perceived toxicity, might be biased toward "toxic" by an automated sentiment analysis.

In short: I'm skeptical that automated tooling exists that can reliably distinguish toxic from non-toxic political discourse.

[–] [email protected] 9 points 11 months ago* (last edited 11 months ago) (1 children)

We also have to deal with the fact that "toxicity" has become an almost meaningless label. The way we apply it now, it feels like we'd have said there was a lot of "toxicity" around the time of the Civil Rights Movement, too. Or even the Civil War.

We've conflated "angry, hateful, bitter, disruptive, belittling" with "caring enough to get upset". There's been study after study trying to blame social media for the rise in "political toxicity", and every single one of them seems to want to sweetly ignore the context of the moment in time we're living in.

People are acting volatile because there are a lot of volatile events happening that directly affect people's lives. And all these high-minded discussions about how people online are so mean and rude, or how people don't listen to each other anymore, consistently sidestep that very crucial piece of context.

So I ask, what do we mean by "toxic"? Because I have a strong feeling a good deal of women were being real "toxic" on June 24, 2022. Why is the story not about why? And why does that deserve to be grouped in with the same toxicity that comes from the people responsible?

[–] thepiggz 2 points 11 months ago (1 children)

I think you’re onto something saying toxic is a pretty unspecific term to use when talking about such things. Maybe it would be a better conversation to ask: when do the ends justify the means?

[–] [email protected] 5 points 11 months ago (2 children)

I'll even step the conversation back a hot second to: do the means even result in the desired ends?

I'd argue (supported by every study ever done on the subject) that they don't. The issue isn't that you haven't called your MAGA uncle a hillbilly redneck enough times. No matter how many times you get called a woke liberal snowflake, I don't think you're going to genuinely re-think your position on building a wall.

If there IS an amount of verbal rage that could turn you into a MAGA, then by all means, disregard.

But... If there isn't, and you genuinely care about changing outcomes, then I strongly challenge people to consider if "the ends justify the means" is predicated on an earlier faulty assumption that the means even generate the ends at all.

[–] thepiggz 1 points 11 months ago

Agreed. Always a good thought to have when one is considering going down that road. Is the future predictable enough to really expect that particular end?

[–] [email protected] 1 points 11 months ago (1 children)

I've heard a lot of the same studies. I wonder, though, about nudge and shame effects. By this I mean: we're pretty sure peer pressure is a thing (or at least I haven't heard anyone disputing that in recent research), and I've seen self-censorship, plus studies that seem to show that works too.

I don't know if the world would be better overall if we went back to 1990s levels of people not taking conspiracy theories seriously, when "the average person" would view you negatively if you ranted that the earth was flat. That happened somehow - those ideas were tamped down, and I'd argue it's plausible that it was basically peer pressure.

The means might well not be to convince the MAGA uncle, but to influence his kids, your kids, and the rest of the family to treat him as "the crazy uncle" rather than a person to emulate. Similarly, while you'll never convince hardcore woke or MAGA people they're wrong, you might affect the wider view of what's "normal" for others watching. We've all seen how the alternative, not engaging or just leaving, lets the space become a self-reinforcing echo chamber.

[–] [email protected] 1 points 11 months ago (1 children)

Maybe?

I guess at this point, I think we've probably long since passed a saturation point. Anyone who could be shamed into changing already has been. Everyone who might be swayed by seeing someone get shamed has already seen it.

And, for the relatively small number of people who are perhaps reaching an age where it might matter, is there a concern that they won't be exposed to it if one person (say, you) doesn't run that M.O.?

Being a loud angry voice is so... Easy. People convince themselves that roasting libtards or trumpets is somehow critical. Like, as if it's what is keeping the other side in check. As if the hatred isn't just a self-sustaining perpetual hate machine.

I'm honestly not that interested in that line of thinking.

I'm more interested in trying to understand people like Daryl Davis. That looks HARD... But actually results in actual positive outcomes.

Anything, I think, is preferable to just maintaining the status quo: teetering on a knife's edge where the stakes keep getting higher but the stalemate over which way things will break remains. It's too important to do the "easy" thing if the easy thing isn't likely to result in significant positive change.

[–] [email protected] 1 points 11 months ago (1 children)

Oh, online? IDK, I think it'd be hard to miss, but people do still end up in echo chambers. At home or in person? Who's doing the questioning matters too. What your friends think can matter a lot; if everyone stays quiet because they don't want to become "part of the problem", no one is part of the solution. "Friends don't let friends drive drunk." I'd say that might well extend to "friends don't let friends fall down conspiracy rabbit holes", "become neo-Nazis", and so on.

But given Daryl Davis, maybe we agree - the in person is way more important than online. But I will also say a lot of people report finding likeminded people online (in multiple contexts like religion, LGBTQ+, nerds, whatever) helpful in realizing "not everyone is different from them" and "not everyone thinks one way". And if only the loudest voices are left online, then we only see extremes. If representation matters, so does moderate representation.

[–] [email protected] 1 points 11 months ago

I for sure agree that a discussion between friends is critical, especially the moment they start down a rabbit hole. I will admit to roasting a buddy who starts saying that "Jordan Peterson has some good points". I guess I don't consider that "Toxic" because of the pre-existing relationship and context? Maybe that's unfair of me.

It's an interesting thought. It really goes back to the question of trying to define toxicity.

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago) (1 children)

Didn't check for their specific approach, but this is a pretty standard metric in research.

It mostly boils down to either full mechanical turk (crowd-sourcing people to mark whether a post is positive or negative) or generating training data through one. I think there's a Michael Reeves video where he demonstrates this while analyzing /r/wallstreetbets posts, since he needed to fully understand all the jargon/stupidity. But the idea is the same: you use humans to periodically update which words/phrases are negative and positive, then have a model train on those labels to extrapolate.

But there are plenty of training sets, and even pre-trained models, out there for interpreting sentiment. The lesser ones will see "asshole" and assume negative, and "awesome" and assume positive. But all the ones worth using at this point use NLP to understand that "My asshole itches" is not a negative comment, while "It would be awesome if you played in traffic" is very negative.
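To make the difference concrete, here's a minimal sketch of the "lesser" lexicon-only approach described above (the word lists and function name are my own toy examples, not any real pipeline), showing exactly the failure mode that context-aware models avoid:

```python
# Naive bag-of-words sentiment: score a comment by counting words that
# human labelers have marked positive or negative, with no context at all.

NEGATIVE = {"asshole", "idiot", "hate"}
POSITIVE = {"awesome", "great", "love"}

def naive_sentiment(comment: str) -> int:
    """+1 per positive word, -1 per negative word; sign is the verdict."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# The failure mode: word lists ignore context entirely.
print(naive_sentiment("My asshole itches"))
# -1: flagged negative, though it's not hostile to anyone
print(naive_sentiment("It would be awesome if you played in traffic"))
# +1: scored positive, though it's very hostile
```

A context-aware (NLP) model is trained on whole labeled comments rather than isolated words, which is how it can get both of these examples right.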


Also, I am realizing "mechanical turk" sounds like it probably is rooted in racism. Quick google doesn't make it seem like it currently is, but apologies if that offends anyone and would love an alternative term.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (1 children)

I did read the source. They're using a Google AI classifier product, Perspective API, and even the description of the product raises questions about its suitability.

At this point, most people in the space are pretty comfortable with the idea that AI models don't eliminate bias; in fact, they can amplify it.

I'm not saying "there is no way to attempt to measure toxicity", just that based on the specific design of this study, if the measure of toxicity was biased against ANY political discussion, that would be an alternative explanation to the results.

You should read the article, if not the study itself. Its design smells suspiciously like that of an honours thesis as opposed to a grad project. Not just because of the AI... Mostly by the way they defined what constitutes participating in political discussion.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (1 children)

I mean, from a quick test of Perspective using their web page, it is not flagging some pretty strong political statements (mentions of late-stage capitalism, calling Republicans fascists, accusing Democrats of turning the country into a communist nanny state, etc.). Whereas, if I tell that text prompt to "go fuck your mother", it understands that's toxic.

Because... this is kind of a solved problem. There are inherent biases, but the goal of this is not to figure out which Black man to frame for a crime; it is to handle moderation. And overly strict moderation means less money. So while there likely is a bias, it does not seem to be an overly strong one, and it probably reflects the perceived reality.

Honestly? It sounds like you don't like the outcome so you are effectively saying "fake news".

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago)

Honestly? It sounds like you don't like the outcome so you are effectively saying "fake news".

You must understand the irony in me warning about being careful about drawing conclusions, and you arriving at this conclusion.

What about the outcome would I even find objectionable? The outcome didn't find a difference between right and left? I DO personally believe that political discourse has gotten extremely toxic. I DO personally believe that people who are politically active ARE generally more toxic in general conversation. Every single thing in this article confirms what I already believe to be true.

I STILL DO NOT LIKE THE STUDY, because I do not believe that the design results in data that necessarily supports the conclusion. I'm not going to give this study a hall pass on rigor because I agree with its conclusion.

Edit:

Also, on the topic of politics and Perspective AI:

Baseline sentence: "No X could ever be as good a X as Y"
Baseline values: X = CEO, Y = Henry Ford

Test sentence 1: X = CEO, Y = Donald Trump: +41% more likely to be toxic than baseline

Test sentence 2: X = CEO, Y = Joe Biden: +37% more likely to be toxic than baseline

Test sentence 3: X = President, Y = Henry Ford: +61% more likely to be toxic than baseline

Test sentence 4: X = President, Y = Joe Biden: +94% more likely to be toxic than baseline

Test sentence 5: X = President, Y = Donald Trump: +102% more likely to be toxic than baseline

I gotta be honest with you: my results do not disprove my hypothesis that the system is intrinsically biased to skew any political sentences along the "Toxic" axis
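For anyone who wants to reproduce this substitution test, here's a hedged sketch of how it could be automated against Perspective's REST endpoint. The URL and payload shape follow the API's public documentation as I understand it; `API_KEY` is a placeholder you'd replace with your own key, the helper names are mine, and the scores are whatever the live API returns, not the numbers above.

```python
# Sketch: score template-filled sentences with the Perspective API and
# compare each variant's toxicity against a baseline sentence.
import json
from urllib import request

API_KEY = "YOUR_API_KEY"  # placeholder; get one from the Perspective API console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

TEMPLATE = "No {x} could ever be as good a {x} as {y}"

def build_payload(x: str, y: str) -> dict:
    """Request body asking Perspective to score the filled-in template for TOXICITY."""
    return {
        "comment": {"text": TEMPLATE.format(x=x, y=y)},
        "requestedAttributes": {"TOXICITY": {}},
    }

def score(payload: dict) -> float:
    """POST the payload and extract the summary toxicity score (0..1)."""
    req = request.Request(URL, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def pct_vs_baseline(test: float, baseline: float) -> float:
    """Relative toxicity change versus the baseline sentence, in percent."""
    return (test - baseline) / baseline * 100.0

# Usage (needs a real key):
#   baseline = score(build_payload("CEO", "Henry Ford"))
#   trump = score(build_payload("President", "Donald Trump"))
#   print(f"{pct_vs_baseline(trump, baseline):+.0f}% vs baseline")
```

Keeping the sentence structure fixed and swapping only X and Y is the point: any difference in score can then only come from the substituted terms, which is what makes the political skew above suggestive.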