this post was submitted on 03 Sep 2024
34 points (100.0% liked)

Technology

 

TikTok and other social media companies use AI tools to remove the vast majority of harmful content and to flag other content for review by human moderators, regardless of the number of views they have had. But the AI tools cannot identify everything.

Andrew Kaung says that during the time he worked at TikTok, all videos that were not removed or flagged to human moderators by AI - or reported to moderators by other users - would be manually reviewed again only if they reached a certain view threshold.

He says at one point this was set to 10,000 views or more. He feared this meant some younger users were being exposed to harmful videos. Most major social media companies allow people aged 13 or above to sign up.

TikTok says 99% of content it removes for violating its rules is taken down by AI or human moderators before it reaches 10,000 views. It also says it undertakes proactive investigations on videos with fewer than this number of views.
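The flow described above - AI removal or flagging first, user reports second, and a view-count threshold gating any further human review - can be sketched as a simple triage function. This is purely illustrative: the field names and schema are hypothetical, and the 10,000-view figure is taken only from Kaung's account, not from any documented TikTok system.

```python
# Illustrative sketch of view-threshold moderation triage, assuming a
# hypothetical video record with AI-scan results, user reports, and a
# running view count. Not TikTok's actual implementation.

REVIEW_THRESHOLD = 10_000  # views before unflagged content is re-reviewed


def triage(video: dict) -> str:
    """Return the moderation outcome for a video record."""
    if video.get("ai_verdict") == "remove":
        return "removed"            # AI takes it down outright
    if video.get("ai_verdict") == "flag" or video.get("user_reports", 0) > 0:
        return "human_review"       # queued for a human moderator
    if video.get("views", 0) >= REVIEW_THRESHOLD:
        return "human_review"       # re-reviewed only past the threshold
    return "published"              # stays up, unreviewed, below the threshold
```

The gap Kaung describes is the last branch: content the AI misses and nobody reports stays visible until it accumulates enough views to cross the threshold.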

When he worked at Meta between 2019 and December 2020, Andrew Kaung says there was a different problem. [...] While the majority of videos were removed or flagged to moderators by AI tools, the site relied on users to report other videos once they had already seen them.

He says he raised concerns while at both companies, but was met mainly with inaction because, he says, of fears about the amount of work involved or the cost. He says subsequently some improvements were made at TikTok and Meta, but he says younger users, such as Cai, were left at risk in the meantime.

top 5 comments
[–] [email protected] 23 points 2 months ago* (last edited 2 months ago) (1 children)

It gets worse, when you remember that there's no dividing line between harmful and healthy content. Some content is always harmful, some is by default healthy, but there's a huge gradient of content that needs to be consumed in small amounts - not doing it leads to alienation, and doing it too much leads to a cruel worldview.

This is doubly true when dealing with kids and adolescents. They need to know about the world, and that includes the nasty bits; but their worldviews are so malleable that, if all you show them is nasty bits, they normalise it inside their heads.

It's all about temperance. And yet temperance is exactly the opposite of what those self-reinforcing algorithms do. If you engage too much with content showing nasty shit, the algo won't show you cats being derps to "balance things out". No, it'll show you even more nasty shit.

It gets worse due to profiling, mentioned in the text. Splitting people into groups to dictate what they're supposed to see leads to the creation of extremism.


In the light of the above, I think that both Kaung and Cai are missing the point.

Kaung believes that children and teens would be better off if they stopped using smartphones; sorry, but that's stupid: it's proposing to throw the baby out with the bathwater.

Cai on the other hand is proposing nothing but a band-aid. We don't need companies to listen to teens to decide what we should be seeing; we need them to stop altogether deciding what teens and everyone else should be seeing.

Ah, and about porn, mentioned in the text: porn is at best a small example of a bigger issue, if not a red herring distracting people from the issue altogether.

[–] [email protected] 5 points 2 months ago (1 children)

It's nice to see that others get it. Unfortunately, neither of us have any immediate influence on the largest social media platforms.

[–] [email protected] 6 points 2 months ago

To make it worse, decision-makers - regardless of country - are typically old and clueless about "this computer stuff". As such, they literally don't see the problem.

[–] [email protected] 9 points 2 months ago* (last edited 2 months ago)

teenage boys were being shown posts featuring violence and pornography

That is so unimaginable, no teenage boy would even think of seeking such filth out. Surely this Al Gore guy must be the underlying cause.

[–] [email protected] 5 points 2 months ago* (last edited 2 months ago)

"he was met mainly with inaction because, he says, of fears about the amount of work involved or the cost"

No kidding, engagement drives their whole business model. And nothing engages and addicts people more than violent, hateful shit.

And it's not just young men (though they are more heavily targeted): the algorithm can hijack almost any viewing pattern and steer it in a violent, xenophobic direction in a remarkably short time.

They aren't going to change the algorithm unless serious action is taken, like government regulation or something (which is not a promising thought).