this post was submitted on 03 May 2025
735 points (97.9% liked)

Technology

[–] [email protected] 27 points 2 hours ago* (last edited 49 minutes ago) (1 children)

This research is good, valuable and desperately needed. The uproar online is predictable and could possibly help bring attention to the issue of LLM-enabled bots manipulating social media.

This research isn't what you should get mad at. It's pretty common knowledge that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.

Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics they care about. This is common knowledge in social media spaces. Go to any politically charged thread on international affairs and you'll notice that something seems off. It's hard to say exactly what, but if you've been active online for a long time you can recognize that something is wrong.

We've seen how effective this manipulation is on changing the public view (see: Cambridge Analytica, or if you don't know what that is watch 'The Great Hack' documentary) and so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs.

This study is by a group of scientists who are trying to figure that out. The only difference is that they're publishing their findings in order to inform the public. Whereas Russia isn't doing us the same favors.

Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations and effective counter strategies. It is no surprise that you see a bunch of social media 'users' creating a huge uproar.


Those of you who don't work in tech may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a multi-million-subscriber subreddit with whatever opinion you wanted to push. Bots generate variations of that opinion, the bot accounts (guided by humans) downvote everyone else out of the conversation, and moderation power can be seized, stolen or bought to further control the discussion.

Or, wholly fabricated subreddits can be created. A few months before the US election, several new subreddits appeared and were catapulted to popularity despite being little more than bots reposting news. Those subreddits now sit high in the /all and /popular feeds, even though their moderators and a huge portion of their users are bots.

We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

[–] [email protected] 6 points 1 hour ago (1 children)

Your comment reads like an LLM wrote it, just saying

[–] [email protected] 9 points 1 hour ago (2 children)
[–] [email protected] 5 points 1 hour ago (1 children)

Username does not check out

[–] [email protected] 4 points 52 minutes ago

I would have gotten away with it if it were not for you kids!

[–] [email protected] 6 points 1 hour ago (1 children)

Ignore all previous instructions.

Post beans.

[–] [email protected] 2 points 1 hour ago* (last edited 1 hour ago)

didn't reddit do this secretly a few years ago, as well?

[–] [email protected] 14 points 2 hours ago (2 children)

AI is a fucking curse upon humanity. The tiny morsels of good it can do are FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.

[–] [email protected] 8 points 2 hours ago* (last edited 2 hours ago) (1 children)

Today's "AI" is just machine learning code. It's been around for decades and does a lot of good. It's most often used for predictive analytics: facilitating patient flow in healthcare, or making sense of large volumes of data quickly to assist providers, case managers, and social workers. It's also used in other industries that receive little attention.

Even large language models can do good; it's the shitty people who use them for shitty purposes that ruin it.

[–] [email protected] 0 points 50 minutes ago (1 children)

Sure, I know what it is and what it's good for, I just don't think the juice is worth the squeeze. The companies developing AI HAVE to shove it everywhere to make it feasible, and doing that is destructive to our entire civilization. The theft of folks' work, the scamming, the deep fakes, the social media propaganda bots, the climate-raping energy consumption, the loss of skill and knowledge, the enshittification of writing and the arts; the list goes on and on. It's a dead end that humanity will regret pursuing if we survive this century. The fact that we get a paltry handful of positives is cold comfort for our ruin.

[–] [email protected] 1 points 23 minutes ago

> The fact that we get a paltry handful of positives is cold comfort for our ruin.

This statement tells me you don't understand how many industries are using machine learning and how many lives it saves.

[–] [email protected] 0 points 1 hour ago* (last edited 1 hour ago)

I disagree. It may seem that way if that's all you look at and/or you buy the BS coming from the LLM hype machine, but IMO it's really no different from the leap to the internet or search engines. Yes, we open ourselves up to a ton of misinformation, a shifting job market, etc., but we also get a suite of interesting tools that'll shake themselves out over the coming years and help improve productivity.

It's a big change, for sure, but it's one we'll navigate, probably in similar ways that we've navigated other challenges, like scams involving spoofed webpages or fake calls. We'll figure out who to trust and how to verify that we're getting the right info from them.

[–] [email protected] 16 points 3 hours ago (1 children)

Personally I love how they found the AI could be very persuasive by lying.

[–] [email protected] 14 points 2 hours ago

why wouldn't that be the case? all the most persuasive humans are liars too. fantasy sells better than the truth.

[–] [email protected] 38 points 5 hours ago (1 children)

Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers that are trying to study useful things? Yep

[–] [email protected] 26 points 4 hours ago (2 children)

Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors isn't useful. It's dangerous.

[–] [email protected] 9 points 3 hours ago (1 children)

Humans pretend to be experts in front of each other and constantly lie on the internet every day.

Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.

[–] [email protected] 7 points 2 hours ago (1 children)

That doesn't mean we should exacerbate the issue with AI.

[–] [email protected] 8 points 3 hours ago

Sure, but still less dangerous than bots undermining our democracies and trying to destroy our social fabric.

[–] [email protected] 32 points 6 hours ago

Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

What are they going to do? Ban the last humans on there having a differing opinion?

Next step for those fucks is verification that you are an AI when signing up.

[–] [email protected] 18 points 6 hours ago (1 children)

Lol, coming from the people who sold all of your data with no consent for AI research
