This research is good, valuable, and desperately needed. The uproar online is predictable, and it may even help draw attention to the issue of LLM-enabled bots manipulating social media.
This research isn't what you should get mad at. It's common knowledge among long-time users that Reddit is overrun with bots: advertising bots, scam bots, political bots, and so on.
Intelligence services of nation states, and political actors seeking power, are running exactly these kinds of influence operations on social media, using bot posters to dominate the conversation on the topics they care about. Go to any politically charged thread on international affairs and you'll notice something is off. It's hard to pin down exactly what, but if you've been active online long enough, you learn to recognize when a conversation feels wrong.
We've already seen how effective this kind of manipulation is at shifting public opinion (see Cambridge Analytica, or, if that name means nothing to you, watch the documentary 'The Great Hack'). So it's only natural to wonder how much more effective online manipulation has become now that bad actors can use LLMs.
This study is by a group of scientists trying to answer that question. The only difference between them and the bad actors is that they're publishing their findings to inform the public. Russia isn't doing us the same favor.
Naturally, it is in the interest of everyone using LLMs to manipulate online conversation that this kind of research never be done. Making this information public could lead to reforms, regulations, and effective counter-strategies. It's no surprise that a bunch of social media 'users' are creating a huge uproar over it.
Those of you who don't work in tech may not appreciate just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a large multi-million-subscriber subreddit with whatever opinion you wanted to push. Bots generate endless variations of your talking points; the bot accounts, guided by a handful of humans, downvote everyone else out of the conversation; and moderation power can be seized, stolen, or bought to control the conversation even further.
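To make the "easy and cheap" point concrete, here's a rough sketch of what the comment-generation half of such an operation could look like. Everything in it is hypothetical: the endpoint, the model name, the key, the response shape (assumed OpenAI-style), and the talking point are all placeholders, not any real service. The point is the scale of effort: a few dozen lines to mass-produce paraphrased opinions.

```python
# Hypothetical sketch: mass-producing paraphrases of one talking point.
# The endpoint, model name, and key below are placeholders, not a real service.
import requests

API_URL = "https://llm.example.com/v1/chat"  # placeholder LLM endpoint
API_KEY = "sk-REDACTED"                      # placeholder credential

TALKING_POINT = "Policy X is failing and everyone is starting to notice."

def generate_variant(persona: str) -> str:
    """Ask the model to restate the talking point in a given voice."""
    payload = {
        "model": "some-chat-model",  # placeholder model name
        "messages": [
            {"role": "system",
             "content": f"You are a casual Reddit commenter. Persona: {persona}."},
            {"role": "user",
             "content": f"Restate this opinion in your own words, "
                        f"in 1-3 sentences: {TALKING_POINT}"},
        ],
    }
    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         timeout=30)
    resp.raise_for_status()
    # Assumes an OpenAI-style response layout.
    return resp.json()["choices"][0]["message"]["content"]

# Each "persona" yields a differently voiced copy of the same opinion.
for persona in ["tired parent", "college student", "small business owner"]:
    print(f"--- {persona} ---")
    print(generate_variant(persona))
```

Run at scale, the marginal cost per comment is a fraction of a cent. The generation step is not the bottleneck; acquiring and maintaining plausible-looking accounts is where the money and staff go.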
Or, wholly fabricated subreddits can be created. In the few months before the US election, several new subreddits were catapulted to popularity despite being little more than bots reposting news. Those subreddits now sit high in the r/all and r/popular feeds, even though their moderators and a large portion of their users are bots.
We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.