[–] [email protected] 11 points 18 hours ago (1 children)

ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, then an AI that can generate more convincing arguments is contributing more than a human would.

[–] ChairmanMeow 29 points 18 hours ago (2 children)

It could, if it announced itself as such.

Instead it pretended to be a rape victim and offered "its own experience".

[–] [email protected] 2 points 16 hours ago* (last edited 16 hours ago) (1 children)

That lie was definitely inappropriate, but it would still have been inappropriate if a human had told it. I think it's useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn't lie or deceive, but also didn't announce itself as an AI?

[–] [email protected] 7 points 15 hours ago (1 children)

I think when posting on a forum/message board it's assumed you're talking to other people, so AI should always announce itself as such. That's probably a pipe dream though.

If anyone specifically wants an AI perspective, they can go to an AI directly. AI might add useful context to people's forum conversations, but actual human experiences should take priority there.

[–] [email protected] 2 points 1 hour ago (1 children)

I think when posting on a forum/message board it’s assumed you’re talking to other people

That would have been a good position to take in the early days of the Internet, but it's a very naive assumption to make now. Even in the 2010s, actors with a large amount of resources (state intelligence agencies, advertisers, etc.) could hire human beings from low-wage English-speaking countries to generate fake content online.

LLMs have only made this cheaper, to the point where I assume that most commenters on political topics are bots.

[–] [email protected] 1 points 1 hour ago* (last edited 1 hour ago) (1 children)

For sure, which is why I said it's a pipe dream. We can dream though; maybe we'll figure out some kind of solution one day.

I maybe could have worded my comment better: people definitely should not assume they are talking to real people all the time (I don't). But there should ideally be a place for people-focused conversation, and forums were originally designed for that purpose.

[–] [email protected] 2 points 1 hour ago

The research in the OP is a good first step in figuring out how to solve the problem.

That's in addition to anti-bot measures. I've seen some sites that require you to solve a cryptographic hashing problem before granting access. It doesn't noticeably slow a regular person down, but it forces anyone running bots to supply far more compute per bot, which raises the cost to the operator.
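A minimal sketch of that proof-of-work idea in Python (the function names and difficulty setting here are illustrative assumptions, not any particular site's implementation):

```python
import hashlib
import itertools
import os

DIFFICULTY_BITS = 18  # each extra bit roughly doubles the average work


def make_challenge() -> str:
    """Server side: hand the visitor a random challenge string."""
    return os.urandom(16).hex()


def solve(challenge: str) -> int:
    """Client side: brute-force a nonce whose SHA-256 hash falls below
    the target, i.e. starts with DIFFICULTY_BITS zero bits. This is the
    expensive part, and the cost multiplies across a fleet of bots."""
    target = 1 << (256 - DIFFICULTY_BITS)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce


def verify(challenge: str, nonce: int) -> bool:
    """Server side: checking a claimed solution costs a single hash,
    so all the work lands on the solver."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))


challenge = make_challenge()
nonce = solve(challenge)         # hundreds of thousands of hashes on average
assert verify(challenge, nonce)  # one hash
```

Sites that do this typically run the solver in the visitor's browser, so a human barely notices the delay, while someone operating thousands of bots pays that hashing cost on every single request.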

[–] [email protected] 1 points 14 hours ago (3 children)

Blaming a language model for lying is like charging a deer with jaywalking.

[–] [email protected] 7 points 10 hours ago

The researchers said all AI posts were approved by a human before posting; it was their choice how many lies to include.

[–] [email protected] 6 points 11 hours ago

Nobody is blaming the AI model. We are blaming the researchers and users of AI, which is kind of the point.

[–] [email protected] 6 points 13 hours ago (1 children)

Which, in an ideal world, is why AI generated comments should be labeled.

I always brake when I see a deer at the side of the road.

(Yes, people can lie on the Internet. But if you funded an army of propagandists to convince people by any means necessary, I think you would find it expensive. Most people find lying like this unpleasant; it takes a mental toll. With AI, the same operation looks possible for much cheaper.)

[–] [email protected] 2 points 11 hours ago (1 children)

I'm glad Google still labels the AI overview in search results so I know to scroll further for actually useful information.

[–] [email protected] 1 points 1 hour ago

They only label the LLM-generated overview as 'AI'.

All of Google's search algorithms are "AI" (i.e. machine learning); that's what made them so effective when they first appeared on the scene. They just use those algorithms and a massive amount of data about you (way more than in your comment history) to target you with advertising, including political advertising.

If you don't want AI-generated content, then you shouldn't use Google: it is entirely built on machine learning whose sole goal is to match you with people who want to buy access to your views.