this post was submitted on 08 Jul 2023
382 points (97.8% liked)

Programming

17519 readers
412 users here now

Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community cross post into it.

Hope you enjoy the instance!

Rules


  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos try to add in some form of tldr for those who don't want to watch videos


founded 1 year ago

/r/programming came back up two days ago and as far as I can tell everything relating to the blackout was wiped. I kinda expected it since spez was admin.

Another thing that surprised me was how much chatGPT bot spam there is (damn, it is so, so bad; wonder what the mods are doing over there... ah yes, spez).

I used to sort by hot so it was hidden away a bit for me before.

Anyways I hope Lemmy does not fall into the same pitfalls!

goes back into lurk mode

top 42 comments
[–] RonSijm 102 points 1 year ago (2 children)

Another thing that surprised me was how much chatGPT bot spam there is

Not really a bad thing. Part of the protest was to devalue the platform...

See what /r/ProgrammerHumor/ is doing - all titles are camelCase, and all the comments started including and returning things. It's no longer something Reddit could sell to AI content farms.

If mods are removed for participating in the blackout, the next best thing is probably to let their sub go completely unmoderated and let things turn into a shitshow of unusable content from spam bots.

Don't think you can really teach an AI bot something by letting it regurgitate its own output.

[–] lowleveldata 33 points 1 year ago* (last edited 1 year ago) (2 children)

Oh Reddit, who's going to protect you from the AI invasion after you removed the mods?

[–] coloredgrayscale 22 points 1 year ago (1 children)

The solution is obviously to replace the mods by bots, to fight other bots.

Battle bots, but virtual XD

[–] [email protected] 10 points 1 year ago (1 children)

Then we've got bots talking to bots about articles written by bots, moderated by bots. It's gonna be great.

[–] coloredgrayscale 4 points 1 year ago

You could call it a botnet

[–] [email protected] 1 points 1 year ago

My absolute favorite response is what /r/madlads did - they made all of their subscribers mods.

[–] [email protected] 13 points 1 year ago

Lol, imagine if they included escape keys for various languages. Then reddit would really never be able to sell the data.

[–] AdmiralShat 66 points 1 year ago

What a shit show of a website

[–] [email protected] 46 points 1 year ago (3 children)

We must prevent these kinds of bots from getting a foothold here.

I acknowledge that we do have bots here [on Lemmy], reposting top posts from Reddit. As we grow in number, we must also scale down these bots until the day that only moderation-related bots exist in our ecosystem.

[–] [email protected] 8 points 1 year ago (1 children)

What’s the point of those bots? There’s no karma to farm on Lemmy.

[–] nous 6 points 1 year ago (2 children)

What is the point in farming karma at all?

[–] [email protected] 10 points 1 year ago (1 children)

You can sell high karma accounts to spammers.

[–] nous 4 points 1 year ago

Spammers don't want high-karma accounts, they want high-value accounts. Karma is just one (very easy to gain and view) indication of value. The lack of karma does not mean that spammers won't want to buy accounts on Lemmy, just that the metrics they use to judge a valuable account are different and less transparent.

[–] [email protected] 2 points 1 year ago (1 children)

The accounts with high karma end up getting sold to businesses that want to use them to advertise (but make it look like grass roots support).

[–] nous 2 points 1 year ago

They want high-value accounts; karma is just one measure of that. Removing karma from accounts does not remove the value of those accounts, it just changes what metrics are used to judge value. So there is still an incentive to create bots that try to build valued accounts, even if those accounts are not actually creating valuable content. The only question is what businesses will see as a valued account.

Though I do think removing karma is a positive, as it forces them to work a bit harder.

[–] [email protected] 7 points 1 year ago (2 children)

As I argued in another comment, there are many useful bots for certain niche communities that I really think have a place here, even though I am generally wary of AI accounts infesting the fediverse as well.

Good examples of useful, yet not moderation-related, bots are on TCG/CCG subs like Magic: The Gathering and Hearthstone, where they provide context for card names or convert deck codes into a nicely formatted table of the cards used. Or, on the Lego sub, returning any set number as a link to the proper BrickLink entry. This kind of bot should be allowed and even encouraged where appropriate.
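
For illustration only, a rough sketch of what such a link bot's core could look like; the digit pattern and the BrickLink search URL here are assumptions, not any existing bot's implementation:

```cpp
#include <iostream>
#include <regex>
#include <string>
#include <vector>

// Rough sketch of a "service bot" reply: scan a comment for things that look
// like Lego set numbers and turn them into catalogue links. The digit pattern
// and the search URL are assumptions for illustration only.
std::vector<std::string> setLinks(const std::string& comment) {
    std::vector<std::string> links;
    const std::regex setNumber(R"(\b(\d{4,6})\b)");  // crude: any 4-6 digit number
    for (std::sregex_iterator it(comment.begin(), comment.end(), setNumber), end;
         it != end; ++it) {
        links.push_back("https://www.bricklink.com/v2/search.page?q=" + (*it)[1].str());
    }
    return links;
}

int main() {
    for (const auto& link : setLinks("Just finished set 10030, what a build!")) {
        std::cout << link << '\n';
    }
}
```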

Then there are plenty of irrelevant and annoying bots we really could do without, like the alphabetical-order bot, the haiku bot, the dozens of bots quoting LOTR or Star Wars characters, and so on. Like most reddit jokes they stopped being funny fairly quickly and now add nothing to the conversation, but are being kept around for karma.

And then there are the more insidious bots that are about to become widespread, being harder to detect the more their refinement advances. It is going to be a constant arms race between bot detection and bot deception skills.

[–] [email protected] 5 points 1 year ago

There are some bots that are useful for everyone (community specific ones mostly), those I have no qualms with as they help everyone in that community.

The ones I abhor are the spam bots: different accounts giving variations of the same message, possibly to farm karma or inflate activity numbers (I wouldn't rule anything out when it comes to spez making his darling look active).
I also hate downvote bots, as I feel they don't contribute anything.

[–] [email protected] 4 points 1 year ago

Good examples of useful, yet not moderation-related, bots are on TCG/CCG subs like Magic: The Gathering and Hearthstone, where they provide context for card names or convert deck codes into a nicely formatted table of the cards used. Or, on the Lego sub, returning any set number as a link to the proper BrickLink entry.

Yes, thank you for that. I guess I used the wrong term; I should have said "service bots", those bots that provide a useful service for the community.

And yes, entertainment and joke bots are tiring. (They can exist, but can we apply a limit on their frequency? Let's say an entertainment bot can only post a maximum of 5 posts per week on a small instance and 50 posts per week on a significantly larger instance. That way it would still remain novel, and it's like a lottery where people look forward to its next appearance.)
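
A minimal sketch of that quota idea, assuming a simple per-week post counter; the 5/50 limits come from the comment above, while the user-count cutoff for a "small instance" is made up:

```cpp
#include <cstdint>
#include <iostream>

// Minimal sketch of the per-week quota idea above. The limits (5 and 50 posts)
// come from the comment; the 10,000-user cutoff for "small instance" is made up.
bool botMayPost(std::uint64_t instanceUsers, std::uint32_t postsThisWeek) {
    const std::uint32_t weeklyLimit = (instanceUsers < 10'000) ? 5 : 50;
    return postsThisWeek < weeklyLimit;
}

int main() {
    std::cout << std::boolalpha
              << botMayPost(2'000, 5) << '\n'     // small instance, quota used up -> false
              << botMayPost(120'000, 5) << '\n';  // large instance, under quota -> true
}
```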

And then there are the more insidious bots that are about to become widespread, being harder to detect the more their refinement advances. It is going to be a constant arms race between bot detection and bot deception skills.

This is the hardest part, as the bot farms typically have the advantage of first strike. If we are not careful, we will be left behind, since being on defense puts us in the position of a reactive player in this game of whack-a-bot.

[–] [email protected] 0 points 1 year ago

There already is a ChatGPT bot, and I see people bringing it into threads sometimes. I downvote almost every person who does so, as I've yet to see a single case where it was actually asked for or meaningfully contributed.

I want more communities to have rules against unsolicited AI comments and for them to better enforce them (one of the cases I'm referring to was in a community that already had a rule against AI comments, but the comment had still been up for a while and had been upvoted).

[–] [email protected] 34 points 1 year ago (3 children)

Anyways I hope Lemmy does not fall into the same pitfalls!

I really hope so! Just watch out for Meta.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago)

I think the majority in the fediverse would just move to an instance that defederated from Meta; at least I know I would, and I have a feeling that I am a typical fediverse user.

[–] philm 6 points 1 year ago (1 children)

Yeah, I have the feeling that sign-up should probably default to being manually moderated, to avoid a bot swarm taking over accounts (and probably a lot of bot instances would need to be blacklisted then as well).

I'm not sure how dirty the game of big social media is/will be, but if they really feel threatened, they may start something like that (might make sense to be legally secured in that case...).

[–] [email protected] 1 points 1 year ago (1 children)

That is pretty labor-intensive. I wonder how many of us would want to pitch in, or if the server software even allows delegating that responsibility to non-admins. I know for sure that I don't have time to mod Lemmy, as much as I want to see it succeed after abandoning r

[–] philm 3 points 1 year ago

I think by the time this becomes relevant, it will be implemented (I don't think it's that difficult). Apart from that, a lot of instances already have manual sign-up, and it's working well so far AFAIK. (The beauty of decentralization is that this work is distributed among all the different instances, and the number of instances ideally grows in proportion to the userbase.) But ideally it wouldn't be necessary, and some kind of smart algorithm (AI? captcha?) would decide whether a user is allowed to register (as it is currently with captchas)... But we'll see...

[–] Feyter -1 points 1 year ago (1 children)

What could Meta possibly change about Lemmy or programming.dev? They have no power here even if we federate with them.

[–] [email protected] 10 points 1 year ago (1 children)

They have no power here even if we federate with them.

The current matter with Meta is that they have bad intentions towards the fediverse

https://infosec.pub/post/400702

And even if you don't have the Threads app installed, Meta is also a privacy threat to fediverse users, if there are fediverse instances that are still federated with Meta.

Ross Schulman, senior fellow for decentralization at digital rights nonprofit the Electronic Frontier Foundation, notes that if Threads emerges as a massive player in the fediverse, there could be concerns about what he calls “social graph slurping." Meta will know who all of its users interact with and follow within Threads, and it will also be able to see who its users follow in the broader fediverse. And if Threads builds up anywhere near the reach of other Meta platforms, just this little slice of life would give the company a fairly expansive view of interactions beyond its borders.

https://www.wired.com/story/meta-threads-privacy-decentralization/

[–] philm 1 points 1 year ago

Yeah, that's also my worry: that Meta (is it even legally allowed to use that name anyway...?) will try to grab data and analyze it basically for free (without the potential ad revenue, but, well, at least free data...). With AI it can likely easily pinpoint/target each user and create a profile or something, maybe even link it with people on their platforms, I guess...

Anyway, they could still just use the API, I guess; they just can't easily subscribe to the activity streams via their official instance (but of course they could spin up an instance that just crawls and subscribes to every instance).

I'm really interested what their intention is exactly, but it's for sure not good...

[–] [email protected] 28 points 1 year ago (1 children)

Ah yes, spez. I got permabanned for harassment the day 3p apps died; I clicked the link, and it was a comment I had made a week prior insulting spez.

He really leans into the man baby version of Elon

[–] [email protected] 6 points 1 year ago

Yeah, he's kind of a wannabe Elon, but I think he hasn't really found a fan base the way Elon has; I'm not sure anyone really likes him lol

[–] [email protected] 23 points 1 year ago (1 children)

You should quit giving Reddit your views. They will use those views to sell their IPO.

[–] [email protected] 22 points 1 year ago

As a robosexual, I only approve of Lucy Liu bots

[–] [email protected] 19 points 1 year ago

No need for lurk mode here friend

[–] [email protected] 18 points 1 year ago (2 children)

how much chatGPT bot spam there is

It doesn't surprise me at all. The spam was already there on /r/programming and /r/coding way before the blackout. I tried to report all the posts and I asked to become a mod to clean up all this shit (and was rejected), but nothing worked. They don't want to clean the mess, and that's another reason why I don't care if reddit dies.

As for /r/learnprogramming, it's still filled with spam and people who cannot do a proper Google query; it's as hopeless as the rest. I feel bad for all the newbies who want to learn something. I hope the "learnprogramming" of Lemmy will be more successful.

[–] [email protected] 7 points 1 year ago (1 children)

To be fair, new programmers generally don't know enough to construct a proper Google query either. And yes, there are some lazy people who just don't try. But sometimes you know what you want to achieve, yet any query you try seems to be unhelpful. For example, if I want to learn how to store settings in C++, the first link tells me to use Boost. Now I need to learn about linking libraries and 300 other Boost-isms, while anyone with basic knowledge could recommend reading the file line by line and splitting each string on the equals sign.
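
For what it's worth, a minimal sketch of that line-by-line approach, with no Boost involved; the file name and key are made up for illustration:

```cpp
#include <fstream>
#include <iostream>
#include <map>
#include <string>

// Minimal sketch: read "key=value" lines from a settings file into a map.
// The file name and keys are made up for illustration.
std::map<std::string, std::string> loadSettings(const std::string& path) {
    std::map<std::string, std::string> settings;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line)) {
        if (line.empty() || line[0] == '#') continue;  // skip blanks and comments
        const auto eq = line.find('=');
        if (eq == std::string::npos) continue;         // not a key=value line
        settings[line.substr(0, eq)] = line.substr(eq + 1);
    }
    return settings;
}

int main() {
    auto settings = loadSettings("settings.conf");
    std::cout << "theme = " << settings["theme"] << '\n';  // empty if unset
}
```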

[–] Deely 1 points 1 year ago

Judging by the quality of Google search results, I believe experienced devs have the same problems with Google as well...

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

I had a lengthy period where I enjoyed regularly helping folks in r/learnprogramming. But it got exhausting fast. For every person putting in a good attempt at learning, there were 10 people who couldn't do the most basic level of googling, and the content was often extremely repetitive as a result.

The sub also faced a constant stream of people who just wanted to self-advertise their own YouTube videos for teaching programming, as if the lack of such was the barrier to learning.

Oh, and soooo many people who clearly just wanted to be told the answer to their homework questions and weren't even hiding that.

[–] [email protected] 12 points 1 year ago (1 children)

I posted a question on one of these subs a few weeks ago and got mostly very generic answers that clearly hadn't read the whole post. I was confused at the time, but it makes sense now: it was the same kind of basic troubleshooting steps you get from ChatGPT. Reddit is doomed; there are way too many bots. We can only hope to find a solution before it spreads to the whole internet.

[–] thatsPrettyNeat 3 points 1 year ago

We'll have to prove we're human and log in every time we use a service, just to be sure it doesn't have bots. But AI will get better and better at that too. Is captcha the next level above Go? Lol

[–] [email protected] 1 points 1 year ago

Testing to see if I can post here (languages seem to be messing some things up).