This should probably be pinned.
Agree
Here we go: https://overseer.dbzer0.com/
API doc: https://overseer.dbzer0.com/api/
curl -X 'GET' \
'https://overseer.dbzer0.com/api/v1/instances' \
-H 'accept: application/json'
Will spit out suspicious instances based on fediverse.observer. You can adjust the threshold to your own preference.
Nice! Would be cool if you could also include current statuses of captchas, emails, and application requirements.
Tell me how to fetch them and it will. ;)
I think the easiest option is to just iterate through the list of suspicious instances and check {instance_url}/api/v3/site for each of them. The relevant keys of the response JSON are site_view.local_site.captcha_enabled, site_view.local_site.registration_mode, and site_view.local_site.require_email_verification.
Since it's a bunch of separate requests, it probably makes sense to do them in parallel, and also to cache the results for at least a while.
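For instance, a rough curl + jq sketch along these lines could work; note that the .instances[].domain path into the overseer response is my assumption, so adjust it to the actual payload (caching is left out for brevity):
# List suspicious instances via the overseer API, then query each
# instance's /api/v3/site for its signup settings.
# ASSUMPTION: the overseer response exposes .instances[].domain;
# check the API doc above and adjust the jq path if it differs.
curl -s -H 'accept: application/json' \
  'https://overseer.dbzer0.com/api/v1/instances' \
  | jq -r '.instances[].domain' \
  | xargs -P 8 -I{} sh -c '
      d="$1"
      settings=$(curl -s --max-time 10 "https://$d/api/v3/site" \
        | jq -r "[.site_view.local_site.captcha_enabled,
                  .site_view.local_site.registration_mode,
                  .site_view.local_site.require_email_verification] | @tsv")
      # one printf per job keeps parallel output on separate lines
      printf "%s\t%s\n" "$d" "$settings"
    ' _ {}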
It occurs to me that this kind of thing is better left to observer, as it's set up to poll instances and gather data. I would suggest you ask them to ingest and expose this data as well.
CAPTCHA is the bare minimum. Who the hell turns it off?
There is an argument to be made that captchas can be automatically bypassed with some effort.
OTOH, the current wave of bots is quite clearly favoring instances with captcha disabled, so clearly it's acting as at least a small deterrent.
Sometimes, security just means not being the low-hanging fruit.
Having no captcha is like leaving the door open and hoping no one breaks in, instead of at least closing the door (a closed door decreases the chance of a break-in by nearly 100%, even if it's not locked).
99% of fedi instances should require sign-ups with applications and email. It does not make sense to let in users indiscriminately unless you have a 24/7 staff in charge of moderation.
We're trying to capture the reddit refugees as well. It's a fine line to walk.
Agreed. An application that must be human-reviewed is a very large gate; many people will see it and just close the site. Myself included.
Email + captcha should be doable, right?
Yes, that's the bare minimum until we get a better toolset.
Email verification + captcha should be enough. The application part is cringe and a bad idea, unless you really want to be your own small high school clique and don't have any growth ambitions, which is perfectly fine but again should not be expected from general instances looking to welcome Redditors.
Thanks for the heads up, StarTrek.website has enabled CAPTCHA and purged the bots from our database.
Starfleet takes changeling infiltrations seriously :P
This might be related but I've noticed that someone is [likely automatically] following my posts and downvoting them. Kind of funny in a 'verse without karma.
Karma may mean nothing but the information space is a strategic domain.
I don't think it's the case here, as I've noticed this after posts in small communities:
- c/linguistics (~240 members)
- c/parana (1 member - new comm)
I think that the person/bot/whatever is following specific people.
Sounds like a spez sponsored attack on Lemmy.
Or just the unavoidable spam-bot accounts that will keep coming as long as it's easy, with instance operators still unprepared.
I highly doubt spez did this. Reddit is currently doing fine. Even if it all goes away he's sitting on over a decade of genuine human conversations he can sell to AI companies and make millions. He isn't worried.
Steve Huffman doesn't do anything, he's a greedy little pigboy who profits off of the creation of his dead "friend". He claims ownership of your ideas, for reddit's exclusive profit, at no benefit (if anything, at penalty) to yourself.
However, it would be naive to assume that he hasn't directed at least some shade towards reddit alternatives. Almost as naive as thinking that Google doesn't create bots to target websites that don't use its own captcha services.
PSA: When "proving you're human", always try to poison the data. They're using your input to train visual AI without paying you for your efforts. With Google, they will typically put the training up front: there will be one or two images that the bot isn't sure about. If you give the unexpected response, the next test will be one the machine already knows, to check that you're a human who knows what they're talking about. With hCaptcha and some others, they might put the obvious one first, then check that your guesses are human afterwards.
The services will determine that you're human by other means anyway (e.g. mouse movements) and eventually let you through, but by giving them the wrong answer when they don't know but the right answer when they do, you can make their AI less effective.
They should be paying you for your input into their commercial enterprise, so fuck them.
It was brought to my attention that my instance was hit with the spam-bot registrations. I've disabled registration and deleted the accounts from the DB. Is there anything else I can do to clear the user stats in the sidebar?
You can do this by updating site_aggregates.users in your database (WHERE site_id = 1).
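Concretely, that's one UPDATE (the same query another admin posts further down the thread; it assumes your own instance's row has site_id = 1):
UPDATE site_aggregates SET users = (SELECT count(*) FROM local_user) WHERE site_id = 1;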
I'm noobish, but could they be defederated until they get their act together before they spam everybody?
Yes, and I believe some instances are already doing this
I'm sure it's different per instance, but is there any discussion on what is being done with the collected emails?
I understand the need to fight bots and spam, but there are also those of us who don't want to associate emails with accounts so some privacy-related way of handling this would be appreciated.
There are plenty of services that provide single-use or disposable emails.
True, I use one myself.
That's a cool instance you're running over there, by the way! I appreciate it.
Any tips on how to get rid of all the spam accounts? I have been affected by this as well, and thankfully captcha stopped them, but about 100 bots signed up before I could stop them.
Normally I'd just look through all the accounts and pick out the four or so users that are real, but there is no apparent way to view every user account as an admin.
Edit: There is a relevant issue open on the lemmy-ui repo, for those interested: https://github.com/LemmyNet/lemmy-ui/issues/456
Did you figure out how to clean it up? You can see a list of users in your local_user table.
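As a rough sketch, assuming the usual Lemmy schema where local_user references person via person_id and usernames live in person.name (verify the column names against your version):
-- list every local account with its username and signup email
SELECT p.name, lu.email
FROM local_user lu
JOIN person p ON p.id = lu.person_id
ORDER BY lu.id;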
Fun fact, they're removing Captcha in the next release.
I won't be upgrading and I anticipate I'll be defederating with any instance that upgrades to v0.18.
Looks like my instance got hit with a bot. I had email verification enabled but had missed turning on captcha. The bot used fake emails, so none of the accounts are verified, but they still count towards the account numbers. Is there really any good way to clean this up? I need a way to purge unverified accounts or something.
How comfortable are you with SQL? You can see all unused verifications in the email_verification table. You should be able to just delete those users from local_user, and then update your user count with the new count of the local_user table in site_aggregates.users (where site_id = 1).
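A minimal sketch of those steps, wrapped in one transaction so you can check the affected row counts before anything becomes permanent (table and column names as described above; assumes site_id = 1):
BEGIN;
-- delete every local user that still has an outstanding email verification
DELETE FROM local_user
WHERE id IN (SELECT local_user_id FROM email_verification);
-- refresh the sidebar user count from whatever is left
UPDATE site_aggregates
SET users = (SELECT count(*) FROM local_user)
WHERE site_id = 1;
COMMIT; -- or ROLLBACK; if the numbers look wrong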
Thank you for proactively contacting me regarding this @[email protected]. I've had this issue on my https://feddi.no instance, but I have added a captcha and registration applications now. Hopefully it will alleviate some of the problem.
All of the bot accounts seem to have a number in their email, so I manually looked through the list of users in email_verification that contained numbers in the email to check for false positives:
select * from email_verification where email ~ '[0-9]+';
before running
delete from local_user where id in (select local_user_id from email_verification);
to delete the users.
As suggested by @[email protected], I updated site_aggregates to reflect the new user count on the instance:
UPDATE site_aggregates SET users = (SELECT count(*) FROM local_user) WHERE site_id = 1;
I know from talking to admins back when phpBB was really popular that fighting spammers and unsavory bots was the big workload in running a forum. I'd expect the same for Fediverse instances. I hope a system can be worked out to make it manageable.
As a user I don't have a big problem with mechanisms like applications for the sake of spam control. It's hugely more convenient when an account can be created instantaneously, but I understand the need.
I do wonder how the fediverse is going to deal with self-hosting bad actors. I would think some kind of vetting process for federation would need to exist. I suppose you could rely on each admin to deal with that locally, but that does not sound like an efficient or particularly effective solution.
Today, a bunch of new instances appeared at the top of the user-count list. It appears that these instances are all being bombarded by bot sign-ups.
Yup, I noticed this as well.
Hopefully the admins of those instances will notice this and remove the accounts quickly! Even so, I think the mods of all instances, and of all communities, had better brace themselves for incoming spam and hate speech.
Maybe this is what's implied, or I'm just being silly: what is to stop a bad actor from spinning up a Lemmy instance, creating a bunch of bot accounts with no restrictions, and spamming other instances? Would the only course of action be for the non-spam instances to individually defederate the spam ones? It seems like that would be a bit of a cat-and-mouse situation. I'm not too familiar with the inner workings and the tools Lemmy has that would be useful here.
They can do this, and it is cat and mouse. But...
- It generally costs money to stand up an instance. It often requires a credit card, which reduces anonymity. This will dissuade many folks.
- A malicious instance can be defederated, so it might not be all that useful.
- People can contact the security team at the host providing infra/internet to the spammer. Reputable hosts will kill the account of a spammer, which again is harder to duplicate if the host requires payment and identity info.
- Malicious hosts that fail to address repeated abuse reports can be ip-blocked.
- Eventually, Lemmy features can be built to protect against this kind of thing by delaying federation, requiring admin approval, or shadow-banning them during a trial period.
Email has shown us that there's a playbook that kind of works here, but it's not easy or pleasant.
One thing I like about Lemmy was having to put in an application and wait for approval. I knew I was vetted, and that others here were too.
Figure that alone could keep out most of the trolls and definitely the bots.