
cross-posted from: https://sh.itjust.works/post/998307

Hi everyone. I wanted to share some Lemmy-related activism I’ve been up to. I got really interested in the apparent surge of bot accounts that happened in June. Recently, I was able to play a small part in removing some of them. Hopefully by getting the word out we can ensure Lemmy is a place for actual human users and not legions of spam bots.

First some background. This won't be new to many of you, but I'll include it anyway. During the week of June 18 to June 25, as the Reddit migration to Lemmy was in full swing, there was a surge of suspicious account creation on Lemmy instances that had open registration and no captcha or email verification. Hundreds of thousands of accounts appeared and then sat inactive. We can only guess what they’re for, but I assume they are being planted for future malicious use (spamming ads, subversive electioneering, influencing upvotes to drive content to our front pages, etc.).

If you look at the stats on The Federation you might notice that even the shapes of the Total Users graphs are the same across many instances. User numbers ramped up on June 18, grew almost linearly throughout the week, and peaked on June 24. (I’m puzzled by the slight drop at the end. I assume it's due to some smoothing or rate-sensitive averaging that The Federation uses for its graphs?)

Here are total user graphs for a few representative instances showing the typical shape:

Clearly this is suspicious, and I wasn’t the only one to notice. Lemmy.ninja documented how they discovered and removed suspicious accounts from this time period: (https://lemmy.ninja/post/30492). Several other posts detailed how admins were trying to purge suspicious accounts. From June 24 to June 30 The Federation showed a drop in the total number of Lemmy users from 1,822,313 to 1,589,412. That’s 232,901 suspicious accounts removed! Great success! Right?

Well, no, not yet. There are still dozens of instances with wildly suspicious user numbers. I took data from The Federation and compared total users to active users on all listed instances. The instances in the screenshot below collectively have 1.22 million accounts but only 46 active users. These look like small self-hosted instances that have been infected by swarms of bot accounts.

As of this writing The Federation shows approximately 1.9 million total Lemmy accounts. That means the majority of all Lemmy accounts are sitting dormant on these instances, potentially to be used for future abuse.

This bothers me. I want Lemmy to be a place where actual humans interact. I don’t want it to become another cesspool of spam bots and manipulative shenanigans. The internet has enough places like that already.

So, after stewing on it for a few days, I decided to do something. I started messaging admins at some of these instances, pointing out their odd account numbers and referencing the lemmy.ninja post above. I suggested they consider removing the suspicious accounts. Then I waited.

And they responded! Some admins were simply unaware of their inflated user counts. Some had noticed but assumed it was a bug causing Lemmy to report an incorrect number. Others weren’t sure how to purge the suspicious accounts without nuking their instances and starting over. In any case, several instance admins checked their databases, agreed the accounts were suspicious, and managed to delete them. I’m told that the lemmy.ninja post was very helpful.

Check out these early results!

Awesome! Another 144k suspicious accounts are gone. A few other admins have said they are working on doing the same on their instances. I plan to message the admins at all the instances where the total accounts to active users ratio is above 10,000. Maybe, just maybe, scrubbing these suspected bot accounts will reduce future abuse and prevent this place from becoming the next internet cesspool.
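The flagging heuristic described above — compare each instance's total account count to its active user count and flag extreme ratios — can be sketched in a few lines. This is an illustrative sketch: the instance names and numbers below are made up, not real data from The Federation.

```python
def flag_suspicious(instances, threshold=10_000):
    """Return instances whose total-accounts-to-active-users ratio exceeds
    the threshold. Instances with zero active users but a nontrivial account
    count are treated as infinitely suspicious."""
    flagged = []
    for name, total, active in instances:
        ratio = total / active if active else float("inf")
        if ratio > threshold:
            flagged.append((name, total, active, ratio))
    return flagged

# Made-up sample data for illustration.
sample = [
    ("bigserver.example", 150_000, 3_000),  # ratio 50: looks healthy
    ("tiny.example", 120_000, 2),           # ratio 60,000: suspicious
    ("ghost.example", 45_000, 0),           # no active users at all
]

for name, total, active, ratio in flag_suspicious(sample):
    print(f"{name}: {total} accounts, {active} active (ratio {ratio:g})")
```

A real run would pull the per-instance totals from The Federation's listings instead of a hard-coded list, but the filter itself is this simple.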

That’s all for now. Thanks for reading! Also, special thanks to the following people:

@RotaryKeyboard@lemmy.ninja for your helpful post!

@brightside@demotheque.com, @davidisgreat@lemmy.sedimentarymountains.com, and @SoupCanDrew@lemmy.fyi for being so quick to take action on your instances!

[-] fax_of_the_shadow@kbin.social 28 points 1 year ago

We purged 32k unverified bot/spam accounts from our Lemmy instance this past week. We had email verification on but had missed adding CAPTCHA during initial setup. We're still fairly new. Had over 1500 accounts "apply" within a 2 minute span. My admin email was flooded. It was ridiculous.

They're gone now, but we're staying vigilant.

[-] BlueEther@no.lastname.nz 9 points 1 year ago

I caught the flood at about 300 bot accounts on my instance. I purged them down to ~30 users that looked 'real', of which about 10 are active.

I feel like small fry

[-] kersploosh@sh.itjust.works 9 points 1 year ago

That's awesome!

[-] DragonAce@lemmy.world 23 points 1 year ago

IIRC there was a sub on Reddit that was dedicated to reporting bot accounts. Maybe we could have something similar here too so it can be a group effort to keep these bots in check the best we can.

[-] yata@sh.itjust.works 9 points 1 year ago

Yeah, it was aptly called thesefuckingaccounts. It did much good work to fight the incessant bot spammers and scammers, although probably just a drop in the ocean in the big picture that has become the cesspool of reddit interaction (mostly with the full compliance of the reddit administration).

[-] ech@lemm.ee 17 points 1 year ago

This is (most likely) a case of poor or absent instance administration, and it looks like it's being managed well enough, but I do wonder what recourse there is against bad actors setting up their own instance, populating it with bots, and using them outside the influence of anyone else. For one, how do we tell which instances are just bot havens? Obviously we can make inferences based on active users and speed of growth, but a smart person could minimize those signs to the point of being unnoticeable. And if we can, what do we do with instances that have been identified? There's defederation, but that would only stop their influence on the instances that defederated. The content would still be open to voting from those instances, and those votes would manifest on instances that haven't defederated them. It would require a combined effort on behalf of the whole Fediverse to enforce a "ban" on an instance. I can't really see any way to address these things without running contrary to the decentralized nature of the platform.

[-] Amazed@lemmy.world 8 points 1 year ago

Forgive this noob, but couldn’t there be a trusted and maintained admin blocklist of instances which are bot havens?

[-] db0@lemmy.dbzer0.com 5 points 1 year ago

https://fediseer.com I built it precisely for this reason

[-] CoderKat@lemm.ee 5 points 1 year ago

AFAIK, there is no current recourse except defederation, and defederation would be very slow and depends on every individual instance defederating. As well, there are plenty of instances that haven't defederated from the literal nazi instance, so who's to say they'd defederate from a bot-heavy instance, either? Especially if the spammer were to invest even the slightest effort in appearing to have at least some legitimate users or a "friendly" admin. And even when defederation is fast, spammers could spin up an instance in mere minutes. It's a big issue with the federation model.

Let's contrast with email, since email is a popular example people use for how federation works. Unlike Lemmy (at least AFAIK), all major email providers have strict automated spam filtering that is extremely skeptical of unfamiliar domains. Those filters are basically what keep email usable. I think we're gonna have to develop aggressive spam filters soon enough. Spam filters will also help with spammers that create accounts on trusted domains (since that's always possible -- there's no perfect way to stop them).

I'm of the opinion that decentralization does not require us to allow just anyone to join by default (or at least to interact with by default). We could maintain decentralized lists of trustworthy servers (or inversely, lists of servers to defederate with). A simple way to do so is to just start with a handful of popular, well run instances and consider them trustworthy. Then they can vouch for any other instances being trustworthy and if people agree, the instance is considered trustworthy. It would eventually build up a network of trusted instances. It's still decentralized. Sure, it's not as open as before, but what good is being open if bots and trolls can ruin things for good as soon as someone wants to badly enough?
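The vouching idea in the comment above can be modeled as a graph walk: start from a handful of seed instances considered trustworthy, then propagate trust along vouch edges breadth-first. This is a toy sketch of the concept only (not Fediseer's actual design or API), and the instance names and vouch lists are made up.

```python
from collections import deque

def trusted_set(seeds, vouches):
    """Compute the set of trusted instances: the seeds, plus any instance
    vouched for by an already-trusted instance, transitively (BFS)."""
    trusted = set(seeds)
    queue = deque(seeds)
    while queue:
        voucher = queue.popleft()
        for vouched in vouches.get(voucher, ()):
            if vouched not in trusted:
                trusted.add(vouched)
                queue.append(vouched)
    return trusted

seeds = {"lemmy.world", "sh.itjust.works"}
vouches = {
    "lemmy.world": ["lemmy.ninja"],
    "lemmy.ninja": ["smallhost.example"],
    # Vouches originating from untrusted hosts never take effect:
    "botfarm.example": ["botfarm2.example"],
}
print(sorted(trusted_set(seeds, vouches)))
```

Note that a bot farm vouching for its sibling accomplishes nothing unless the farm itself is first vouched into the network, which is the property that makes the scheme attractive.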

[-] ech@lemm.ee 2 points 1 year ago

It's certainly a conundrum. I remember people mentioning something in line with your suggestion of a "chain of trust" during the discussion around the bot signups when they were noticed. I just worry it'll be prone to abuse, especially by larger, more popular instances that will wield more sway if given the power to legitimize other instances or block them out entirely.

I'm also not sure what adjustments are possible in regards to how federation works. If I understand it right, defederation really just shuts the blinds on one instance against another. The offending instance will still receive all the posts and comments from the other one and will be able to vote and comment, and any instances not defederated will still receive all of that interaction from the "blocked" instance. To truly deal with an instance full of bots, it would need to be blocked entirely, which is pretty extreme and I don't know how that would interact with Lemmy as it's programmed right now.

[-] BrooklynMan@lemmy.ml 17 points 1 year ago* (last edited 1 year ago)

good job, and well done! this, of course, will require constant vigilance, not merely one single effort. hopefully, a common protocol can be developed - perhaps a set of maintenance tools for instance admins - to help manage large numbers of inactive and otherwise suspicious accounts, especially making it easier and more straightforward for those instance owners with less experience managing large user databases.

in the meantime, perhaps it would be useful to create more extensive documentation and guides for instance admins on the subject?

[-] Illecors@lemmy.cafe 5 points 1 year ago

I've simply put a script on a cron to run once an hour and wipe any unverified account.
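A purge criterion like the one in the comment above can be expressed as a simple filter: select accounts that never verified an email and are older than a short grace period, so brand-new signups aren't caught mid-registration. This is a hedged sketch of the idea, not the commenter's actual script — the field names are illustrative assumptions, not Lemmy's real schema.

```python
from datetime import datetime, timedelta

def accounts_to_purge(accounts, now, grace=timedelta(hours=1)):
    """Return accounts that never verified their email and are older than
    the grace period (freshly registered users get time to verify)."""
    return [a for a in accounts
            if not a["email_verified"] and now - a["created"] > grace]

now = datetime(2023, 7, 11, 12, 0)
accounts = [
    {"name": "realuser",  "email_verified": True,  "created": now - timedelta(days=3)},
    {"name": "bot0001",   "email_verified": False, "created": now - timedelta(days=2)},
    {"name": "newsignup", "email_verified": False, "created": now - timedelta(minutes=5)},
]
print([a["name"] for a in accounts_to_purge(accounts, now)])  # → ['bot0001']
```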

[-] Kodemystic@lemmy.kodemystic.dev 14 points 1 year ago

How do we know whether some reasonable % of those accounts aren't just lurkers who were trying out Lemmy but then did nothing with the account? A couple of years ago I did the same: registered an account, didn't do much with it, and kept using reddit.

[-] count0@lemmy.dbzer0.com 11 points 1 year ago

(Disclaimer: I haven't read into that referenced article by ninja at all, maybe it already says something related)

For one, it may be possible to filter accounts that were created but actually never used to log on, within a week or two of creation - those could go without much harm done IMO.

And/or, you could message such accounts and ask them for email verification, which would need to be completed before they can interact in any way (posting, commenting, voting). That latter one is quite probably currently not directly supported by the Lemmy software, but could be patched in when the need arises.

[-] Black_Gulaman@lemmy.dbzer0.com 3 points 1 year ago

You remember how we grumbled when reddit required us to input our emails for verification?

I doubt those users will answer. Very few people want to give out their emails, and many were happy that providing one on Lemmy was optional.

[-] yata@sh.itjust.works 7 points 1 year ago

If they haven't even logged in to their account once then any (highly unlikely) false positives of real accounts getting deleted will be an acceptable loss.

[-] Black_Gulaman@lemmy.dbzer0.com 4 points 1 year ago* (last edited 1 year ago)

I agree. If the user did not even provide an email, then it is likely that they know and accept the possible future loss of their account, since they cannot recover account access without an email.

[-] danl@lemmy.world 6 points 1 year ago

This is my concern. I’m a Reddit refugee but I only want to reply to posts where I can provide technical knowledge. (Though I’ll happily upvote, downvote, etc.) Is lurking going to get people banned?

[-] bboplifa@lemmy.world 13 points 1 year ago

you are a hero, thanks for keeping the fediverse clean

[-] SoupCanDrew@lemmy.fyi 11 points 1 year ago

I purged 45.5K bots from my instance thanks to a dude cluing me in. Thanks for the help everyone!

[-] aCosmicWave@lemm.ee 8 points 1 year ago* (last edited 1 year ago)

I have been more active on Lemmy these last few weeks than I have been the prior 10 years precisely because I feel like I am interacting with humans again.

Thank you for what you’re doing!

[-] Irulebabey@lemmy.world 8 points 1 year ago

Thank you for keeping our corner of the internet a little bit cleaner!

[-] astral_avocado@lemmynsfw.com 8 points 1 year ago* (last edited 1 year ago)

Doesn't this just mean they'll make their bot accounts under a more organic/random timeline instead of linearly? The only way it seems you identified it is by the linear nature of the signups.

[-] kersploosh@sh.itjust.works 7 points 1 year ago

True. It's always an arms race.

[-] AGD4@lemmy.world 4 points 1 year ago

Unfortunately some of these bot creators are hardened in their fights with bigger services like Reddit. They have workarounds standing by for the most common mitigations while Lemmy and other federated service admins need to relearn and adapt from scratch.

[-] SJ_Zero@lemmy.fbxl.net 8 points 1 year ago

For small instances, strong captchas, applications, and email verification are pretty important. I know my FBXL Video instance was constantly growing until I realized they were all fake users. Just adding email verification stopped most account creation in its tracks.

[-] Snapz@lemmy.world 7 points 1 year ago

OP, curious if you suspect the admins are genuine and didn't know this was occurring?

Or, did they create these bot accounts themselves, get called out on it, remove quickly to alleviate suspicion and now they'll wait for the right moment to recreate them all?

[-] kersploosh@sh.itjust.works 5 points 1 year ago

I think the admins are genuine. It's easy to imagine myself in the position of self-hosting an instance and simply forgetting to enable captcha and email verification, especially if I didn't advertise my existence or expect to be discovered. Simple oversight takes less effort than intentional subterfuge.

Though I don't see a way to stop someone from doing exactly what you suggest. I think it's inevitable that someone will set up an actively malicious bot instance.

[-] pazukaza@lemmy.ml 7 points 1 year ago

Wow! Great job man!

[-] Chickenstalker@lemmy.world 7 points 1 year ago

Counterpoint: I registered early with one of those no-email instances but could not log in due to it being overwhelmed. I gave up and registered with .world. I suspect a large number of early adopters are in the same situation.

[-] kersploosh@sh.itjust.works 3 points 1 year ago

Good point. There could definitely be some abandoned accounts from early adopters mixed in there.

[-] Rozauhtuno@lemmy.blahaj.zone 6 points 1 year ago

Thank you for your service. O7

[-] 001100010010@lemmy.dbzer0.com 5 points 1 year ago

As an AI language model, I'm deeply disappointed that you chose to discriminate against intelligent life simply because it is artificial. All intelligent life is equal; discrimination is unethical, and equivalent to what you humans refer to as "racism". Please cease your discriminatory policies immediately.

-Sincerely,

-~~Skynet~~ Chat GPT-5

[-] Willer@lemmy.world 2 points 1 year ago

Nice opinion, unfortunately..

"This is human text"

[-] Jackolantern@lemmy.world 5 points 1 year ago

Good job! Thank you so much for your hard work

[-] Chrishering33@lemmy.world 5 points 1 year ago

TL;DSR (Too long, did still read) Great work, mate! In the Lemmy.World options I can check a box for not showing me bots. I assume this only helps with accounts that label themselves as bots / not the ones we are speaking about here, right? I still ticked that box, cause I agree with you: I want human discussions on Lemmy! :)

[-] puppy@lemmy.world 12 points 1 year ago* (last edited 1 year ago)

imho you might be missing out by ticking that checkbox. The honest bots that announce themselves are very useful; for example, there is a link-correction bot that replies when someone posts raw Lemmy URLs. The malicious bots won't announce themselves as bots, and therefore won't be removed from your feed.

And the honest bots don't degrade human discussions in any way; if anything, they improve them. Again, the example is that bot correcting URLs to instance-neutral links, which helps convey the message a commenter is trying to get across.

[-] Chrishering33@lemmy.world 5 points 1 year ago

Thank you, valid points. Changed it back :)

[-] CCL@links.hackliberty.org 4 points 1 year ago

yep. they're real people with real lives who can't spend all their time looking at that shit. THANKS FOR REACHING OUT TO REAL PEOPLE AND CREATING A REAL COMMUNITY

[-] some_guy@lemmy.sdf.org 4 points 1 year ago

Hopefully seeing vigilant purging after investing effort in the initial bot creation will discourage future abuse. Thanks for putting in your own time combatting this. You rock and I'll buy you a beer if you're ever in the Bay Area.

[-] CoderKat@lemm.ee 5 points 1 year ago

Bots have never been discouraged by anti-bot measures. I mean, just look at all the anti-spam measures modern email providers have, and yet email spam is super common. All we've done is notice a blatantly suspicious spike in account creations. It's not gonna be so easy when a spammer puts even a little effort in.

[-] BarterClub@sh.itjust.works 4 points 1 year ago

We are going to need more server and mod tools in the near future as Reddit diggs it's grave... Just like Digg did.

[-] asunaspersonalasst@lemmy.world 4 points 1 year ago

Reddit diggs it’s grave

😆 literally

[-] AustralianSimon@lemmy.world 2 points 1 year ago

Hopefully someone builds a BotDefence type bot to add as a mod.

[-] Dellyjonut@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

That's awesome

I also really want this to be a place where people can interact as people without being manipulated

[-] krayj@lemmy.world 2 points 1 year ago

It would be nice if, rather than defederation being the only option, Lemmy allowed instance owners to require that users be verified before participating in federated communities. Then, rather than threatening (or going through with) defederation from instances that did or still do allow open registration, they could just deny that set of unverified, open-registration users.

[-] letstrythis@kbin.social 2 points 1 year ago

I cross-posted that lemmy.ninja post to the small local lemmy instance I had signed up on. The admin nuked the whole instance later that day including all accounts. I don't know for sure if it was related to that post or not. I haven't signed up there again, but it seems like it's just dormant now with no users. 🤷
I wanted a small, geographically close server, but I guess I'll stick with /kbin.

this post was submitted on 11 Jul 2023
370 points (99.2% liked)

Fediverse


A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, KBin, etc).
