submitted 1 year ago by 1337tux@lemmy.world to c/fediverse@lemmy.ml

Lemmy has multiplied its number of users (or, maybe more accurately, accounts) in just a few days. What do you think the percentage of bot accounts is? Is Lemmy having a problem with bot farming?

[-] Very_Bad_Janet@kbin.social 10 points 1 year ago* (last edited 1 year ago)

Have all of the Lemmy instances (and kbin ones, too) now added email requirements, captcha, and maybe the little paragraph asking why you should have an account, like Beehaw does?

Also, how do you identify bot accounts? Can you bulk-ban accounts, or do they all have to be examined and dealt with individually?

ETA: I wasn't suggesting the paragraph. Just wondering what the instances are putting in to prevent bots. I actually tried to sign up for Beehaw, wrote my little paragraph, and then got the pinwheel of death, lol. I was never able to sign up, but lucked out with a kbin.social account. I have to add that it's pretty disappointing to be downvoted for simply asking a question. Feels like what I left at Reddit.

[-] funkyb@kbin.social 4 points 1 year ago

Good grief, I hope not. Email & captcha are reasonable; a short-form essay on why you should be graced with the ability to participate is super cringe.

[-] rm_dash_r_star@lemm.ee 4 points 1 year ago

Yeah, I was a bit weirded out by that; it's like, what, am I joining a cult? Anyway, I actually signed up on a number of instances in search of one I liked, and only a couple were using an application. The rest were just captcha plus email.

I think they should come up with a better mechanism than an application. I understand the need to verify that a new signup is actually a human being, but an application is pretty off-putting. The problem is there are bots that can get around captchas and email verification, and AI keeps getting smarter.

[-] Amir@lemmy.ml 4 points 1 year ago

"ChatGPT, write me a paragraph about why I want to join an internet forum in first person"

[-] rm_dash_r_star@lemm.ee 2 points 1 year ago

Yeah, ChatGPT could fill out an application as well. In fact, AI is getting to the point now where it would be hard to tell even by voice. Though it's also a matter of effort on the part of the exploiter: the admins don't have to drive abuse down to zero occurrences, just low enough to keep it at bay.

[-] Sal@mander.xyz 0 points 1 year ago* (last edited 1 year ago)

It may be an AI, or it could be a real human who is lying. The point of the application filter is to significantly slow down these approaches and bring their impact to a more manageable level. An automated AI bot will not be able to perform much better than a human troll with some free time, because any anomalous registration patterns, including registration spikes and periodicity, are likely to be detected by the much more powerful processor that resides in the admin's head.

On the other hand, a catch-all domain e-mail, a VPN with a variable IP, and a captcha-defeating bot can be used to generate thousands of accounts in a very short amount of time. Without the application filter the instance is vulnerable to these high-throughput attacks, and the damage can be difficult to fix.
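For what it's worth, the kind of anomaly check described above is easy to script. Below is a minimal sketch (Python, not Lemmy's actual tooling; the input format, thresholds, and the throwaway.example domain are all made up for illustration) that flags hours with an unusual number of signups and email domains with an unusual number of accounts:

```python
from collections import Counter
from datetime import datetime

def flag_registration_anomalies(registrations, spike_threshold=50, domain_threshold=20):
    """Flag hours with unusually many signups, and email domains that
    account for an unusual number of new accounts."""
    # Bucket signups by the hour they occurred in to spot spikes.
    hour_counts = Counter(ts.replace(minute=0, second=0, microsecond=0)
                          for ts, _ in registrations)
    spikes = {hour: n for hour, n in hour_counts.items() if n >= spike_threshold}

    # Count signups per email domain to spot catch-all-domain bursts.
    domain_counts = Counter(email.rsplit("@", 1)[-1].lower()
                            for _, email in registrations)
    suspicious_domains = {d: n for d, n in domain_counts.items()
                          if n >= domain_threshold}
    return spikes, suspicious_domains

# Made-up example: 200 signups within one hour, all from one throwaway domain.
regs = [(datetime(2023, 6, 22, 14, 3, i % 60), f"user{i}@throwaway.example")
        for i in range(200)]
spikes, domains = flag_registration_anomalies(regs)
print(spikes)   # {datetime(2023, 6, 22, 14, 0): 200}
print(domains)  # {'throwaway.example': 200}
```

A human admin eyeballing the registration log catches the same thing; the script just makes the "spike and periodicity" signal cheap to check continuously.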
