The objective of reCAPTCHA (or any captcha) isn't to detect bots. It is more about stopping automated requests and rate limiting. The captcha is 'defeated' if the time it takes to solve, whether by a human or a bot, is less than expected. Humans are slow anyway, so they can't beat them.
There are much better ways of rate limiting that don't steal labor from people.
hCaptcha, Microsoft's CAPTCHA, they all do the same. Can you give an example of one that can't easily be overcome just with better compute hardware?
The problem is the unethical use of software that does not do what it claims and instead uses end users for free labor. The solution is not to use it. For rate limiting, a proxy/load balancer like HAProxy will accomplish the task easily. Ex:
https://www.haproxy.com/blog/haproxy-forwards-over-2-million-http-requests-per-second-on-a-single-aws-arm-instance
https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting
And what will you do if someone behind CGNAT is DoSing/scraping your site while you still want everyone else on that address to get through? IP-based limiting isn't very useful, either way.
HAProxy also has stick tables, pretty beefy ACLs, Lua support, and support for calling external programs. With the first two you can do pretty decent IP-, behavior-, and header-based throttling, blocking, or tarpitting. Add in Lua and external program support and you can do some pretty advanced, high-performance bot detection in your language of choice. All in the FOSS version, which also includes active backend health checks.
It's really a pretty awesome LB/Proxy.
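Not actual HAProxy configuration, but if it helps, here is a rough Python sketch of what a stick-table-style, per-IP request-rate limit boils down to conceptually. The window size and threshold are arbitrary numbers picked for illustration:

```python
# Minimal sketch (not HAProxy itself): per-client request-rate tracking,
# roughly what a stick table with an http_req_rate counter gives you.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10      # measure the request rate over the last 10 seconds
MAX_REQUESTS = 20        # allow at most 20 requests per window per client IP

_requests = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow_request(client_ip: str, now: float | None = None) -> bool:
    """Return True if the request should pass, False if it should be throttled."""
    now = time.monotonic() if now is None else now
    window = _requests[client_ip]
    # Drop timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # over the limit: deny, tarpit, or serve a 429
    window.append(now)
    return True
```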
which is bots. bots make automated requests, and anything that makes automated requests can also be called a bot (web crawlers are called bots too, and -if kind- also respect robots.txt, which has "bots" in its name for this very reason; "bot" is just short for "robot"). using different words does not change the reality behind them, but it may suggest someone is trying to pull something on the other person.
There isn't a good way to distinguish human users from scripts without adding too much friction to normal use. Also, bots are sometimes welcome and useful; it's only a problem when someone tries to mine data in large volumes or effectively DoS the server.
Forget bots, there are centers in India and other countries where you can employ humans to do 'automated things' (YouTube likes and watch hours, for example) at roughly the same cost as bots. There are similar CAPTCHA-solving services too. Good luck with those :)
Rate limiting is the only effective option.
i doubt that. you could maybe ratelimit per IP, but then abusers will just change their IP whenever needed. if you ratelimit the whole service across all users in the world, your service dies into uselessness as quickly as your ratelimiter is effective. if you ratelimit the actions of logged-in users, then your ratelimiting is limited by your ability to identify fake or duplicate accounts, where captchas are not helpful at all.
i was responding to that wording (that captchas were "not" about bots but about "stopping automated requests"), and pointing out that automated requests "are" bots.
call centers are neither bots nor automated requests (the opposite IS their advantage) and thus have no bearing on what i was specifically saying in reply to that post, which suggested automated requests and bots were different things in this context.
i wasn't talking about the effectiveness of captchas either, or about whether bots should be banned or not, only about bots being automated requests (and vice versa) from the perspective of a platform trying to stop bots. and that using different words for things (claims like "X isn't X, it is really U!"* or "automated requests aren't bots") does not change the reality of the thing itself.
*) unrelated to any (a-)social media platform
yeah my bad. I meant too many automated requests. Both humans and bots generate spam, and the issue is a high influx of it. Legitimate users also use bots, and that's by no means harmful. That's why you don't encounter a captcha every time you visit a Google page, nor does a couple of scraping scripts run into problems. reCAPTCHA (or hCaptcha, say) triggers when a high volume of requests comes from the same IP. Instead of blocking everyone out to protect their servers, they can allow slower requests so legitimate users face minimal hindrance.
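As a rough Python sketch of that "slow down rather than block" idea, something like the following graduated response; the tiers and thresholds are invented for illustration, not anything reCAPTCHA actually publishes:

```python
# Sketch of a graduated response to an observed per-IP request rate.
# Thresholds are made-up numbers for illustration only.
from enum import Enum

class Action(Enum):
    ALLOW = "allow"          # normal traffic, serve immediately
    DELAY = "delay"          # suspiciously fast, add an artificial delay
    CHALLENGE = "challenge"  # very high volume, show a CAPTCHA
    BLOCK = "block"          # clearly abusive, refuse the request

def decide(requests_per_minute: int) -> Action:
    """Map an observed per-IP request rate to a response tier."""
    if requests_per_minute < 60:
        return Action.ALLOW
    if requests_per_minute < 300:
        return Action.DELAY
    if requests_per_minute < 1000:
        return Action.CHALLENGE
    return Action.BLOCK
```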
Most Google services nowadays require accounts with stronger verification (like a cell phone number), so automated spam isn't a big deal.
since bots are better at solving captchas, and human-powered solving services exist, the only ones negatively affected by captchas are regular legitimate users. the bad guys use bots or services and are done. regular users have to endure them while no security is added. as for the influx, i guess it's much like having a better lock on the front door: if your lock is a bit better than your neighbour's, theirs is more likely to be forced open than yours. it might help you, but that's not real security, only a relative and very subjective feeling of 'security'.
being slower than the wolves also isn't that bad as long as you are not the slowest in your group (some people say)... so doing a bit more than others is always a good choice (just better not to set that bar too low, like using crowdsnakeoil for anything)
Put in other words, common users can't easily become the 'bad guy', i.e. the cost of an attack is higher, hence fewer script kiddies and automated attacks. You want to reduce that number. These protections are nothing to botnet owners or other high-profile bad actors.
ps: recaptcha (or captcha in general) isn't a security feature. At most it can be a safety feature.
o,,O
I thought captchas worked in a way where they provide some known good examples, some known bad examples, and a few examples which aren't certain yet. Then the model is trained depending on whether the user selects the uncertain examples.
Also, it's very evident what's being trained. First it was obscured words for OCR, then Google Maps imagery for object detection, and now you see them with clearly machine-generated images.
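For what it's worth, here's a small Python sketch of how the scheme described above could work: grade the user against the tiles with known labels, and treat their answers on the uncertain tiles as votes toward new labels. This is a guess at the mechanism, not Google's actual pipeline.

```python
# Sketch: verify the user on known tiles, collect votes on uncertain ones.
from collections import Counter

def grade_challenge(selected: set[str],
                    known_good: set[str],
                    known_bad: set[str],
                    uncertain: set[str],
                    votes: Counter) -> bool:
    """Return True if the user passes; record their votes on uncertain tiles."""
    # The user must pick every known-good tile and no known-bad tile.
    passed = known_good <= selected and not (known_bad & selected)
    if passed:
        # Only trust answers from users who got the known tiles right.
        for tile in uncertain:
            votes[(tile, tile in selected)] += 1
    return passed

# Once enough trusted users agree, an uncertain tile can become a new known label.
```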