submitted 8 months ago by tux0r@feddit.de to c/security@lemmy.ml
  • I am denied read-only access to some websites because I use a VPN. This makes no sense at all, but it happens anyway.
  • I am not allowed to register in some forums because I use a VPN. Because everyone knows that anyone who uses a VPN is a serious criminal. There is no other option.
  • I am subsequently banned from forums when the moderators realise that my IP address is not unique to me because I use a VPN. My posts don't matter at all; IP addresses obviously identify every person on this planet unambiguously.
  • I'm supposed to confirm that I'm not a robot because I use a VPN. The fact that the company asking for these confirmations (usually Google) is itself sending robots marauding through the internet doesn't matter, because Google is Google and I'm just a bloke with a VPN.

Guys, a VPN is self-defence. A website banning VPNs is like a brothel banning condoms. I mean, of course the house rules apply, but I'd like to see a bit more judgement. What's happening right now is ridiculous and hardly does justice to the security aspect of these "tests". If you find yourself as a contributor to this list, I urge you to stop. I am not a bad guy. All I do is use a VPN.

Thank you.

top 7 comments
[-] doublejay1999@lemmy.world 10 points 8 months ago

Websites have no interest in banning VPNs and excluding visitors. The fact is that VPNs are a conduit for spam, bots and, more rarely, hacking, so hosts will protect themselves. Self-defence.

[-] tux0r@feddit.de 3 points 8 months ago

How does it defend a website to deny reading access to static content?

[-] Rossphorus@lemmy.world 9 points 8 months ago

Topical answer: Bots going around scraping content to feed into some LLM dataset without consent. If the website is anything like Reddit they'll be trying to monetise bot access to their content without affecting regular users.

[-] tux0r@feddit.de -2 points 8 months ago

It should be easy to distinguish a bot from a real user though, shouldn't it?

[-] damnthefilibuster@lemmy.world 9 points 8 months ago

Nope. It gets more difficult every single day. It used to be easy: just check the user-agent string. Real users have a long one that describes the browser they're using. Bots either won't send one, or will send one that mentions the underlying scraping library they're using.

But then bot makers wised up. Now they just copy the latest browser's user-agent string.
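The naive check described above can be sketched in a few lines. This is a hypothetical illustration (the marker lists and function name are my own, not from any real filter): it shows why matching on user-agent tokens worked against honest scraping libraries, and why copying a real browser string defeats it.

```python
# Hypothetical sketch of naive user-agent filtering, as described above.
BROWSER_MARKERS = ("Mozilla/", "Chrome/", "Safari/", "Firefox/")
BOT_MARKERS = ("python-requests", "curl", "scrapy", "spider")

def looks_like_browser(user_agent: str) -> bool:
    """Naive check: real browsers send long UA strings with known tokens."""
    ua = user_agent or ""
    # An honest scraping library names itself, so it is easy to reject.
    if any(marker in ua.lower() for marker in BOT_MARKERS):
        return False
    return any(marker in ua for marker in BROWSER_MARKERS)

# An honest bot identifies itself and is caught...
print(looks_like_browser("python-requests/2.31.0"))   # False
# ...but a bot that copies a current Chrome string sails through:
chrome_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/121.0.0.0 Safari/537.36")
print(looks_like_browser(chrome_ua))                  # True
```

The second call is the whole problem: once the string is copied, this filter has nothing left to work with.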

It used to be that you could build heat maps from mouse cursor movement and figure out whether it's a real user. Then some smart aleck wrote a basic script to replay his own cursor movements and broke that.

Oh, and then someone trained a machine-learning model to reproduce that behaviour too and broke it even more.

[-] tux0r@feddit.de 2 points 8 months ago

Good point, thank you. Uh... beep!

[-] Rossphorus@lemmy.world 3 points 8 months ago

Unfortunately not. The major difference between an honest bot and a regular user is a single text string (the user agent). There's no reason bots have to be honest, though, and anyone can modify their user agent. You can go further and use something like Selenium to make your bot appear even more like a regular user, including random human-like mouse movements. There is also a plethora of tools for fooling captchas now. It's getting harder by the day to differentiate.
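How trivial is "anyone can modify their user agent"? A minimal sketch using only the Python standard library (the URL and the copied Chrome string are placeholders of my own): any server-side check keyed on this header now sees "a browser".

```python
import urllib.request

# A copied, current-looking Chrome user-agent string (illustrative).
SPOOFED_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/121.0.0.0 Safari/537.36")

def build_spoofed_request(url: str) -> urllib.request.Request:
    """Build a request whose User-Agent claims to be a desktop Chrome."""
    return urllib.request.Request(url, headers={"User-Agent": SPOOFED_UA})

req = build_spoofed_request("https://example.com/")
# urllib normalises header names to "User-agent" capitalisation:
print(req.get_header("User-agent"))
```

One header, one line of code. Tools like Selenium go further by driving a real browser, so there is no string to spoof at all.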

this post was submitted on 23 Feb 2024
43 points (95.7% liked)

Security

5010 readers
1 user here now

Confidentiality Integrity Availability

founded 4 years ago
MODERATORS