submitted 5 months ago by neo@hexbear.net to c/technology@hexbear.net

Consider https://arstechnica.com/robots.txt or https://www.nytimes.com/robots.txt and how they block all the stupid AI crawlers from scraping their content for free.
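If you want to poke at those files yourself, here's a minimal sketch using Python's standard urllib.robotparser to ask whether a given crawler user-agent is allowed to fetch a page. The "GPTBot" user-agent and the article URL are just illustrative choices, not anything those sites' files are guaranteed to mention.

```python
from urllib import robotparser

# Load the site's robots.txt (one of the files linked above).
rp = robotparser.RobotFileParser()
rp.set_url("https://www.nytimes.com/robots.txt")
rp.read()

# Ask whether a particular crawler user-agent may fetch a page.
# "GPTBot" is used here purely as an example AI-crawler user-agent.
print(rp.can_fetch("GPTBot", "https://www.nytimes.com/section/technology"))
print(rp.can_fetch("*", "https://www.nytimes.com/section/technology"))
```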

CarbonScored@hexbear.net 3 points 5 months ago (last edited 5 months ago)

It's not about relying on it; it's about changing the behaviour of the web crawlers that respect it. As someone who has adminned a couple of scarily popular sites over the years, that's a surprisingly high percentage of them.

If someone wants to get around it, they obviously can, but this is true of basically all protective measures ever. Doesn't make them pointless.
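For what "respecting it" looks like in practice, here's a rough sketch of the check a well-behaved crawler runs before each request, using only the Python standard library. The "ExampleBot/1.0" user-agent string and the function name are placeholders, not any real crawler's behaviour.

```python
from urllib import robotparser, request

USER_AGENT = "ExampleBot/1.0"  # placeholder crawler name

def polite_fetch(url: str, robots_url: str) -> bytes | None:
    """Fetch url only if the site's robots.txt allows this user-agent."""
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()
    if not rp.can_fetch(USER_AGENT, url):
        return None  # a crawler that respects robots.txt simply skips the page
    req = request.Request(url, headers={"User-Agent": USER_AGENT})
    with request.urlopen(req) as resp:
        return resp.read()
```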
