[-] krigo666@lemmy.world 113 points 2 months ago

Laws should be passed in all countries requiring AI crawlers to request permission before crawling a target site. I have no pity for AI "thieves" that get their models poisoned. F...ing plague, as if the adware and spyware weren't enough...

[-] catloaf@lemm.ee 19 points 2 months ago

An HTTP request is a request. Servers are free to rate limit or deny access.

[-] FaceDeer@fedia.io 18 points 2 months ago

And Wikimedia, in particular, is all about publishing data under open licenses. They want the data to be downloaded and used by others. That's what it's for.

[-] LostXOR@fedia.io 4 points 2 months ago

Even so, I think it would be totally reasonable for them to block web scrapers, since Wikimedia already provides better ways to download all of its data.
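For example, here is a minimal sketch of that route, assuming the public dumps.wikimedia.org mirror; the exact dump filename below is illustrative, so check the dump index for the current listing:

```python
# Sketch: pull a published database dump from dumps.wikimedia.org
# instead of scraping articles one by one.
import shutil
import urllib.request

# Illustrative dump URL; see the index at https://dumps.wikimedia.org/
DUMP_URL = (
    "https://dumps.wikimedia.org/enwiki/latest/"
    "enwiki-latest-pages-articles.xml.bz2"
)

def download_dump(url: str = DUMP_URL, dest: str = "enwiki-dump.xml.bz2") -> None:
    """Stream the dump to disk in one request instead of millions of page fetches."""
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        shutil.copyfileobj(response, out)

if __name__ == "__main__":
    download_dump()
```

One bulk download like this is far cheaper for Wikimedia to serve than the same content fetched page by page.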

[-] FaceDeer@fedia.io 7 points 2 months ago

At the root of this comment chain is a proposal to have laws passed about this.

People can set up their web servers however they like; they're their servers, and it's on them to configure them. I don't think there should be legislation about whether you're allowed to issue perfectly ordinary HTTP requests to a public server; let the server decide how to respond to them.

[-] taladar@sh.itjust.works 12 points 2 months ago

Rate limiting itself requires resources that are not always available. For one thing, you can only rate limit clients you can identify, so you need to keep data about past requests in memory and attach counters to them; and even then, that won't help if the requests come from IPs that are easily changed.
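A minimal sketch of what that per-client state looks like, keyed by IP (the window and limit values are illustrative):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # illustrative window
MAX_REQUESTS = 100    # illustrative per-IP budget

# One deque of request timestamps per observed IP: this is the per-client
# state that has to live in memory and grows with the number of clients seen.
_hits: dict[str, deque] = defaultdict(deque)

def allow_request(client_ip: str) -> bool:
    """Return True if this IP is still under its request budget."""
    now = time.monotonic()
    window = _hits[client_ip]
    # Evict timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # over budget for this IP
    window.append(now)
    return True
```

A crawler that spreads its requests over a large pool of addresses never fills any single bucket, which is exactly the weakness of identifying clients by IP.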

[-] chrash0@lemmy.world 18 points 2 months ago

I doubt the recent uptick in traffic is from "stealing data" for training; it's more likely from agents scraping pages for context, e.g. Edge Copilot, Google's AI search, SearchGPT, etc.

Poisoning the data likely won't help in this situation, since there's a human on the other side who will just run the same search again when the results are unsatisfactory. Much like how retries and timeouts can cause huge outages for web-scale companies, poisoning search results will likely cause this type of traffic to increase, raising the chances of DoS and driving up bandwidth usage.
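A toy sketch of that amplification effect; the retry budget and rejection rate here are made up purely for illustration:

```python
# If every "unsatisfactory" answer triggers another attempt, serving
# poisoned results multiplies the load instead of reducing it.
import random

MAX_ATTEMPTS = 3               # hypothetical agent/user retry budget
POISONED_REJECTION_RATE = 0.9  # assumed chance a poisoned result gets rejected

def fetches_per_query(poisoned: bool) -> int:
    """Count how many fetches one user query ends up costing the server."""
    attempts = 0
    for _ in range(MAX_ATTEMPTS):
        attempts += 1
        rejected = poisoned and random.random() < POISONED_REJECTION_RATE
        if not rejected:
            break  # result accepted, no retry
    return attempts

clean = sum(fetches_per_query(poisoned=False) for _ in range(10_000)) / 10_000
poison = sum(fetches_per_query(poisoned=True) for _ in range(10_000)) / 10_000
print(f"avg fetches per query: clean={clean:.2f}, poisoned={poison:.2f}")
```

Under these made-up numbers, each poisoned query costs the server roughly two to three fetches instead of one, which is the opposite of what poisoning is supposed to achieve.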

[-] TheBlackLounge@lemm.ee 7 points 2 months ago

So? Break context scrapers till they give up, on your site or completely.

[-] chrash0@lemmy.world 2 points 2 months ago