
Alarmed by what companies are building with artificial intelligence models, a handful of industry insiders are calling for those opposed to the current state of affairs to undertake a mass data poisoning effort to undermine the technology.

Their initiative, dubbed Poison Fountain, asks website operators to add links to their websites that feed AI crawlers poisoned training data. It's been up and running for about a week.

AI crawlers visit websites and scrape data that ends up being used to train AI models, a parasitic relationship that has prompted pushback from publishers. When scraped data is accurate, it helps AI models offer quality responses to questions; when it's inaccurate, it has the opposite effect.

FauxLiving@lemmy.world 2 points 2 weeks ago (last edited 2 weeks ago)

If you're interested in this line of attack, you can also use similar techniques to defeat models that are trained to do object detection (like, for example, the ones that detect the location of your license plate) using adversarial noise attacks.

The short version: if you have a network that does detection, you can run inference with it on images that have been altered by a second network, and use the detection network's confidence in the second network's loss function. The second model can then be trained to create noise, which looks innocuous to human eyes, that maximally disrupts the segmentation/object detection process of the target detection network.
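The comment describes training a full generator network; as a minimal sketch of the same core idea, here's the simpler one-shot gradient variant (FGSM) run against a toy linear "detector". Everything below (the weights, the input, the epsilon budget) is invented for illustration and is not Benn Jordan's actual code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy stand-in "detector": a fixed linear model whose sigmoid output is
# its confidence that the input contains a license plate.
w = rng.normal(size=64)

def detect_confidence(x):
    return float(sigmoid(w @ x))

# An input the detector flags with high confidence (aligned with w).
x = 0.3 * w / np.linalg.norm(w)
clean_conf = detect_confidence(x)

# One-step attack: the gradient of the confidence w.r.t. the input is
# proportional to w, so stepping against sign(w) maximally suppresses
# detection for a given per-element perturbation budget.
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)
adv_conf = detect_confidence(x_adv)

print(f"clean confidence:       {clean_conf:.3f}")
print(f"adversarial confidence: {adv_conf:.3f}")
```

A real attack plays the same game against a deep detector, iterating gradient steps (or training a dedicated generator network, as in the video) instead of taking a single step.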

You could then print this noise on, say, a transparent overlay and put it on your license plate and automated license plate readers (ALPRs) would not be able to detect/read your plates. Note: Flock is aware of this technique and has lobbied state lawmakers to make putting anything on your plate to disrupt automated reading illegal in some places, check your laws.

Benn Jordan has actually created and trained such a network; video here: https://www.youtube.com/watch?v=Pp9MwZkHiMQ

And also uploaded his code, PlateShapez, to GitHub: https://github.com/bennjordan

In states where you cannot cover your license plate, you're not restricted from decorating the rest of your car. You could use a similar technique to create bumper stickers that are detected as license plates and place them all over your vehicle. Or even, as Benn suggested, print them with UV ink so they're invisible to humans but very visible to AI cameras, which often use UV lamps to provide night vision/additional illumination.

You could also, if you were so inclined, generate bumper stickers or a vinyl wrap which could make the detector be unable to even detect a car.

Adversarial noise attacks are one of the bigger vulnerabilities of AI-based systems and they come in many flavors and can affect anything that uses a neural network.

Another example (also from the video): you can encode voice commands in ordinary audio that are completely imperceptible to the listener, but that a device (like Alexa or Siri) will hear as a specific command ("Hey Siri, unlock the front door"). Any user-generated audio you encounter online could carry this kind of attack. The potential damage is pretty limited because AI assistants don't really control critical functions in your life yet... but you should probably not let your assistant listen to TikTok if it can do more than control your home lighting.
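The mechanics of that audio attack can be sketched the same way. This is a toy illustration, not a real speech-model attack: the linear "command classifier" and the amplitude budget are invented stand-ins, chosen only to show how a perturbation much quieter than the background can still flip what the model hears:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a voice assistant's command classifier: a fixed linear
# score over one second of 16 kHz audio; score > 0 means "command heard".
n = 16000
w = rng.normal(size=n) / np.sqrt(n)

def hears_command(audio):
    return float(w @ audio) > 0.0

# Benign audio (background noise), projected so the classifier score is
# exactly -1, i.e. the assistant hears nothing.
audio = 0.1 * rng.normal(size=n)
audio = audio - (w @ audio + 1.0) * w / (w @ w)

# Adversarial trigger: a perturbation capped at half the background noise
# amplitude that nevertheless pushes the score far past the threshold.
budget = 0.05
audio_adv = audio + budget * np.sign(w)

print(hears_command(audio), hears_command(audio_adv))  # False True
```

Against a real speech model the perturbation is optimized with gradients through the whole recognition pipeline (and shaped to hide under psychoacoustic masking), but the asymmetry is the same: a signal far below what a listener notices can dominate the model's decision.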

this post was submitted on 11 Jan 2026

Technology