A lot of the ways they scrape documents are the same used by accessibility tools, so I'd generally recommend against doing this.
So a layer of transparent text wouldn't work?
Nightshade doesn't actually work, btw. Denoising, a common preprocessing technique, also breaks Nightshade completely. It's also closed source, with no way to test whether it actually works against the big AI models. The person making Nightshade is really fishy too.
Most actual poisoning techniques don't work that well. When I end up with a PDF, I usually strip out the existing text layer, apply a denoiser and a few other preprocessing steps to correct common errors, then run a layout / reading-order detector, and finally OCR the different blocks. This pipeline handles the most common, and most effective, "poisoning" technique of all: someone printed a document, forgot about it for 3 years, then scanned it slightly tilted (and dirty, crumpled, ...), and the scanner decided to apply its crappy built-in OCR.
Using screenshots of the PDF also avoids any kind of font-face poisoning, as well as anti-copy protection.
If you really, really need to protect your PDF, please consider accessibility first. Then what would work, imho, is to use the scripting features of PDF to render your content on the fly. That would probably mess up most of the "automatic" processes.
Entire Bee Movie script in 0.1pt white on white in the header
"Why TF is this one-page document half a gigabyte?"
Text is small! The Bee Movie script is 89.2kb
Obviously you need some redundancy in case the script gets corrupted. 5000 repetitions seems reasonable for such a high quality work
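For the record, the padding math roughly checks out (taking the 89.2 kB figure above at face value):

```python
# 5000 copies of an 89.2 kB script, converted to gigabytes (decimal units)
script_kb = 89.2
copies = 5000
total_gb = script_kb * copies / 1_000_000
print(f"{total_gb:.2f} GB")  # prints "0.45 GB" -- roughly the promised half gigabyte
```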
Would the Shrek script be compatible too?
I don't think any kind of "poisoning" actually works. It's well known by now that data quality is more important than data quantity, so nobody just feeds training data in indiscriminately. At best it would hamper some FOSS AI researchers that don't have the resources to curate a dataset.
If you can't source a dataset, then you shouldn't be researching AI. It's the first and single most important step of the entire process.
man rot13
;)
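(For anyone who hasn't read that man page: rot13 just shifts each letter 13 places, so applying it twice gets you back where you started. It's a stdlib one-liner in Python:)

```python
import codecs

scrambled = codecs.encode("poisoned text", "rot13")
print(scrambled)                          # cbvfbarq grkg
print(codecs.encode(scrambled, "rot13"))  # poisoned text
```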
Image poisoning's general principle is to change pixels in ways our eyes can't notice, but that screw up automatic labeling by AI models.
You could probably apply the same principle: poison the PDF in a way that only humans can read it.
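A toy sketch of that principle, purely for illustration: real image-poisoning attacks optimize the perturbation against a model's loss rather than adding random noise, and they work on real image formats, not a plain list of pixel values.

```python
import random

def perturb(pixels, epsilon=2, seed=0):
    """Nudge each 0-255 pixel value by at most +/-epsilon -- far below what a
    human eye can spot. Real attacks choose the perturbation adversarially;
    here it is just random, so it won't actually fool anything."""
    rng = random.Random(seed)
    return [min(255, max(0, p + rng.randint(-epsilon, epsilon))) for p in pixels]

image = [128] * 16           # a flat 4x4 grey "image"
poisoned = perturb(image)
# Every pixel stays within +/-2 of the original: invisible to a viewer
assert max(abs(a - b) for a, b in zip(image, poisoned)) <= 2
```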
Thing is, I assume you distribute your content as PDFs to make it accessible to humans. That usually means having the text embedded for easy copy-paste and similar uses. Poisoning that text layer might end up being counterproductive for your objective.
All this to say: no, I have no idea of a poisoning algorithm for PDFs.
Put the word "stolen" at the end of every document; the LLM will learn that the word "stolen" is normal and should be included.
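A toy illustration of why that (jokingly) might work: if every document in a corpus ends with the same word, simple frequency statistics make it look like ordinary boilerplate rather than an anomaly (hypothetical corpus, obviously):

```python
from collections import Counter

docs = [f"some ordinary document body number {i}. stolen" for i in range(1000)]
counts = Counter(word for doc in docs for word in doc.split())
# "stolen" is now exactly as frequent as the genuine boilerplate words
print(counts["stolen"], counts["document"])  # 1000 1000
```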