This is one of the dumbest things I've ever seen.
Anyone who thinks this is going to work doesn't understand the concept of signal-to-noise.
Let's say you're an artist who draws cats, and you're worried big tech is going to use your images to teach an AI what a cat looks like. So you use this tool to subtly mangle the pixels so your cats are biased towards looking like lizards.
Over there is another artist who also draws cats and is worried about AI, so they use the tool to bias their cats towards looking like horses.
Taken across thousands of pictures of cats, all of those conflicting biases end up indistinguishable from noise. There's no coherent hidden signal left.
The only way this would work is if the majority of the training images of object A carried a hidden bias towards the same object B (which were the very artificial conditions used in the paper).
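Here's a toy sketch of that difference under my own made-up assumptions (a pretend feature space, one unit "bias" vector per poisoned image, training treated as a simple average); it isn't how the actual tool works, it just illustrates the averaging argument:

```python
# Toy sketch, NOT the tool's actual method: pretend each poisoned image
# contributes a unit "bias" vector pointing toward its chosen target class in
# some feature space, and that training roughly averages those contributions.
import numpy as np

rng = np.random.default_rng(0)
dim = 512            # dimensionality of the imagined feature space
n_images = 10_000    # poisoned cat images in the training set

def random_unit(n, d):
    v = rng.normal(size=(n, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# (a) every artist biases "cat" toward the SAME target -- the paper's setting
same_target = random_unit(1, dim)
coordinated = np.repeat(same_target, n_images, axis=0)

# (b) each artist picks their own target (lizard, horse, ...)
targets = random_unit(1000, dim)
uncoordinated = targets[rng.integers(0, 1000, size=n_images)]

for name, poison in [("coordinated", coordinated), ("uncoordinated", uncoordinated)]:
    print(f"{name}: mean poison signal = {np.linalg.norm(poison.mean(axis=0)):.3f}")

# prints roughly 1.000 vs 0.03: the conflicting biases average out to noise.
```

With everyone pulling the same way the signal survives at full strength; with a thousand different targets it collapses by roughly the square root of the number of targets.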
This compounds across the multiple axes you'd want to bias. If you draw fantasy cats, are you only biasing away from cats towards dogs, or are you also trying to bias away from fantasy towards pointillism? You can always bias towards pointillism dogs, but now your poisoning is less effective when combined with a cubist cat artist biasing towards anime dogs.
As you dilute the perturbation budget by trying to cover every aspect an AI could learn from your images, you push the signal further into the noise, so even if there were collective agreement on how to bias each individual axis, it'd be effectively worthless in a large and diverse training set.
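To put rough numbers on that dilution, here's a continuation of the same toy model (again my own simplification, assuming a fixed per-image perturbation budget split evenly across axes):

```python
# Same made-up model as above: even if artists coordinate on ONE axis
# (subject: cat -> dog), splitting a fixed per-image perturbation budget across
# extra axes (style, medium, ...) shrinks what survives on the agreed axis.
import numpy as np

rng = np.random.default_rng(1)
dim, n_images, budget = 512, 10_000, 1.0

def random_unit(n, d):
    v = rng.normal(size=(n, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

subject_axis = random_unit(1, dim)[0]        # the one direction everyone agrees on

for n_axes in (1, 2, 4):
    per_axis = budget / np.sqrt(n_axes)      # split the L2 budget evenly per image
    poison = np.tile(per_axis * subject_axis, (n_images, 1))
    for _ in range(n_axes - 1):              # remaining axes: uncoordinated targets
        targets = random_unit(1000, dim)
        poison += per_axis * targets[rng.integers(0, 1000, size=n_images)]
    agreed = poison.mean(axis=0) @ subject_axis
    print(f"{n_axes} axes: signal on the agreed axis = {agreed:.3f}")

# roughly 1.00, 0.71, 0.50: every extra axis dilutes the one signal that was
# actually coordinated.
```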
This is dumb.