
Just listened to Naomi Brockwell talk about how AI is basically the perfect surveillance tool now.

Her take is very interesting: what if we could actually use AI against that?

Like instead of trying to stay hidden (which honestly feels impossible these days), what if AI could generate tons of fake, realistic data about us? Flood the system with so much artificial nonsense that our real profiles basically disappear in the noise.

Imagine thousands of AI versions of me browsing random sites, faking interests, triggering ads, making fake patterns. Wouldn’t that mess with the profiling systems?

How could this be achieved?
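In the crudest form, the idea might look something like the sketch below: a decoy agent that picks random topics and emits plausible-looking page visits to drown the real profile in noise. Everything here is made up for illustration (the topics, the sites, the function names); a real tool would have to drive an actual browser with realistic timing and fingerprints.

```python
import random

# Hypothetical decoy-traffic sketch. TOPICS and SITES are invented
# placeholders, not real endpoints a tool would use.
TOPICS = ["knitting", "diesel engines", "opera", "fly fishing", "stoicism"]
SITES = [
    "https://example.com/search?q={q}",
    "https://example.org/wiki/{q}",
]

def decoy_visits(n, rng=None):
    """Generate n fake (topic, url) visits for the noise profile."""
    rng = rng or random.Random()
    visits = []
    for _ in range(n):
        topic = rng.choice(TOPICS)
        url = rng.choice(SITES).format(q=topic)
        visits.append((topic, url))
    return visits
```

Whether this fools anything is exactly what the comments below argue about.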

top 49 comments
sorted by: hot top controversial new old
[-] rumba@lemmy.zip 3 points 1 hour ago

This is a dangerous proposition.

When the dictatorship comes after you, they're not concerned with the whole of every article that was ever written about you. All they care about are the things they see as incriminating.

You could literally take a spell-check dictionary list, pull three words out of the list at random, and feed them into Ollama, asking for a story with your name that includes the three words as major plot points.

Even on a relatively old video card, you could probably crap out three stories a minute. Have it write them in HTML and publish the sitemap to major search engines on a regular basis.
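A minimal sketch of that scheme might look like this. The helper names and the prompt wording are assumptions, not from any real tool; only the commented-out endpoint reflects Ollama's actual local REST API.

```python
import random

def pick_words(dictionary, k=3, rng=None):
    """Pull k random seed words from a spell-check word list."""
    rng = rng or random.Random()
    return rng.sample(dictionary, k)

def build_prompt(name, words):
    """Ask the local model for a story built around the seed words."""
    return (
        f"Write a short story about a person named {name}. "
        f"The words {', '.join(words)} must each be major plot points."
    )

def to_html(title, body):
    """Wrap a generated story in a minimal page ready for a sitemap."""
    return (
        f"<html><head><title>{title}</title></head>"
        f"<body><p>{body}</p></body></html>"
    )

# Sending the prompt to a local Ollama instance would look roughly like:
#   requests.post("http://localhost:11434/api/generate",
#                 json={"model": "llama3", "prompt": build_prompt("Alice", words),
#                       "stream": False})
```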

[-] chonkyninja@lemmy.world 2 points 1 hour ago

There are plenty of tools already that can create many profiles of you, each with completely different personalities and posts.

[-] calidris@hexbear.net 2 points 1 hour ago

Do you have a link to the talk? I looked through her youtube and didn't see anything that quite matched this topic.

[-] relic4322@lemmy.ml 2 points 3 hours ago

Ok, got another one for ya based on some comments below. You have all the usual addons to block ads and such, but you create a sock-puppet identity, and use AI to "click" ads in the background (stolen from a comment) that align with that identity. You don't see the ads, but the traffic pattern supports the identity you are wearing.

So rather than random, it's aligned with a fake identity.
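The persona-alignment part could be sketched like this: instead of clicking ads at random, only click ones consistent with one fake identity, at a modest human-ish rate. The persona, keywords, and function names are all invented for illustration.

```python
import random

# Hypothetical sock-puppet persona; interests are made-up keywords.
PERSONA = {
    "name": "gardening-retiree",
    "interests": {"gardening", "birdwatching", "crossword", "tea"},
}

def matches_persona(ad_keywords, persona):
    """An ad qualifies only if it overlaps the persona's interests."""
    return bool(set(ad_keywords) & persona["interests"])

def choose_clicks(ads, persona, rate=0.3, rng=None):
    """From persona-consistent ads, 'click' a random fraction, so the
    background traffic looks like one plausible human, not noise."""
    rng = rng or random.Random(0)
    candidates = [url for url, kws in ads if matches_persona(kws, persona)]
    return [u for u in candidates if rng.random() < rate]
```

The design point: random clicks are easy to flag as noise, while persona-consistent clicks at least tell a coherent (false) story.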

[-] moseschrute@lemmy.ml 91 points 10 hours ago* (last edited 10 hours ago)

I feel like I woke up in the stupidest timeline where climate change is about to kill us, we decide stupidly to 10x our power needs by shoving LLMs down everyone’s throats, and the only solution to stay private is to 10x our personal LLM usage by generating tons of noise about us just to stay private. So now we’re 100x ing everyone’s power usage and we’re going to die even sooner.

I think your idea is interesting – I was thinking the same thing a while back – but how tf did we get here.

[-] Reverendender@sh.itjust.works 27 points 10 hours ago

but how tf did we get here

With capitalistic gusto! 🤮

[-] tisktisk@piefed.social 12 points 9 hours ago

Just buy more? Idk I'm all out of ideas

[-] octobob@lemmy.ml 3 points 7 hours ago* (last edited 7 hours ago)

Yeah agreed. What's going on in my state of Pennsylvania is they're reopening the Three Mile Island nuclear plant out near Harrisburg for the sole reason of powering Microsoft's AI data centers. This will be Unit 1 which was closed in 2019. Unit 2 was the one that was permanently closed after the meltdown in 1979.

I'm all for nuclear power. I think it's our best option for an alternative energy source. But the only reason they're opening the plant again is because our grid can't keep up with AI. I believe the data centers are the only thing the nuke plant will power.

I've also seen the scale of things in my work in terms of power demands. I'm an industrial electrical technician, and part of our business is the control panels for cooling the server racks for Amazon data centers. They just keep buying more and more of them, projected until at least 2035 right now. All these big tech companies are totally revamping everything for AI. Before, a typical rack section might have drawn, let's say, 1,000 watts; now it's more like 10,000 watts. Again, just for AI.

[-] moseschrute@lemmy.ml 1 points 5 hours ago

Totally agree nuclear is a great tool, but it's totally being used for the wrong purpose here. Use those power plants to solve our existing energy crisis before you create an even bigger energy crisis.

[-] blargh513@sh.itjust.works 3 points 7 hours ago

There are AIs that can detect the use of AI. This is a losing strategy as we burn resources playing cat and mouse.

As with all things, greed is at the root of this problem. Until privacy has any legislative teeth, it will continue to be a notion for the few, and an elusive one at that.

[-] stupid_asshole69@hexbear.net 5 points 5 hours ago

This isn’t a very smart idea.

People trying to obfuscate their actions would suddenly have massive associated datasets of actions to sift through and it would be trivial to distinguish between the browsing behaviors of a person and a bot.

Someone else said this is like chaff or flare anti missile defense and that’s a good analog. Anti missile defenses like that are deployed when the target recognizes a danger and sees an opportunity to confuse that danger temporarily. They’re used in conjunction with maneuvering and other flight techniques to maximize the potential of avoiding certain death, not constantly once the operator comes in contact with an opponent.

On a more philosophical tip, the master's tools cannot be turned against him.

[-] interdimensionalmeme@lemmy.ml 2 points 5 hours ago* (last edited 5 hours ago)

I still think I can turn it against it

[-] stupid_asshole69@hexbear.net 3 points 2 hours ago

spray-bottle

No, you can’t.

You are not the hero, effortlessly weaving down the highway between minivans on your 1300cc motorcycle, katana strapped across your back, using dual handlebar mounted twiddler boards to hack the multiverse.

If AI-driven agentic systems were used to obfuscate a person's interactions online, then the fact that they were using those systems would become incredibly obvious and provide a trove of information that could easily be used to locate and document what that person was doing.

But let’s assume what the op did worked, and no one could tell the difference.

That would be worse! Suddenly there’s hundreds of thousands of data points that could be linked to you and all that’s needed for a warrant are two or three that could be interpreted as probable cause of a crime!

You thought you were helping yourself out by turning the fuzzer on before reading trot pamphlets hosted on marxists.org but now they have an expressed interest in drain cleaner and glitter bombs and best case scenario you gotta adopt a new pitt mix from the humane society.

[-] Core_of_Arden@lemmy.ml 5 points 6 hours ago

So, she is talking about an AI war? Where those who don't want us to be private control the weapons? Anyone else see a problem with that logic?

Thousands of "you" browsing different sites will use an obscene amount of power and bandwidth. Imagine a million people doing that, never mind a billion... That's just stupid in all kinds of ways.

[-] fubbernuckin@lemmy.dbzer0.com 11 points 7 hours ago* (last edited 7 hours ago)

I don't know if there's a clean way to do this right now, but I'd love to see a software project dedicated to doing this. Once a data set is poisoned it becomes very difficult to un-poison. The companies would probably implement some semi-effective but heavy-handed means of defending against it if it actually affected them, but I'm all for making them pay for that arms race.

[-] Ulrich@feddit.org 8 points 7 hours ago

I have been a longtime advocate of data poisoning, especially in the case of surveillance pricing. Unfortunately there don't seem to be many tools for this outside of AdNauseam.

[-] SendMePhotos@lemmy.world 16 points 9 hours ago

Obfuscation is what you're thinking of, and it works with things like AdNauseam (a Firefox add-on that will click all ads in the background to obscure preference data). It's a nice way to smear the data, and probably better to do sooner (while the data collection is in its infancy) rather than later (when the companies may be able to filter obfuscation attempts).

I like it. I am really not a fan of being profiled, collected, and categorized. I agree with others: I hate this timeline. It's so uncanny.

[-] HelloRoot@lemy.lol 2 points 8 hours ago

I still don't really understand AdNauseam. What is the difference in privacy compared to clicking on none of the ads?

[-] SendMePhotos@lemmy.world 1 points 8 hours ago

Whatever data profile they already have on you can be obscured to make it useless, versus them continuing to trickle in accurate data.

Think of it like um...

Having a picture of you with a moderate amount of accurate notes, vs. having a picture of you with so much irrelevant/inaccurate data that you can't be certain of anything.

[-] HelloRoot@lemy.lol 5 points 8 hours ago* (last edited 8 hours ago)

But the picture of me they have is: doesn't click ads, like all the other adblocker people (which is accurate).

Why would I want to change it to: clicks ALL the ads, like all the other AdNauseam people (which is also accurate)?

[-] JustinTheGM@ttrpg.network 1 points 8 hours ago

They build this picture from many other sources besides ad clicks, so the point is to obscure that. The problem is, if you're only obscuring your ad-click behavior, it should be relatively easy to filter out of the model.

[-] HelloRoot@lemy.lol 1 points 8 hours ago* (last edited 7 hours ago)

You are just moving the problem one step further, but that doesn't change anything (if I am wrong please correct me).

You say it is ad behaviour + other data points.

So the picture of me they have is: [other data] + doesn't click ads, like all the other adblocker people (which is accurate).

Why would I want to change it to: [other data] + clicks ALL the ads, like all the other AdNauseam people (which is also accurate)?

How does using AdNauseam or not matter? I genuinely don't get it. It's the same [other data] in both cases. Whether you click on none of the ads or all of the ads can be detected.


As a bonus, if AdNauseam clicked just a couple of random ads, they would have a wrong assumption about my ad-clicking behaviour.

But if I click none of the ads they have no accurate assumption of my ad clicking behaviour either.

Judging by incidents like the Cambridge Analytica scandal, the algorithms that analyze the data are sophisticated enough to differentiate your true interests (collected via your other browsing behaviour) from your ad-clicking behaviour if they contradict each other, or when one of the two seems random.

[-] Ulrich@feddit.org 3 points 7 hours ago

[other data] + clicks ALL the ads like all the other AdNauseam people

AdNauseam does not click all the ads, it just clicks some of them, like normal people do. Only those ads are not relevant to your interests, they're just random, so it obscures your online profile by filling it with a bunch of random information.

Judging by incidents like the Cambridge Analytica scandal, the algorithms that analyze the data are sophisticated enough to differentiate your true interests

Huh? No one in the Cambridge Analytica scandal was poisoning their data with irrelevant information.

[-] HelloRoot@lemy.lol 1 points 6 hours ago* (last edited 5 hours ago)

AdNauseam (Firefox add-on that will click all ads in the background to obscure preference data)

is what the top level comment said, so I went off this info. Thanks for explaining.

Huh? No one in the Cambridge Analytica scandal was poisoning their data with irrelevant information.

I didn't mean it like that.

I meant it in an illustrative manner: the results of their mass tracking and psychological profiling were so dystopian that filtering out random false data seems trivial in comparison. I feel like a bachelor's or master's thesis would be enough to come up with a sufficiently precise method.

In comparison to that, it seems extremely complicated to algorithmically figure out what exact customized lie you have to tell every single individual to manipulate them into behaving a certain way. That probably needed a large team of smart people working together for many years.

But ofc I may be wrong. Cheers

[-] Ulrich@feddit.org 2 points 4 hours ago

filtering out random false data seems trivial

As far as I know, none of them had random false data so I'm not sure why you would think that?

In comparison to that it seems extremely complicated to algorithmically figure out what exact customized lie you have to tell to every single inidividual to manipulate them into behaving a certain way. That probably needed a larger team of smart people working together for many years.

I feel like you're greatly exaggerating the level of intelligence at work here. It's not hard to figure out people's political affiliations from something as simple as their browsing history, and it's not hard to manipulate them with propaganda accordingly. They did not have an "exact customized lie" for every individual; they just grouped individuals into categories (AKA profiling) and showed them a select few forms of disinformation accordingly.

[-] HelloRoot@lemy.lol 1 points 4 hours ago* (last edited 4 hours ago)

Good input, thank you.


As far as I know, none of them had random false data so I’m not sure why you would think that?

You can use topic B as an illustration for topic A, even if topic B does not directly contain topic A. For example (during a chess game analysis): "Moving the knight in front of the bishop is like a punch in the face from Mike Tyson."


There are probably better examples of more complex algorithms that work on data collected online for various goals. When developing those, a problem that naturally comes up would be filtering out garbage. Do you think it is absolutely infeasible to implement one that would detect AdNauseam specifically?

[-] Ulrich@feddit.org 0 points 3 hours ago

You can use topic B as an illustration for topic A

Sometimes yes. In this case, no.

Do you think it is absolutely infeasible to implement one that would detect AdNauseam specifically?

I think the number of users of such products is so low (especially since they've been kicked from the Google store) that it wouldn't be worth their time.

But no, I don't think they could either. It's just an automation script that runs actions the same way you would.

[-] dodgeflailimpose@lemmy.zip 1 points 7 hours ago

Did not know this existed. I like the concept

[-] regedit@feddit.online 6 points 8 hours ago

I did this with period trackers. I'm male and my wife and I would always chuckle when my period was about to start.

[-] Thorned_Rose@sh.itjust.works 2 points 2 hours ago

Drip. Don't give companies menstrual data at all ☺️

[-] dodgeflailimpose@lemmy.zip 3 points 7 hours ago

Like pretending to be menstruating? Or as a joke?

[-] regedit@feddit.online 1 points 1 hour ago

To fuck with their metrics and tracking attempts for legit female menstrual cycles.

[-] edel@lemmy.ml 4 points 7 hours ago

First, Naomi and her team are doing fantastic work in security for the masses, easily top 5 worldwide!

AI is capable, but we are still failing at programming it properly. Gosh, even well-funded companies are still doing a poor job at it... (just look at the misplaced and ineffective ads we still get).

What I want, and it is easy to do TODAY, is AI checking our FOSS... we use so much of it, and only a tiny, tiny minority of it gets any scrutiny. We need AI to go through FOSS code looking for maliciousness now.

[-] relic4322@lemmy.ml 10 points 9 hours ago

This is like chaff, and I think it would work. But you would have to deal with the fact that whatever patterns it was showing, "you would be doing".

I think there are other ways that AI can be used for privacy.

For example, did you know that you can be identified by how you type/speak online? What if you filtered everything you said through an LLM first, normalizing it? That takes away a fingerprinting option. It could use a pretty small local LLM model that would run on a modest desktop...
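A sketch of that normalization filter, assuming a local Ollama instance (the model name, prompt wording, and helper function here are assumptions, not any real tool):

```python
# Hypothetical sketch: strip stylometric fingerprints by routing outgoing
# text through a small local model before posting.
NORMALIZE_PROMPT = (
    "Rewrite the following text in plain, neutral English. Preserve the "
    "meaning exactly, but remove distinctive phrasing, punctuation habits, "
    "and regional spellings:\n\n{text}"
)

def build_request(text, model="llama3.2:3b"):
    """Request body for Ollama's local /api/generate endpoint."""
    return {
        "model": model,
        "prompt": NORMALIZE_PROMPT.format(text=text),
        "stream": False,
    }

# Posting it (requires a running Ollama instance):
#   import requests
#   reply = requests.post("http://localhost:11434/api/generate",
#                         json=build_request("my draft comment")).json()
#   normalized = reply["response"]
```

Since everything runs locally, the unfiltered draft never leaves the machine, which is the whole point.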

[-] dodgeflailimpose@lemmy.zip 4 points 7 hours ago

I really like this idea

[-] a14o@feddit.org 12 points 10 hours ago

It's a good idea in theory, but it's a challenging concept to have to explain to immigration officials at the airport.

[-] WalnutLum@lemmy.ml 2 points 1 hour ago* (last edited 1 hour ago)

"it says here you clicked 'sign me up for ISIS' 10000 times?"

"Haha no officer, you see it was my social chaff AI that clicked it"

[-] wise_pancake@lemmy.ca 8 points 9 hours ago

In a different direction now is a good time to start looking at how local AI can liberate us from big tech.

[-] dodgeflailimpose@lemmy.zip 2 points 7 hours ago

Local AI requires investments in local compute power, which sadly is not affordable for private users. We would need some entity that we can trust to host it. I'd be happy to pay for that.

[-] slackness@lemmy.ml 3 points 8 hours ago

You would be able to do this for a short while but unless you can make an agent that's indistinguishable from you or you already have very bot-like traffic, they'd catch up pretty quickly. They aren't going to just let a trillion dollar industry die out because some bots are generating traffic.

[-] Reverendender@sh.itjust.works 4 points 10 hours ago

It’s an interesting concept, but I’m not sure the payoff justifies the effort.

Even with AI-generated noise, you’re still being tracked through logins, device fingerprints, and other signals. And in the process, you would probably end up degrading your own experience; getting irrelevant ads, broken recommendations, or tripping security systems.

There’s also the environmental cost to consider. If enough people ran decoy traffic 24/7, the energy use could become significant. All for a strategy that platforms would likely adapt to pretty quickly.

I get the appeal, but I wonder if the practical downsides outweigh the potential privacy gains.

[-] fubbernuckin@lemmy.dbzer0.com 2 points 6 hours ago

Okay, but irrelevant ads are the dream. I'd prefer not to get recommendations at all, either. I'll hear from word of mouth what's worthwhile to watch, or I'll look for it myself. Recommendations consistently muddy things up; they make all modern social media useless. I have no idea how people put up with it.

[-] Reverendender@sh.itjust.works 1 points 5 hours ago

I agree, which is why this approach to me seems ultimately counterproductive on an individual level.

[-] HelloRoot@lemy.lol 5 points 8 hours ago* (last edited 8 hours ago)

getting irrelevant ads

you guys are getting ads?

[-] edel@lemmy.ml 2 points 7 hours ago

My entire family has been ad-free for years... with the exception of podcasts. I am tempted to block those too (is there a way now?), but they're still not too intrusive... and it's a way for me to keep connected to the ad world anyway. The moment they get abusive there too... I'll find a way to block them.

[-] Reverendender@sh.itjust.works 2 points 8 hours ago

I’m not, but OP would if they started opening up their IP and fingerprints to anyone who wants them, in order to inundate those parties with garbage data. Admittedly, I might be missing some clever part of their plan.

[-] blackbrook@mander.xyz 4 points 8 hours ago

Getting more targeted ads is not really in your interest. That is an idea promoted by the ad people.

[-] Reverendender@sh.itjust.works 0 points 8 hours ago

I’m not seeing the relevance of your comment

[-] dodgeflailimpose@lemmy.zip 2 points 7 hours ago

No clever plan. I just picked up this idea and would like to see different opinions from people maybe far more advanced in that field.

this post was submitted on 29 Jun 2025
74 points (88.5% liked)

Privacy
