Almost none of this data is possible to collect when using Tor Browser
Does anyone have this data for Mistral, HuggingChat and MetaAI? It would be nice to add them too
Edit: Leo from Brave would be great to compare too
Note this is if you use their apps. Not the API. Not through another app.
Not that we have any real info about who collects/uses what when you use the API
Am I missing something? What do the numbers mean in relation to the type? Sub types?
It's labeled "Unique data points". See the number 2 - Usage Data for Gemini, there's an arrow with label there.
Thank you I totally missed that.
perhaps it's the limit of each data type?!
Gemini harvests only your first four contacts, your last two locations, and so on.
how does one defeat that? have fewer than four friends and don't go out!
Ask people for their phone number to add to your contacts and give them your phone for a day
Who TF is using Grok?
Fascists. Why?
I'm interested in seeing how this changes when using the DuckDuckGo front end at duck.ai
there's no login and history is stored locally (probably remotely too)
Me when Gemini (aka google) collects more data than anyone else:
Not really shocked, we all know that google sucks
I would hazard a guess that the only reason those others aren't as high is because they don't have the same access to data. It's not that they don't want to, they simply can't (yet).
Locally run AI: 0
Are there tutorials on how to do this? Should it be set up on a server on my local network??? How hard is it to set up? I have so many questions.
If by more learning you mean learning
ollama run deepseek-r1:7b
Then yeah, it's a pretty steep curve!
If you're a developer then you can also search "$MyFavDevEnv use local ai ollama" to find guides on setting up. I'm using Continue extension for VS Codium (or Code) but there's easy to use modules for Vim and Emacs and probably everything else as well.
The main problem is leveling your expectations. The full Deepseek is a 671b model (that's billions of parameters) and the model weights (the thing you download when you pull an AI) are 404GB in size. You'd need an enormous amount of RAM to run one of those.
They make distilled models though, which are much smaller but still useful. The 14b is 9GB and runs fine with only 16GB of ram. They obviously aren't as impressive as the cloud hosted big versions though.
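As a rough back-of-envelope (my own assumption: about 0.6 bytes per parameter, which is roughly what the ~4-5 bit quantized downloads work out to), you can sanity-check those sizes in the shell:

```shell
# Hypothetical helper, not part of ollama: estimate download size in GB
# from a parameter count given in billions, assuming ~0.6 bytes/parameter
# (roughly Q4/Q5 quantization).
est_gb() { echo $(( $1 * 6 / 10 )); }

est_gb 671   # ~402 GB -- close to the 404GB full Deepseek weights
est_gb 14    # ~8 GB   -- in the ballpark of the ~9GB distilled 14b model
```

The same rule of thumb gives a quick feel for whether a given model will fit in your RAM before you start a multi-hundred-gigabyte download.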
My assumption is always that the person I am talking to is a normal Windows user who doesn't know what a terminal is. Most of them even freak out when they see "the black box with text on it". I guess on Lemmy the situation is better. It's just a bad habit of mine.
normal window user who don’t know what a terminal is. Most of them even freak out when they see “the black box with text on it”.
Good point! That being said, I'm wondering how we could help anybody, genuinely being inclusive, to transform that feeling of dread, basically "Oh, that's NOT for me!", into "Hmm, that's the challenging part, but it seems worth it and potentially feasible, I should try". I believe it's important because in turn the "normal Windows user" could potentially understand limitations hidden from them until now. They would not instantly understand their computer better, but the initial reaction would be different, namely considering a path of learning.
Any ideas or good resources on that? How can we demystify the terminal while providing a pleasant onboarding? How about a Web-based tutorial that asks users to try things side by side to manipulate files? They'd have their own desktop with their file manager on one side (if they want to) and a browser window with e.g. https://copy.sh/v86/ (WASM) on the other; this way they lose no data no matter what.
Maybe such examples could be renaming ImagesHoliday_WrongName.123.jpg to ImagesHoliday_RightName.123.jpg, then doing that for 10 files, then 100 files, showing that it scales and enables one to do things practically impossible without the terminal.
Another example could be combining commands, e.g. ls to see files, then piping to wc -l to count how many files are in a directory. That alone wouldn't be very exciting, so maybe then generate an HTML file with the list of files and the file count.
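Assuming a bash shell (the ${var/pattern/replacement} substitution is bash-specific), those rename/count examples might look something like this; the filenames are made up for illustration:

```shell
# Demo in a throwaway directory so nothing real is touched.
cd "$(mktemp -d)"
touch ImagesHoliday_WrongName.1.jpg ImagesHoliday_WrongName.2.jpg ImagesHoliday_WrongName.3.jpg

# Batch-rename: swap WrongName for RightName in every matching filename.
for f in ImagesHoliday_WrongName.*.jpg; do
  mv "$f" "${f/WrongName/RightName}"   # bash substitution; 100 files is no more work than 10
done

# Combine commands: count the files, then render the listing as an HTML file.
ls *.jpg | wc -l                       # prints 3
{ echo "<p>$(ls *.jpg | wc -l) files:</p><ul>"
  ls *.jpg | sed 's|.*|<li>&</li>|'    # wrap each filename in a list item
  echo "</ul>"; } > listing.html
```

The jump from "rename one file in a GUI" to "rename any number of files with the same three lines" is exactly the kind of payoff that might flip the "not for me" reaction.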
Honestly I believe finding the right examples that genuinely showcases the power of the terminal, the agency it brings, is key!
No worries! You're probably right that it's better not to assume, and it's good of you to provide some different options.
Check out Ollama, it’s probably the easiest way to get started these days. It provides tooling and an api that different chat frontends can connect to.
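For the curious, that API is just local HTTP: Ollama listens on port 11434 by default, and a chat frontend sends it a request along these lines (the model name is whichever one you've pulled):

```
POST http://localhost:11434/api/generate
{"model": "deepseek-r1:7b", "prompt": "Why is the sky blue?", "stream": false}
```

That's why so many different frontends can plug into it: anything that can speak HTTP to localhost can be a chat UI.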
If only my hardware could support it..
I can actually use locally some smaller models on my 2017 laptop (though I have increased the RAM to 16 GB).
You'd be surprised how much can be done with how little.
DeepSeek at home: None
How much VRAM does your machine have? Are you using open webui?
And what about goddamn Mistral?
It's French as far as I know, so at least it abides by GDPR by default.
All services you see above are provided to EU citizens, which is why they also have to abide by GDPR. GDPR does not disallow the gathering of information. Google, for example, is GDPR compliant, yet they are number 1 on that list. That’s why I would like to know if European companies still try to have a business case with personal data or not.
If there's one thing I don't trust, it's non-EU companies following GDPR. Sure, they're legally bound to, but I mean, Meta doesn't care, so why should the rest?
(Yes I'm being overly dramatic about this, but I've lost trust ages ago in big tech companies)
Fully agree, which is also why I choose EU/Swiss made services by default
It doesn't mean they "have to abide by GDPR" or that they "are GDPR compliant". All it means is they appear to be GDPR compliant and pretend to respect user privacy. The sole fact that the AI chatbots are run in US-based data centres is against GDPR. The EU has had many different personal data transfer agreements with the US, all of which were canceled shortly after signing due to US corporations breaking them repeatedly (Facebook usually being the main culprit).
I tried to say that, but you explained it better, so thank you. Without a court case, you will essentially never know if they are truly GDPR compliant.
Who would have guessed that the advertising company collects a lot of data
And I can't possibly imagine that Grok actually collects less than ChatGPT.
Data from Surfshark aka NordVPN lol. Take it with a few chunks of salt.
Back in the day, malware makers could only dream of collecting as much data as Gemini does.
I have a bridge to sell you if you think grok is collecting the least amount of info.
Grok's business model isn't collecting your data.
It's feeding you propaganda.
They are more interested in the other direction of data flow.
Or you could use Deepseek's workaround and run it locally. You know, open source and all.
Is there a way to fake all the data they try to collect?
I just came across this article, which people who are into self-hosting can take a look at and participate in. It's basically a tool that generates never-ending web pages of nonsense that load slowly (but not so slowly that the AI tools move on), to slow scrapers down and thus make it cost them more to scrape the internet if enough people are doing it. You can also hide it in a way that a legit user would never see it on your site:
https://arstechnica.com/tech-policy/2025/01/ai-haters-build-tarpits-to-trap-and-trick-ai-scrapers-that-ignore-robots-txt/ https://zadzmo.org/code/nepenthes/
Pretty sure this is what they scrape from your device if you install their app. I don't know how else they would get access to contacts and location and stuff. So yeah, you could run it on a virtual Android device and feed it garbage data, but I assume the app or their backend would detect that and throw out your data.
How about if I only use the web version?
Privacy
A place to discuss privacy and freedom in the digital world.
Privacy has become a very important issue in modern society. With companies and governments constantly abusing their power, more and more people are waking up to the importance of digital privacy.
In this community everyone is welcome to post links and discuss topics related to privacy.
Some Rules
- Posting a link to a website containing tracking isn't great; if the contents of the website are behind a paywall, maybe copy them into the post
- Don't promote proprietary software
- Try to keep things on topic
- If you have a question, please try searching for previous discussions, maybe it has already been answered
- Reposts are fine, but should have at least a couple of weeks in between so that the post can reach a new audience
- Be nice :)