submitted 1 day ago* (last edited 1 day ago) by will_a113@lemmy.ml to c/privacy@lemmy.ml

A chart titled "What Kind of Data Do AI Chatbots Collect?" lists and compares seven AI chatbots—Gemini, Claude, CoPilot, Deepseek, ChatGPT, Perplexity, and Grok—based on the types and number of data points they collect as of February 2025. The categories of data include: Contact Info, Location, Contacts, User Content, History, Identifiers, Diagnostics, Usage Data, Purchases, Other Data.

  • Gemini: Collects all 10 data types; highest total at 22 data points
  • Claude: Collects 7 types; 13 data points
  • CoPilot: Collects 7 types; 12 data points
  • Deepseek: Collects 6 types; 11 data points
  • ChatGPT: Collects 6 types; 10 data points
  • Perplexity: Collects 6 types; 10 data points
  • Grok: Collects 4 types; 7 data points
[-] pennomi@lemmy.world 125 points 1 day ago
[-] exothermic@lemmy.world 16 points 23 hours ago

Are there tutorials on how to do this? Should it be set up on a server on my local network??? How hard is it to set up? I have so many questions.

[-] Kiuyn@lemmy.ml 17 points 22 hours ago* (last edited 22 hours ago)

I recommend GPT4all if you want to run locally on your PC. It is super easy.

If you want to run it on a separate server, Ollama + some kind of web UI is the best.
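
For the web UI part, one popular pairing is Open WebUI. A minimal sketch of its documented Docker invocation, assuming Ollama is already running on the same host:

# run Open WebUI in Docker, pointing it at the host's Ollama instance
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

Then browse to http://localhost:3000 and pick a model.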

Ollama can also be run locally, but IMO it takes more learning than a GUI app like GPT4all.

[-] codexarcanum@lemmy.dbzer0.com 8 points 22 hours ago

If by more learning you mean learning

ollama run deepseek-r1:7b

Then yeah, it's a pretty steep curve!

If you're a developer, you can also search "$MyFavDevEnv use local ai ollama" to find setup guides. I'm using the Continue extension for VS Codium (or Code), but there are easy-to-use modules for Vim and Emacs and probably everything else as well.

The main problem is leveling your expectations. The full Deepseek is 671b (that's billions of parameters), and the model weights (the thing you download when you pull an AI) are 404GB in size. You need an enormous amount of RAM to run one of those.

They make distilled models though, which are much smaller but still useful. The 14b is 9GB and runs fine with only 16GB of RAM. They obviously aren't as impressive as the cloud-hosted big versions though.
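
For example, pulling and chatting with the 14b distill looks like this (the tag is from the Ollama model library):

# download the 14b distilled weights (~9GB), then start an interactive chat
ollama pull deepseek-r1:14b
ollama run deepseek-r1:14b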

[-] smee@poeng.link 1 points 4 hours ago

Or if you're using Flatpak, it's an add-on for Alpaca. One-click install, GUI management.

Windows users? By the time you understand how to install AI locally, you're probably knowledgeable enough to migrate to Linux. What the heck is the point of using local AI for privacy while running Windows?

[-] Kiuyn@lemmy.ml 5 points 17 hours ago* (last edited 17 hours ago)

My assumption is always that the person I am talking to is a normal Windows user who doesn't know what a terminal is. Most of them even freak out when they see "the black box with text on it". I guess on Lemmy the situation is better. It's just a bad habit of mine.

[-] utopiah@lemmy.ml 2 points 11 hours ago

normal Windows user who doesn't know what a terminal is. Most of them even freak out when they see "the black box with text on it".

Good point! That being said, I'm wondering how we could help anybody, genuinely being inclusive, transform that feeling of dread, basically "Oh, that's NOT for me!", into "Hmm, that's the challenging part, but it seems worth it and potentially feasible, I should try". I believe it's important because in turn the "normal Windows user" could come to understand limitations that were hidden from them until now. They would not instantly understand how their computer works any better, but the initial reaction would be different, namely considering a path of learning.

Any ideas or good resources on that? How can we demystify the terminal with a pleasant onboarding? How about a web-based tutorial that asks users to manipulate files side by side? They'd have their own desktop with their file manager on one side (if they want) and a browser window with e.g. https://copy.sh/v86/ (WASM) on the other; that way they'd lose no data no matter what.

Maybe such an example could be renaming a file from ImagesHoliday_WrongName.123.jpg to ImagesHoliday_RightName.123.jpg, then doing that for 10 files, then 100 files, thus showing that it scales and enables one to do things practically impossible without the terminal.
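
As a sketch, that bulk rename is a three-line loop in bash (the filenames are just my hypothetical example):

# rename every matching file, whether there are 10 or 10,000 of them
for f in ImagesHoliday_WrongName.*.jpg; do
  mv -- "$f" "${f/WrongName/RightName}"
done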

Another example could be combining commands, e.g. ls to see files, then wc -l to count how many files are in a directory. That alone would not be very exciting, so maybe then generating an HTML file with the list of files and the file count.
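
Something like this minimal sketch, say (files.html is just an example name):

# count the files in the current directory
ls | wc -l
# turn the same listing into an HTML page
# (written to the parent directory so it doesn't count itself)
{
  echo "<ul>"
  for f in *; do echo "  <li>$f</li>"; done
  echo "</ul><p>$(ls | wc -l) files</p>"
} > ../files.html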

Honestly, I believe finding the right examples, ones that genuinely showcase the power of the terminal and the agency it brings, is key!

[-] codexarcanum@lemmy.dbzer0.com 2 points 17 hours ago

No worries! You're probably right that it's better not to assume, and it's good of you to provide some different options.

[-] pennomi@lemmy.world 10 points 23 hours ago

Check out Ollama, it's probably the easiest way to get started these days. It provides tooling and an API that different chat frontends can connect to.
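
For instance, once Ollama is running it answers HTTP on localhost:11434; a quick sketch (the model name is just an example, use whatever you've pulled):

# one-off completion against the local Ollama API
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'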

[-] nimpnin@sopuli.xyz 1 points 21 hours ago

I used this a while back and it was pretty straightforward: https://github.com/nathanlesage/local-chat

[-] skarn@discuss.tchncs.de 1 points 22 hours ago* (last edited 22 hours ago)

If you want to start playing around immediately, try Alpaca on Linux or LM Studio on Windows. See if it works for you, then move on from there.

Alpaca actually runs its own Ollama instance.

[-] smee@poeng.link 1 points 4 hours ago

Ollama recently became a Flatpak extension for Alpaca, but it's a one-click install from Alpaca's software management entry. All storage locations are the same, so there's no need to re-download any open models or remake tweaked models from the previous setup.

[-] SeekPie@lemm.ee 1 points 12 hours ago

And if you want to be 100% sure that Alpaca doesn't send any info anywhere, you can restrict its network access in Flatseal, as it's a Flatpak.
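
The same lockdown works from the command line too; a sketch, assuming Alpaca's Flathub app ID is com.jeffser.Alpaca:

# deny the sandboxed app all network access for the current user
flatpak override --user --unshare=network com.jeffser.Alpaca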

[-] TuxEnthusiast@sopuli.xyz 12 points 1 day ago* (last edited 11 hours ago)

If only my hardware could support it...

[-] smee@poeng.link 1 points 4 hours ago

It's possible to run local AI on a Raspberry Pi; it's all just a matter of speed and complexity. I run Ollama just fine on the two P-cores of my older i3 laptop. Granted, running it with CUDA acceleration (graphics card) on my main rig is far faster.
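
As a rough sketch of the low end, the smallest tags in the Ollama library run tolerably on CPU-only machines (llama3.2:1b is just one example):

# a 1B-parameter model needs only a couple GB of RAM and no GPU
ollama run llama3.2:1b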

[-] skarn@discuss.tchncs.de 5 points 22 hours ago

I can actually run some smaller models locally on my 2017 laptop (though I have increased the RAM to 16 GB).

You'd be surprised how much can be done with how little.
