submitted 1 month ago by JOMusic@lemmy.ml to c/technology@lemmy.world

Article: https://proton.me/blog/deepseek

Calls it "Deepsneak", failing to make it clear that the reason people love Deepseek is that you can download and it run it securely on any of your own private devices or servers - unlike most of the competing SOTA AIs.

I can't speak for Proton, but the last couple of weeks have shown some very clear biases coming out.

[-] AstralPath@lemmy.ca 0 points 1 month ago

Has anyone actually analyzed the source code thoroughly yet? I've seen a ton of reporting on its open source nature but nothing about the detailed nature of the source.

FOSS = safe only if the code has been audited in depth.

[-] Fubarberry@sopuli.xyz 1 points 1 month ago

I haven't looked into DeepSeek specifically, so I could be mistaken, but a lot of the time when a model is called "open source" it is really just open weights. You can download it or train other models off of it, but you can't actually view any kind of source code for how the model works.

An audit isn't really possible.
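
To make that concrete: "open weights" in practice means you can pull down the released files and nothing more. A rough Python sketch of what that download looks like (the huggingface_hub library and the repo id here are my assumptions, not anything confirmed in this thread):

```python
# Rough sketch of what "open weights" means in practice: you can fetch the
# released artifacts, but there is no training pipeline or dataset to audit.
# The huggingface_hub package and the repo id are assumptions for illustration.
from huggingface_hub import snapshot_download

# Downloads the published files (config, tokenizer, *.safetensors weight shards).
local_dir = snapshot_download(repo_id="deepseek-ai/DeepSeek-R1")
print(f"Weights and config downloaded to: {local_dir}")
```

What you get is tensors and config files, not the code or data that produced them, which is why a source-style audit isn't really on the table.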

[-] L_Acacia@lemmy.ml 2 points 1 month ago

It is open-weight; we don't have access to the training code or the dataset.

That being said, it should be safe for your computer to run DeepSeek's models, since the weights are .safetensors files, which should block any code execution from code injected into the model weights.
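
Roughly, that's because loading a .safetensors file only parses raw tensor data, unlike pickle-based checkpoints, which can run arbitrary code on load. A quick sketch (the package names and the file name are assumptions, not details from this thread):

```python
# Sketch of why .safetensors is considered safe to load: it is a plain tensor
# container parsed without pickle, so loading it cannot execute arbitrary code
# the way torch.load() on a pickle checkpoint can.
# Assumes the safetensors and torch packages; the file name is a placeholder.
from safetensors.torch import load_file

# Parses tensor data only; no deserialization of Python objects, no code runs.
state_dict = load_file("model.safetensors")
print(f"Loaded {len(state_dict)} tensors, no code executed.")
```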

[-] red@sopuli.xyz -1 points 1 month ago

It's been noted that the company's apps do send each and every keystroke back to China, though.

Who's to say how poisoned the data really is.

[-] AstralPath@lemmy.ca -1 points 1 month ago

Then by default it should never be considered safe. Honestly, this "open" release makes me wonder about ulterior motives.

[-] rumba@lemmy.zip -1 points 1 month ago

That's not quite it either.

The model itself is just a giant ball of math. They made a thing that can pass an English question through the collected knowledge of much of humanity a few dozen times and have it crap out a reasonable English answer.

The open source part is kind of a misnomer. They explained how they cooked the meal but not the ingredient list.

To complete the analogy, their astounding claim is that they managed to cook the meal with less fire than anyone else has by a factor of like 1000.

But the model itself is inherently safe. It's not like it's a binary that can carry a virus or do crazy crap. Even convincing it to give planned nefarious answers is frankly beyond our capabilities so far.

The dangerous part that Proton is looking at, which honestly is a given for any hosted AI, is the hosted server side of things. You make your requests to their servers, their servers feed the requests into the model, and they return you the output.

If you ask their web servers for information about Tiananmen Square, they will block you.
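
In other words, the hosted flow looks roughly like the sketch below, where everything, including any filtering, happens on their side before you see the answer (the endpoint URL, model name, and API key are assumptions for illustration, not confirmed details):

```python
# Sketch of the hosted flow: your prompt travels to their servers, the model
# runs there, and any filtering they apply happens server-side.
# Endpoint, model name, and key below are assumed for illustration only.
import requests

resp = requests.post(
    "https://api.deepseek.com/chat/completions",   # assumed OpenAI-style endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "What happened at Tiananmen Square?"}],
    },
    timeout=60,
)
# Whatever comes back has already passed through their server-side controls.
print(resp.json())
```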

You can, however, download the model yourself and run it yourself, and there aren't any security issues there.

It will tell you anything that you need to know about Tiananmen Square.
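
For example, something like this keeps everything on your own machine (assuming a local Ollama install with a distilled DeepSeek-R1 variant pulled and the ollama Python package; those setup details are my assumptions, since the full model is far too large for most home hardware):

```python
# Sketch of running a model locally instead, so no prompt ever leaves your
# machine. Assumes Ollama is installed and a distilled DeepSeek-R1 variant has
# been pulled; the model tag is an assumption, pick one that fits your hardware.
import ollama

reply = ollama.chat(
    model="deepseek-r1:7b",  # assumed distilled tag
    messages=[{"role": "user", "content": "What happened at Tiananmen Square in 1989?"}],
)
# Everything ran locally; the answer is whatever the weights produce, with no
# hosting provider in the loop to filter it.
print(reply["message"]["content"])
```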
