Proton's biased article on Deepseek
(lemmy.ml)
You can run one of the distilled imitations of the DeepSeek R1 model, but not the actual one unless you literally buy a dozen of whatever NVIDIA's top GPU is at the moment.
A server-grade CPU with a lot of RAM and memory bandwidth would work reasonably well, and cost "only" ~$10k rather than $100k+...
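If you do go the CPU route, a minimal sketch of what that looks like with the llama-cpp-python bindings is below. The GGUF filename, context size, and thread count are placeholders for whatever quant and hardware you actually have, not a claim about what the full 671B model needs:

```python
# Minimal sketch: running a quantized GGUF on CPU with llama-cpp-python.
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Q4_K_M.gguf",  # placeholder: point at your actual quant
    n_ctx=4096,        # context window; bigger costs more RAM
    n_threads=32,      # roughly match your physical core count
    n_gpu_layers=0,    # 0 = pure CPU; raise this to offload layers into VRAM
    use_mmap=True,     # mmap the weights rather than loading everything into RAM
)

out = llm("Why is the sky blue?", max_tokens=128)
print(out["choices"][0]["text"])
```

The `use_mmap=True` part is what makes the "lots of RAM and memory bandwidth" tradeoff visible: the OS pages weights in as needed, so RAM becomes a cache and bandwidth becomes the bottleneck.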
I saw posts about people running it well enough for testing purposes straight off an NVMe drive.
Can you link that post?
https://old.reddit.com/r/LocalLLaMA/comments/1idseqb/deepseek_r1_671b_over_2_toksec_without_gpu_on/
That's cool! I'm really interested to know how many tokens per second you can get with a really good U.2 drive. My gut says it won't actually beat the 24 GB VRAM + 96 GB RAM cache setup that user already tested, though.
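If anyone with a fast U.2 drive wants to test, here's a rough throughput check using the same llama-cpp-python bindings; the model path is a placeholder, and `use_mmap=True` is what lets the weights stream from the drive instead of having to fit in RAM:

```python
# Rough tok/s benchmark: compare a drive-backed mmap run against a RAM/VRAM-cached one.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Q4_K_M.gguf",  # placeholder: your GGUF on the U.2/NVMe drive
    n_ctx=2048,
    use_mmap=True,
)

start = time.perf_counter()
out = llm("Explain mixture-of-experts models in one paragraph.", max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]  # actual tokens produced
print(f"~{generated / elapsed:.2f} tok/s")
```

Worth running it twice: the first pass measures cold reads from the drive, the second mostly measures the OS page cache, which is closer to what the RAM-heavy setup sees.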
Thanks!