Huh, so basically sidestepping the GPU issue entirely and essentially just using some other special piece of silicon with fast (but conventional) RAM. I still don't understand why you can't distribute a large LLM over many different processors, each holding a section of the parameters in memory.
Not exactly. Digits still uses a Blackwell GPU; it just uses unified RAM as virtual VRAM instead of dedicated VRAM. The GPU is probably a downclocked Blackwell. Speculation I've seen is that these are defective, repurposed Blackwells, which is good for us. By defective I mean they can't run at full speed, are projected to have the cracking-die problem, etc.
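As for distributing a large LLM over many processors: you actually can, and people do. That's roughly what llama.cpp's `--tensor-split` flag and its RPC backend do, and what pipeline/tensor parallelism means in the training world. The catch is that activations have to cross the interconnect at every split point, so PCIe or network bandwidth becomes the bottleneck instead of memory bandwidth. Here's a minimal sketch of the layer-sharding idea in PyTorch (the device names and layer sizes are hypothetical placeholders, not anyone's actual setup):

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer: 8 identical blocks instead of real layers.
layers = nn.ModuleList([nn.Linear(512, 512) for _ in range(8)])

# Split the layers across whatever devices exist; with two GPUs each one
# holds half the parameters, on a CPU-only machine it all stays on "cpu".
devices = ["cuda:0", "cuda:1"] if torch.cuda.device_count() >= 2 else ["cpu"]

def device_for(i: int) -> str:
    return devices[i * len(devices) // len(layers)]

for i, layer in enumerate(layers):
    layer.to(device_for(i))

def forward(x: torch.Tensor) -> torch.Tensor:
    # Each cross-device layer boundary ships the full activation tensor
    # over the interconnect; that transfer, not compute, is the bottleneck.
    for i, layer in enumerate(layers):
        x = layer(x.to(device_for(i)))
    return x

print(forward(torch.randn(1, 512)).shape)  # torch.Size([1, 512])
```

With a fast link like NVLink the hand-off cost is small; over Ethernet between separate boxes it dominates, which is part of why a single box with lots of unified RAM (like Digits) is attractive.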