this post was submitted on 10 Jan 2025
LocalLLaMA
The biggest issue will be your VRAM. If you don't have enough of it (which is very likely; even the 8B models I use need ~10 GB), you'll have to use a GGUF model with a runtime like llama.cpp, which can offload the layers that don't fit in VRAM to your system RAM and CPU, but that will heavily slow down inference.
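As a rough sketch of the trade-off, you can estimate how many layers fit on the GPU (the rest spill to CPU). The model size, layer count, and overhead below are illustrative assumptions, not measured values:

```python
# Sketch: estimate how many layers of a quantized GGUF model fit in VRAM.
# All numbers here are illustrative assumptions, not measured values.

def layers_that_fit(model_size_gb: float, n_layers: int,
                    vram_gb: float, overhead_gb: float = 1.5) -> int:
    """Return how many layers can be offloaded to the GPU.

    Assumes layers are roughly equal in size and reserves
    `overhead_gb` for the KV cache and runtime buffers.
    """
    per_layer_gb = model_size_gb / n_layers
    usable_gb = max(vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# e.g. an 8B model quantized to ~4.7 GB, 32 layers, a 6 GB GPU:
print(layers_that_fit(4.7, 32, 6.0))  # → 30
```

The result is the kind of number you would pass to llama.cpp's `-ngl` / `n_gpu_layers` option; any layers beyond it run on the CPU, which is where the slowdown comes from.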