what the fuck is wrong with prices of memory
Look into used workstations and servers. They’re still relatively cheap and will do what you want.
A couple of months ago I tried running Ollama on a server blade with a ton of RAM and plenty of traditional CPU cores, and performance was still pretty horrible. Is there a better way to do it, or do you just need a GPU?
Try running MoE (mixture-of-experts) models, like Qwen3 30B A3B. Only about 3B of the 30B parameters are active per token, so CPU-only inference is far more bearable.
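A minimal sketch of what that looks like against Ollama’s local REST API, assuming the server is on its default port and a `qwen3:30b-a3b` tag is pulled (check `ollama list` for the exact name on your box):

```python
import json
import urllib.request

# Sketch: query a locally running Ollama server with a MoE model.
# Assumes the default port (11434) and that the tag below exists;
# the model name is a guess, check `ollama list` for what you have.
OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(prompt: str, model: str = "qwen3:30b-a3b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one blocking JSON response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why are MoE models faster on CPU?"))
```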
You need a GPU. Anything NVIDIA is fine, and more VRAM is better, but you can spill layers over into system RAM when the model doesn’t fit.
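For the VRAM/system-RAM split specifically, Ollama accepts a `num_gpu` option (the number of layers sent to the GPU); this is only a sketch, and the layer count is something you’d tune for your particular card:

```python
import json
import urllib.request

# Sketch: partial offload, so the layers that fit live in VRAM and the
# rest stay in system RAM. Assumes a local Ollama server; both the
# model tag and the layer count below are assumptions to tune.
payload = json.dumps({
    "model": "qwen3:30b-a3b",       # assumed tag, see `ollama list`
    "prompt": "Hello from a small GPU",
    "stream": False,
    "options": {"num_gpu": 20},     # e.g. ~20 layers on an 8 GB card
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```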
If you’re doing it yourself, consider how smaller models built to do one specific thing can do the job. For example: a small 8 GB video card can handle text inference, and its output can be piped to something like Kokoro on the CPU for TTS. Suddenly you have a talking LLM on an eight-year-old budget GPU.
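Here’s a hedged sketch of that pipeline: a small model through Ollama for the text, then Kokoro’s Python package for the speech. The `llama3.2:3b` tag and `af_heart` voice are assumptions; swap in whatever you actually have installed:

```python
import json
import urllib.request

import soundfile as sf            # pip install soundfile
from kokoro import KPipeline      # pip install kokoro

# Sketch of the "talking LLM" pipeline: small model on the GPU for
# text, Kokoro on the CPU for speech. Model tag and voice name are
# assumptions, not the commenter's exact setup.

def ask_llm(prompt: str) -> str:
    payload = json.dumps({
        "model": "llama3.2:3b",   # assumed small model that fits in 8 GB
        "prompt": prompt,
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload, headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

text = ask_llm("Give me a one-sentence fun fact.")

# Kokoro's pipeline yields one audio chunk per text segment at 24 kHz.
tts = KPipeline(lang_code="a")    # 'a' = American English
for i, (graphemes, phonemes, audio) in enumerate(tts(text, voice="af_heart")):
    sf.write(f"reply_{i}.wav", audio, 24000)
```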
Any tips on specific server models? Years ago I got a GIANT eBay 4U server that I intended to use as a security video server with a custom ISA 16-channel BNC capture card, but none of the Linux drivers worked. I then started writing my own driver based on Video4Linux, but the card was so old I couldn't find a datasheet for the I/O specs, and my scope and logic analyzer couldn't handle the frequencies for reverse engineering, so I just used it for a year as a 200 lb ~~paperweight~~ Samba server. If I could find something cheap and lightweight with modern hardware, that could possibly work. If it already came with the RAM, that would be a nice bonus.
The, uhh, 13th-gen Dell servers are fine. They support up to 1.5 TB of RAM and have lots of expansion slots.