The Rule (lemmy.ml)
submitted 3 months ago by roon@lemmy.ml to c/196@lemmy.blahaj.zone
[-] PriorityMotif@lemmy.world 7 points 3 months ago

You can probably find a used workstation/server capable of taking 256GB of RAM for a few hundred bucks and fit at least a few GPUs in there. You'll probably spend a few hundred on top of that to max out the RAM. Performance doesn't scale much past 4 GPUs because the CPU has a hard time keeping up with the traffic. So for a budget build you're looking at around $2k unless you have a cheap/free local source.
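
As a rough sanity check on what 256GB buys you, here's a back-of-envelope sketch in Python (the quantization levels and the ~20% overhead factor are illustrative assumptions, not measurements):

```python
# Back-of-envelope: which models fit in 256 GB of system RAM?
# All figures are rough assumptions for illustration.

def model_ram_gb(params_billions: float, bits_per_weight: float,
                 overhead: float = 1.2) -> float:
    """RAM needed for the weights, with ~20% headroom (assumed)
    for KV cache, activations, and runtime buffers."""
    return params_billions * (bits_per_weight / 8) * overhead

for params in (70, 180, 400):
    for bits in (16, 8, 4):
        need = model_ram_gb(params, bits)
        verdict = "fits" if need <= 256 else "too big"
        print(f"{params}B @ {bits}-bit: ~{need:.0f} GB -> {verdict} in 256 GB")
```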

[-] areyouevenreal@lemm.ee 3 points 3 months ago

Without sufficient VRAM it probably couldn't be GPU-accelerated effectively. Regular RAM is for the CPU. You can swap data between the two pools, and I think some AI engines do this to run larger models, but it's a slow process and you probably wouldn't gain much from it unless you're using huge GPUs with lots of VRAM. PCIe just isn't as fast as local RAM or VRAM, so in practice the model would still run on the CPU, just very slowly.
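
To put rough numbers on that, a minimal sketch assuming round-number bandwidths (typical ballpark figures, not benchmarks of any specific hardware):

```python
# Why streaming weights over PCIe hurts: rough bandwidth comparison.
# Bandwidth figures are round-number assumptions for illustration.

BANDWIDTH_GBPS = {
    "PCIe 4.0 x16": 32,   # theoretical one-direction peak
    "DDR4 (8ch)":   200,  # typical 8-channel server memory
    "GDDR6X VRAM":  900,  # high-end consumer GPU, roughly
}

layer_gb = 2.0  # assumed size of one layer of a big quantized model

for link, gbps in BANDWIDTH_GBPS.items():
    ms = layer_gb / gbps * 1000
    print(f"{link}: ~{ms:.1f} ms to move a {layer_gb} GB layer")
```

If the GPU has to pull each layer over PCIe for every token, that ~60 ms per layer dominates everything else, which is why the work effectively falls back to the CPU.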

[-] AdrianTheFrog@lemmy.world 1 point 3 months ago

PCIe will probably be the bottleneck well before the number of GPUs is, if you're planning on storing the model in RAM. Probably better to get a high-end server CPU, which gives you more memory channels and PCIe lanes.
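
One way to see it, reusing the same assumed round numbers as above (hypothetical lane and channel counts, not a specific board):

```python
# If the model lives in system RAM, every token the GPUs generate
# must stream weights over PCIe, so aggregate PCIe demand grows with
# GPU count while RAM bandwidth stays fixed. Round-number assumptions.

ram_bandwidth_gbps = 200   # 8-channel DDR4 server board (assumed)
pcie_per_gpu_gbps = 32     # PCIe 4.0 x16 per GPU (assumed)

for n_gpus in range(1, 9):
    demand = n_gpus * pcie_per_gpu_gbps
    feed = min(demand, ram_bandwidth_gbps)
    regime = "RAM-bound" if demand > ram_bandwidth_gbps else "PCIe-bound"
    print(f"{n_gpus} GPUs: PCIe demand ~{demand} GB/s, "
          f"effective feed ~{feed} GB/s ({regime})")
```

Under these assumptions the memory controller saturates after a handful of GPUs, so extra memory channels on a server CPU buy you more than extra GPUs do.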
