NVIDIA’s new AI chatbot runs locally on your PC
(www.engadget.com)
Shame they leave GTX owners out in the cold again.
2xxx too. It's only available for 3xxx and up.
The whole point of the project was to use the Tensor cores. There are a ton of other implementations for regular GPU acceleration.
Just use Ollama with Ollama WebUI.
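For anyone who hasn't tried it, here's a minimal sketch of what talking to a local Ollama install looks like from Python. It assumes Ollama is already running on its default port (11434) and that you've pulled a model such as llama2 beforehand; the `ask` helper and the example prompt are just illustrative.

```python
# Minimal sketch: query a locally running Ollama server over its HTTP API.
# Assumes Ollama is installed, serving on localhost:11434 (the default),
# and that the "llama2" model has already been pulled (`ollama pull llama2`).
import requests

def ask(prompt: str, model: str = "llama2") -> str:
    """Send a prompt to the local Ollama API and return the generated text."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Explain what Tensor cores do, in one paragraph."))
```

Ollama WebUI just puts a chat front end on top of that same local API, so nothing in this setup depends on an RTX card.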
There were CUDA cores before RTX. I can run LLMs on my CPU just fine.
There are a number of local AI LLMs that run on any modern CPU. No GPU needed at all, let alone RTX.
This statement is so wrong. I have Ollama running the Llama 2 model decently on a GTX 970. Is it super fast? No. Is it usable? Yes, absolutely.
Source?