NVIDIA’s new AI chatbot runs locally on your PC
(www.engadget.com)
There were CUDA cores before RTX. I can run LLMs on my CPU just fine.
There are a number of local LLMs that run on any modern CPU. No GPU needed at all, let alone RTX.
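As a hedged illustration of the comment above: tools like Ollama fall back to CPU inference when no supported GPU is present, so a small model can be pulled and run with nothing but the CLI installed. This is a command sketch, not output from the commenter's setup, and the model tag is just an example.

```shell
# Assumes Ollama is installed (https://ollama.com); it uses the CPU
# automatically when no supported GPU is detected.
ollama pull llama2        # download the model weights (example tag)
ollama run llama2 "Explain CUDA cores in one sentence."
```

Smaller quantized models (e.g. 7B at 4-bit) are the usual choice for CPU-only machines, since inference speed scales with model size.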
This statement is so wrong. I have Ollama running the llama2 model decently on a GTX 970. Is it super fast? No. Is it usable? Yes, absolutely.
Source?