submitted 8 months ago by starman@programming.dev to c/privacy@lemmy.ml
[-] swordsmanluke@programming.dev 3 points 8 months ago

It's not as good, but running small LLMs locally can work. I've been messing around with ollama, which makes it drop-dead simple to try out different models locally.
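For anyone curious, here's roughly what that looks like once ollama is installed. This is just a sketch: it assumes ollama is serving on its default port (11434) and that you've already pulled a model (the model name here is only an example).

```python
# Minimal sketch: querying a locally running ollama instance over its HTTP API.
# Assumes the ollama server is on its default port and a model like "llama3"
# has already been pulled (e.g. `ollama pull llama3`).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # any model you've pulled locally
        "prompt": "Explain Python list comprehensions in two sentences.",
        "stream": False,     # return one JSON object instead of a token stream
    },
    timeout=120,
)
print(resp.json()["response"])
```

You can also just run `ollama run <model>` in a terminal for an interactive chat, no code needed.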

You won't be running any model as powerful as ChatGPT, but for quick "Stack Overflow replacement" style questions I find it's usually good enough.

And before you write off the idea of local models completely, some recent studies indicate that our current models could be made orders of magnitude smaller for the same level of capability. Think Moore's law, but for shrinking the required connections within a model. I do believe we'll be able to run GPT-3.5-level models on consumer-grade hardware in the very near future. (Of course, by then GPT-7 may be running the world, but we live in hope.)

[-] HumanPerson@sh.itjust.works 1 points 8 months ago

GPT4All is another good local one. It runs on CPU, but you can use GPU acceleration. Some models even run on my crappy dual-core laptop.
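If it helps, here's roughly what that looks like with the GPT4All Python bindings (`pip install gpt4all`). The model file name below is just an example of one of the smaller CPU-friendly models; it gets downloaded on first use if it isn't already cached.

```python
# Rough sketch using the gpt4all Python bindings.
# The model name is illustrative; pick any model GPT4All offers.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # small model, runs fine on CPU
with model.chat_session():
    reply = model.generate("What does a Python decorator do?", max_tokens=200)
    print(reply)
```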
