203 points · submitted 19 Jan 2024 by throws_lemy@lemmy.nz to c/technology@lemmy.world
evranch@lemmy.ca 2 points 10 months ago

Start off with the TinyLlama model; it's under 1 GB. It will even run on a Raspberry Pi, so on real PCs it rips even on CPU. You need a "quantized" model; they are distributed as GGUF files.
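If you want to poke at a GGUF file from a script, the llama-cpp-python bindings (which wrap the same C++ core as llama.cpp) are an easy first test. A minimal sketch, assuming you've already downloaded a TinyLlama GGUF somewhere; the path is a placeholder:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Point model_path at whatever GGUF file you downloaded
llm = Llama(model_path="./tinyllama.gguf", n_ctx=2048)  # TinyLlama's context window is 2048 tokens

out = llm("Q: What is MQTT? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```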

I would recommend 5-bit quantization. The fewer bits, the stupider the model, to put it simply, and TinyLlama is already pretty stupid. But it's still impressive for what it is, and you can learn the jargon, which is the hard part.
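If you'd rather script the download too, huggingface_hub can fetch a specific quant directly. A sketch assuming TheBloke's GGUF conversion of TinyLlama-1.1B-Chat; the repo and exact filename may have changed since this was posted, so check the model card:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# The quant level is in the filename; swap in Q4_K_M or Q2_K
# for smaller (and stupider) variants.
path = hf_hub_download(
    repo_id="TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
    filename="tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf",
)
print(path)  # local cache path you can hand to llama.cpp
```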

The fastest software for running the model is llama.cpp, a rewrite of the original Python inference code in C++. Use -ngl to offload layers from the CPU to the GPU.
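In the Python bindings the equivalent of -ngl is the n_gpu_layers argument. A sketch, assuming a GPU-enabled build of llama-cpp-python (CUDA, Metal, etc.); on a CPU-only build the setting is ignored:

```python
from llama_cpp import Llama

# n_gpu_layers mirrors llama.cpp's -ngl flag; -1 offloads every layer.
# Rough CLI equivalent (binary name varies by llama.cpp version):
#   ./main -m tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf -p "Hello" -ngl 99
llm = Llama(
    model_path="./tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf",
    n_gpu_layers=-1,
)
```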

I'm not sure what system you're using; most AI development is done on Linux, so if you're on Windows I can't guarantee anything will work.

I'm working right now on a voice assistant for my house that can read all my MQTT data and give status reports. It's neat when you get it running, and fun to tweak with prompts to see what it can do. TinyLlama can't seem to reliably handle MQTT and JSON, but slightly smarter models manage it with ease.
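For a rough idea of the shape of that assistant: subscribe to the broker, cache the latest payload per topic, then stuff the readings into a prompt. The topic names, broker address, and payload format here are made up for illustration, and I'm assuming paho-mqtt for the client:

```python
# pip install paho-mqtt llama-cpp-python
import paho.mqtt.client as mqtt
from llama_cpp import Llama

llm = Llama(model_path="./model.Q5_K_M.gguf", n_ctx=2048)
latest = {}  # topic -> last payload seen

def on_message(client, userdata, msg):
    # Payloads are hypothetical JSON like {"temp": 21.5, "humidity": 40}
    latest[msg.topic] = msg.payload.decode()

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x wants a CallbackAPIVersion arg
client.on_message = on_message
client.connect("localhost", 1883)  # broker address is a placeholder
client.subscribe("home/#")
client.loop_start()

def status_report():
    readings = "\n".join(f"{t}: {p}" for t, p in latest.items())
    prompt = (f"Sensor readings:\n{readings}\n\n"
              "Give a one-sentence status report on the house:")
    out = llm(prompt, max_tokens=96)
    return out["choices"][0]["text"].strip()
```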
