495 points · submitted 1 year ago by lidd1ejimmy@lemmy.ml to c/memes@lemmy.ml
[-] neidu2@feddit.nl 47 points 1 year ago* (last edited 1 year ago)

Technically possible with a small enough model to work from. It's going to be pretty shit, but "working".

Now, if we were to go further down in scale, I'm curious how/if a 700MB CD version would work.

Or how many 1.44MB floppies you would need for the actual program and smallest viable model.
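
For a rough sense of scale, here's a back-of-the-envelope sketch; the model and binary sizes below are assumptions for illustration (in the ballpark of the tiniest usable quantized models plus a llama.cpp-style runtime), not measurements:

```python
import math

# Back-of-the-envelope media math for shipping an LLM offline.
# All sizes are assumed for illustration.
FLOPPY_MB = 1.44    # 3.5" high-density floppy
CD_MB = 700         # CD-R
DVD_MB = 4700       # single-layer DVD+R (decimal megabytes)

model_mb = 600      # hypothetical ~0.6 GB quantized tiny model
runtime_mb = 5      # a statically linked inference binary is a few MB

total_mb = model_mb + runtime_mb
print(f"Fits on a CD: {total_mb <= CD_MB}")                   # True
print(f"Floppies needed: {math.ceil(total_mb / FLOPPY_MB)}")  # 421
```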

[-] Naz@sh.itjust.works 16 points 1 year ago

squints

That says, "PHILLIPS DVD+R"

So we're looking at a 4.7GB model, or just a hair under the tiniest, most incredibly optimized implementation of <INSERT_MODEL_NAME_HERE>

[-] curbstickle@lemmy.dbzer0.com 13 points 1 year ago

llama 3 8b, phi 3 mini, Mistral, moondream 2, neural chat, starling, code llama, llama 2 uncensored, and llava would fit.
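
As a sanity check on that list: a quantized model file is roughly parameters × bits-per-weight ÷ 8, plus some overhead. A minimal sketch, assuming 4-bit quantization, approximate parameter counts, and a guessed ~0.3 GB overhead term (real file sizes vary):

```python
# Rough file-size estimate: params (billions) * 4 bits / 8, plus overhead.
# Parameter counts and overhead are approximations, not exact figures.
DVD_GB = 4.7  # single-layer DVD+R, decimal gigabytes

models = {
    "llama 3 8b": 8.0,
    "phi 3 mini": 3.8,
    "mistral 7b": 7.2,
    "moondream 2": 1.9,
}

for name, billions in models.items():
    size_gb = billions * 4 / 8 + 0.3   # 4-bit weights + ~0.3 GB overhead
    verdict = "fits" if size_gb <= DVD_GB else "too big"
    print(f"{name}: ~{size_gb:.1f} GB ({verdict})")
```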

[-] BudgetBandit@sh.itjust.works 1 points 1 year ago

Just interested in the topic: did you 🔨 offline privately?

[-] curbstickle@lemmy.dbzer0.com 1 points 1 year ago

I'm not an expert on them or anything, but feel free

[-] kindenough@kbin.earth 8 points 1 year ago* (last edited 1 year ago)
[-] Num10ck@lemmy.world 7 points 1 year ago

ELIZA was pretty impressive for the 1960s, as a chatbot playing psychotherapist.
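
ELIZA worked by matching keyword patterns and reflecting the user's own words back as a question. A toy sketch of the idea (not Weizenbaum's actual 1966 script, which used ranked keywords and richer decomposition rules):

```python
import random
import re

# Toy ELIZA-style responder: match a pattern, reflect pronouns,
# fall back to a generic therapist prompt.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(line: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(reflect(m.group(1)))
    return random.choice(["Please go on.", "How does that make you feel?"])

print(respond("I feel like my code never works"))
# -> Why do you feel like your code never works?
```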

[-] lidd1ejimmy@lemmy.ml 4 points 1 year ago

yes i guess it would be a funny experiment for just a local model

[-] veroxii@aussie.zone 3 points 1 year ago

pkzip -& a:\chatgpt.zip c:\chatgpt\*.*
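
(For the curious: if memory serves, `-&` was PKZIP 2.x's disk-spanning switch, which prompted for a fresh disk in A: as each floppy filled up, so a one-liner like this really would span the archive across however many floppies it took.)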
