Technically possible with a small enough model to work from. It's going to be pretty shit, but "working".
Now, if we were to go further down in scale, I'm curious how/if a 700MB CD version would work.
Or how many 1.44MB floppies you would need for the actual program and smallest viable model.
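Back-of-the-envelope sketch for the floppy question — the program and model sizes below are made-up placeholders, not real measurements; only the floppy capacity is a known constant:

```python
# How many 1.44 MB floppies to hold a program plus a tiny model?
# A "1.44 MB" floppy actually holds 1440 KiB = 1,474,560 bytes.
FLOPPY_BYTES = 1_474_560

def floppies_needed(total_bytes: int) -> int:
    # Ceiling division: a partially filled floppy still counts as a whole disk
    return -(-total_bytes // FLOPPY_BYTES)

# Illustrative guesses: a ~30 MiB inference binary plus a ~25 MiB model file
program = 30 * 1024 * 1024
model = 25 * 1024 * 1024
print(floppies_needed(program + model))  # → 40
```

Swap in the real file sizes of whatever runtime and quantized model you pick and the disk count falls straight out.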
squints
That says, "PHILLIPS DVD+R"
So we're looking at a 4.7GB model, or just a hair under that: the tiniest, most incredibly optimized implementation of <INSERT_MODEL_NAME_HERE>
Llama 3 8B, Phi-3 Mini, Mistral, Moondream 2, Neural Chat, Starling, Code Llama, Llama 2 Uncensored, and LLaVA would fit.
Just interested in the topic: did you build 🔨 this to run offline, privately?
I'm not an expert on them or anything, but feel free