Image is blocked. Try downloading and uploading it to lemmy instead of hotlinking to reddit perhaps.
I think "blitzkrieg" matches somewhat: don't stop to engage every stronghold, just drive around them, isolate them, and cut off their support networks.
In Soviet America, a wrong turn takes your life.
The real meat of the story is in the referenced blog post: https://blog.codingconfessions.com/p/how-unix-spell-ran-in-64kb-ram
TL;DR
If you're short on time, here's the key engineering story:
McIlroy's first innovation was a clever linguistics-based stemming algorithm that reduced the dictionary to just 25,000 words while improving accuracy.
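To give a flavor of the idea (this is not McIlroy's actual rule set, just a naive sketch): instead of storing every inflected form, you store stems and try to peel known affixes off the input word before looking it up.

```python
# Naive affix-stripping sketch (illustrative only, not McIlroy's rules):
# reduce a word to a possible dictionary stem by peeling known suffixes.
SUFFIXES = ["ing", "ed", "ly", "s"]

def candidate_stems(word):
    yield word
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) > len(suf) + 2:
            yield word[: -len(suf)]

dictionary = {"walk", "quick"}

def check(word):
    # A word is accepted if any candidate stem is in the dictionary.
    return any(s in dictionary for s in candidate_stems(word))

print(check("walking"))  # True: "walking" strips to "walk"
print(check("quickly"))  # True: "quickly" strips to "quick"
print(check("walkxyz"))  # False: no stem matches
```

Real stemming rules are much subtler (they have to handle spelling changes like "hoping" -> "hope"), which is where the linguistics came in.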
For fast lookups, he initially used a Bloom filter—perhaps one of its first production uses. Interestingly, Dennis Ritchie provided the implementation. They tuned it to have such a low false positive rate that they could skip actual dictionary lookups.
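For those who haven't met Bloom filters: it's a bit array plus k hash functions; you set k bits per word on insert, and a lookup says "definitely absent" or "probably present". A minimal sketch (nothing like Ritchie's actual implementation, which had to fit in 64 kB):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch. Parameters are arbitrary, not spell's."""

    def __init__(self, num_bits, num_hashes):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, word):
        # Derive k bit positions from salted SHA-256 digests of the word.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{word}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.num_bits

    def add(self, word):
        for p in self._positions(word):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, word):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(word))

bf = BloomFilter(num_bits=400_000, num_hashes=11)
for w in ["spell", "unix", "dictionary"]:
    bf.add(w)
print("spell" in bf)  # True (a Bloom filter never gives false negatives)
print("xqzzy" in bf)  # almost certainly False
```

The tuning trade-off is exactly what the post describes: more bits and more hashes push the false positive rate low enough that you never need to consult the real dictionary.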
When the dictionary grew to 30,000 words, the Bloom filter approach became impractical, leading to innovative hash compression techniques.
They computed that 27-bit hash codes would keep collision probability acceptably low, but needed compression.
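The arithmetic behind those two numbers is easy to check. With 30,000 words hashed into a 27-bit space, a random misspelling lands on some dictionary hash with probability n / 2^27, and the information-theoretic floor for storing n such random values works out to about log2(2^27 / n) + log2(e) bits per word:

```python
import math

n = 30_000        # dictionary size
bits = 27         # hash width
space = 2 ** bits

# Chance a random non-word hashes onto some dictionary word:
false_positive = n / space
print(f"false positive rate = 1/{space // n}")  # roughly 1 in 4473

# Entropy lower bound for storing n random values from a space of 2**27,
# using the approximation log2(C(space, n)) / n = log2(space/n) + log2(e):
min_bits = math.log2(space / n) + math.log2(math.e)
print(f"theoretical minimum = {min_bits:.2f} bits/word")  # 13.57
```

That 13.57 figure is the same theoretical minimum the post compares Golomb coding against.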
McIlroy's solution was to store differences between sorted hash codes, after discovering these differences followed a geometric distribution.
Using Golomb's code, a compression scheme designed for geometric distributions, he achieved 13.60 bits per word—remarkably close to the theoretical minimum of 13.57 bits.
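For anyone curious what gap coding looks like in practice, here is a sketch using the Rice code, the power-of-two special case of Golomb's code (the real spell tuned a general Golomb parameter to the gap distribution, so treat this as illustrative):

```python
def rice_encode(gaps, k):
    """Rice code (Golomb code with M = 2**k):
    quotient in unary, then k-bit binary remainder."""
    out = []
    for g in gaps:
        q, r = g >> k, g & ((1 << k) - 1)
        out.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(out)

def rice_decode(bitstring, k):
    gaps, i = [], 0
    while i < len(bitstring):
        q = 0
        while bitstring[i] == "1":  # unary quotient
            q += 1
            i += 1
        i += 1                      # skip the terminating 0
        r = int(bitstring[i:i + k], 2)
        i += k
        gaps.append((q << k) | r)
    return gaps

# Store sorted hash codes as gaps between neighbours (made-up values):
codes = sorted([101, 4500, 9000, 9061, 20000])
gaps = [codes[0]] + [b - a for a, b in zip(codes, codes[1:])]
encoded = rice_encode(gaps, k=12)
assert rice_decode(encoded, k=12) == gaps
```

Because the gaps are geometrically distributed with mean around 4474, a remainder of roughly 12 bits plus a short unary quotient averages out near those 13.6 bits per word.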
Finally, he partitioned the compressed data to speed up lookups, trading a small memory increase (final size ~14 bits per word) for significantly faster performance.
There was something wrong here, but the... right kind of wrong.
Looking back, those times were an incredible desert of titillation compared to the desserts of today.
Text below, for those trying to avoid Twitter:
Most people probably don't realize how bad news China's Deepseek is for OpenAI.
They've come up with a model that matches and even exceeds OpenAI's latest model o1 on various benchmarks, and they're charging just 3% of the price.
It's essentially as if someone had released a mobile on par with the iPhone but was selling it for $30 instead of $1000. It's this dramatic.
What's more, they're releasing it open-source so you even have the option - which OpenAI doesn't offer - of not using their API at all and running the model for "free" yourself.
If you're an OpenAI customer today you're obviously going to start asking yourself some questions, like "wait, why exactly should I be paying 30X more?". This is pretty transformational stuff, it fundamentally challenges the economics of the market.
It also potentially enables plenty of AI applications that were just completely unaffordable before. Say for instance that you want to build a service that helps people summarize books (random example). In AI parlance the average book is roughly 120,000 tokens (since a "token" is about 3/4 of a word and the average book is roughly 90,000 words). At OpenAI's prices, processing a single book would cost almost $2 since they charge $15 per 1 million tokens. Deepseek's API however would cost only $0.07, which means your service can process about 30 books for $2 vs just 1 book with OpenAI: suddenly your book summarizing service is economically viable.
Or say you want to build a service that analyzes codebases for security vulnerabilities. A typical enterprise codebase might be 1 million lines of code, or roughly 4 million tokens. That would cost $60 with OpenAI versus just $2.20 with DeepSeek. At OpenAI's prices, doing daily security scans would cost $21,900 per year per codebase; with DeepSeek it's $803.
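The arithmetic in those two examples checks out if you take the post's quoted prices ($15 per million tokens for OpenAI; the DeepSeek rate of $0.55 per million is inferred from the post's $2.20 for 4 million tokens):

```python
# Price figures as quoted in the post above; DeepSeek's per-million rate
# is back-computed from its $2.20-per-4M-tokens example.
OPENAI_PER_M = 15.00
DEEPSEEK_PER_M = 0.55

def cost(tokens, per_million):
    return tokens * per_million / 1_000_000

book_tokens = 120_000        # ~90,000 words * 4/3 tokens per word
print(f"book via OpenAI:   ${cost(book_tokens, OPENAI_PER_M):.2f}")    # $1.80
print(f"book via DeepSeek: ${cost(book_tokens, DEEPSEEK_PER_M):.2f}")  # $0.07

codebase_tokens = 4_000_000  # ~1M lines of code
print(f"yearly scans via OpenAI:   ${cost(codebase_tokens, OPENAI_PER_M) * 365:,.0f}")    # $21,900
print(f"yearly scans via DeepSeek: ${cost(codebase_tokens, DEEPSEEK_PER_M) * 365:,.0f}")  # $803
```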
So basically it looks like the game has changed. All thanks to a Chinese company that just demonstrated how U.S. tech restrictions can backfire spectacularly - by forcing them to build more efficient solutions that they're now sharing with the world at 3% of OpenAI's prices. As the saying goes, sometimes pressure creates diamonds.
Last edited 4:23 PM · Jan 21, 2025 · 932.3K Views
That they're all having sex in a spontaneous orgy. It's... weird.
There is no such thing as a pineapple tree. That's an AI image.
Pineapples grow in an even more ridiculous way.
It also propels itself forward by discharging high velocity watermarks.
Citing measurements made at the 1926 Iowa State Fair, they reported that the peak power over a few seconds has been measured to be as high as 14.88 hp (11.10 kW) and also observed that for sustained activity, a work rate of about 1 hp (0.75 kW) per horse is consistent with agricultural advice from both the 19th and 20th centuries [...]
Sounds to me like the 1 hp unit is fair, after all.
I doubt it, that would be too much of a coincidence to have two people named Torvalds in one picture.
So it looks like the closer the server, the less efficient (more convoluted) the path to it is. Very cool.