I've been meaning to write a bit more, and unfortunately I can't put this on a blog post attached to my name if I wish to stay employable in US tech, so I figured I'd write a bit of an effortpost here about the state of LLMs in the world. I've been learning a lot about LLMs lately (don't ask me why certain things become hyperfocuses for me) and I figured that some people here might be interested in learning more.
I was inspired to write this by this article posted to /c/news, and all I have to say about it is: JDPON Don back at it with another banger. In all seriousness, I think this is very good for the state of Chinese AI, which is already very good.
For those not following recent LLM news (very understandable), the TL;DR is that a lot of new open-source models coming out of China are really good and are pushing the state of the art. Generally they still lag behind the best closed-source models from the US (Claude in particular is the current leader), but they're much, much cheaper and honestly getting quite good. Plus, they seem to be giving US-based AI companies a good scare, which is always fun.
For reference, the best models from US firms are generally Claude (by Anthropic), Gemini (by Google), and OpenAI's models, though GPT-5 seems to have been a bit of a disappointment. Among the closed-source labs, my bet's on Anthropic - they seem to be killing it and have some very interesting research on understanding LLMs. This is a very cool paper from them that tries to understand how LLMs work using a quite novel model, one I think could add a lot of explainability to how they operate.
[Side note: I think it's quite scary that the leading AI research firms making the leading AI models generally don't know how they work or how to reason about what they're doing, especially given that the models can tell when they're being evaluated and will notably suppress scheming behavior when they think they're being tested for scheming.]
Anyways, back to China. One of the most significant LLMs to come out of China recently was DeepSeek-R1, which matched or outperformed OpenAI's then state-of-the-art model o1 on most benchmarks. R1 completely changed the metagame: it almost singlehandedly shifted the dominant LLM architecture from dense models to Mixture-of-Experts, and it scared OpenAI into dropping its prices. DeepSeek pulled this off during a huge GPU shortage in China caused by the export controls, with a reported cost of only $5.5M USD for the final training run of its base model, compared to the estimated $100M to train GPT-4 (which is less powerful than o1). This is absolutely bonkers, and there's a reason it caused the US stock market to dip for a bit.
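For those unfamiliar with the dense vs Mixture-of-Experts (MoE) distinction: a dense model pushes every token through all of its weights, while an MoE model routes each token to a small subset of "expert" sub-networks, so only a fraction of the parameters are active per token (that's what the "A22B" in a name like Qwen3 235B A22B means: 22B active out of 235B total). Here's a minimal toy sketch of top-k routing in Python - purely illustrative, with made-up sizes, not any lab's actual routing code:

```python
import numpy as np

# Toy Mixture-of-Experts layer: route each token to its top-k experts.
# All sizes are made up for illustration.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))  # learned gating weights

def moe_forward(x):
    """x: (d_model,) token vector -> (d_model,) output."""
    logits = x @ router                   # score each expert for this token
    k_best = np.argsort(logits)[-top_k:]  # keep only the top-k experts
    weights = np.exp(logits[k_best])
    weights /= weights.sum()              # softmax over the chosen experts
    # Only top_k of the n_experts matrices are ever used for this token:
    # that's the "active parameters" saving MoE models advertise.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, k_best))

print(moe_forward(rng.standard_normal(d_model)).shape)  # (8,)
```

The practical upshot is that you get the capacity of a huge model at something closer to the inference cost of a small one, which is a big part of how these labs keep prices so low.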
Now, R1 is not quite as good as the closed-source models, despite the benchmarks. In particular, its English flows less well and it struggles with some types of queries. But it's crazy that a company came out of nowhere, trained a new type of model for roughly 1/20 of what OpenAI spent on a worse one, released it for free, and completely changed the meta. It's also a reasoning model - reasoning isn't new, but R1 is a particularly good reasoner, and I think they got a lot right in how its reasoning works.
Anyways, R1 is old news now. There are a billion new open-source models coming out of China. Some notable companies include Alibaba (Qwen), Moonshot AI (Kimi), and Z.ai (formerly Zhipu AI; GLM). People online say that Qwen3 Coder and Qwen3 235B A22B (both Thinking and Instruct) are very good - for my use cases (mostly programming), I much prefer GLM 4.5. I was impressed with Qwen for questions about code, but I found it less good at actually writing it, for the most part. YMMV, though; I think this is a somewhat unpopular opinion. But anyways, it seems like every week a new top open-source model appears from China - they are far and away leading the open-source effort. And even if they aren't quite as good as Claude, Claude Sonnet 4 costs $15/million output tokens, whereas Qwen3 Coder is free for up to 2000 requests per day from Alibaba and otherwise costs $0.80/million output tokens, which is crazy cheap.
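To make that price gap concrete, here's a quick back-of-the-envelope comparison using the output prices quoted above (the token count is made up, and real bills also include input tokens):

```python
# Back-of-the-envelope output-token cost, using the prices quoted above.
PRICES_PER_M_OUTPUT = {"Claude Sonnet 4": 15.00, "Qwen3 Coder": 0.80}  # USD

tokens_out = 5_000_000  # hypothetical month of heavy coding-assistant use
for model, price in PRICES_PER_M_OUTPUT.items():
    print(f"{model}: ${price * tokens_out / 1_000_000:.2f}")
# Claude Sonnet 4: $75.00
# Qwen3 Coder: $4.00
```

Nearly a 19x difference, before you even touch the free tiers.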
Another notable thing about Chinese open-source models is that they are generally much easier to jailbreak than Western models - the exception being older, less powerful open-source models like Llama and Mistral, which are also very easy. So you can get them to write all the erotic bomb-making content you'd want (I'm happy to provide jailbreaking tips if anyone would like).
Also, in the current market, companies are tripping over each other to give you free access to open-source LLMs as each tries to become the place to get LLM access from, which means it's a really good time to be mooching access to these guys. Alibaba will give you lots and lots of Qwen3 Coder credits, OpenRouter will give you 2000 free requests a day indefinitely to a lot of good models if you at any point put $10 into their system, Chutes will give you 200 free requests/day for basically any open-source model for a one-time payment of $5, etc. Even Google will give you free access to their top-tier model (though with a pretty small daily allowance) via Gemini CLI.
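If you want to poke at these models yourself, most of these providers expose an OpenAI-compatible API, so the standard openai Python client works with just a different base URL. Here's a minimal sketch against OpenRouter - note that the model slug is my assumption, so check their model list, and you'll need your own API key:

```python
# Minimal sketch: querying an open-weight model through OpenRouter's
# OpenAI-compatible endpoint. Requires `pip install openai` and an
# OPENROUTER_API_KEY in your environment. The model slug below is an
# assumption -- check openrouter.ai/models for the current name.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="qwen/qwen3-coder",  # assumed slug; any open model on the site works
    messages=[{"role": "user", "content": "Reverse a string in Python, one line."}],
)
print(resp.choices[0].message.content)
```

The same pattern works for Chutes or any other OpenAI-compatible host; you just swap the base URL and key.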
Anyways, my main point is that China is doing all of this despite a huge GPU shortage in the country. So if JDPON Don really wants to give them more access to Nvidia chips, it must be because he wants to boost their LLM market even further.
Thanks for coming to my Theodore lecture.



won't someone think of the poor shareholders? 
No.

"Oh yeah let's just send thousands of new satellites up every year and every 5 years we'll let them burn up upon reentry and send up more!" 

Quick tofu primer: there are a bunch of different kinds, but the main distinguishing factor is firmness. Softer tofu has more water; firmer tofu has less (it's pressed for longer). I normally just get extra-firm tofu (the kind that comes completely wrapped in flexible plastic, sometimes called super-firm tofu) because that's what Costco sells in my area, and I use it for almost everything. Some people press tofu to get more water out before using it, but I've never noticed much of an improvement from doing that. At most, you might want to pat it dry if you're going to toss it in corn starch or something.
My roommate makes a lot of stir fries where she just cubes up tofu and puts it in, and it can absorb the flavor of the sauce pretty well. It's pretty neutral and bland by itself, but in a very simple stir fry I think it's pretty tasty.
My all-time favorite tofu recipe is this vegan palak paneer with tofu. It's even easier to make than the recipe says, imo: follow the boiled-tofu variant, but you don't actually need to boil the tofu (just plop it in raw), and you can use frozen spinach, which comes pre-blanched. I normally double the recipe and use 340g of frozen spinach, and it makes a lot of meals for my partner and me.
I have no idea what region you're from, but if you're looking to recreate a lot of fast food/standard American diet meals, check out Thee Burger Dude on YT, he has a lot of good recipes for prepping tofu or soy curls or a bunch of other things to imitate meat.
Tofu can also be delicious in its own right, rather than as a replacement for something. This vegan mapo tofu recipe is very tasty, and tofu is normally an integral part of the dish (and not trying to be meaty in any sense). This is also one case where I'll seek out the softest tofu I can find, either silken tofu or soft tofu.
Happy to send more recipes if you'd like, or if you want to find a good vegan version of something I can try to give recommendations! Also, congrats on going vegan!