Chris Lattner used to be somebody. 🫠
It's a shitty compiler, made to pass tests rather than to not be shitty, but it's still a massive accomplishment. Generating a real performance improvement in an existing compiler project would be more useful and impressive. LLMs are optimized to write the simplest, most readable code, which tends to perform poorly. The reasoning frontier is producing code that is both working and performant.
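To make the "readable but slow" point concrete, here's a toy contrast (my own illustration, nothing to do with CCC): both functions deduplicate a list while preserving order, but the obvious version is quadratic while the second does the same work in linear time.

```python
def dedupe_readable(items):
    """The version an LLM tends to reach for: clear, but O(n^2)."""
    out = []
    for x in items:
        if x not in out:  # linear scan of `out` per element -> quadratic overall
            out.append(x)
    return out


def dedupe_fast(items):
    """Same behavior, O(n): track seen elements in a set."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:  # O(1) membership test
            seen.add(x)
            out.append(x)
    return out
```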
Before diving in, here are my main takeaways:
- AI has moved beyond writing small snippets of code and is beginning to participate in engineering large systems.
- AI is crossing from local code generation into global engineering participation: CCC maintains architecture across subsystems, not just functions.
- CCC has an “LLVM-like” design (as expected): training on decades of compiler engineering produces compiler architectures shaped by that history.
- Our legal apparatus frequently lags behind technological progress, and AI is pushing legal boundaries. Is proprietary software cooked?
- Good software depends on judgment, communication, and clear abstraction; AI has amplified the importance of all three.
- AI coding automates implementation, so design and stewardship become more important.
- Manual rewrites and translation work are becoming AI-native tasks, automating a large category of engineering effort.
- AI, used right, should produce better software, provided humans actually spend more energy on architecture, design, and innovation.
- Architecture documentation has become infrastructure: AI systems amplify well-structured knowledge while punishing undocumented systems.
I find it hard to square his experience with my own. Whenever I use LLMs (admittedly only the free versions, because fuck paying these scrapers), they fail miserably to write code that even runs, let alone follows good coding practices, as soon as I ask for more than one specific snippet. Broaden the prompt even a little and the answers I get are thought-starters at best and unusable garbage at worst.
Opus 4.6 and GPT 5.3 Codex produce some amazing results if you spend a lot of time scoping, speccing and testing each stage. Just throwing it over the wall with a low-effort prompt isn't going to get you anything that's very good. But time spent on the leadup and close monitoring of the results can give you production-ready code with little tech debt in it, extremely quickly and without a lot of money (energy) spent on inference.
Now downvote away.
Just today, for a little side project, I asked GPT5 mini for a Python function that returns the access rights on a given file/folder using smbprotocol. To me that read as a pretty concise ask, but the results kept using functions and attributes that don't exist.
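For reference, this is roughly the shape of thing I was after — a minimal sketch using the high-level `smbclient` wrapper that ships with smbprotocol. The server, share, and credentials are placeholders, and note it only surfaces the coarse POSIX-style mode bits that smbclient emulates, not the real NTFS DACL (that would need the lower-level query-info machinery):

```python
# pip install smbprotocol  (provides the high-level `smbclient` module)
import stat

import smbclient


def access_rights(path: str, username: str, password: str) -> str:
    """Return an rwx-style summary for a file/folder on an SMB share.

    `path` is a UNC path like \\\\server\\share\\folder\\file.txt.
    """
    server = path.lstrip("\\").split("\\", 1)[0]  # pull the host out of the UNC path
    # register_session caches the connection for subsequent smbclient calls
    smbclient.register_session(server, username=username, password=password)
    st = smbclient.stat(path)  # SMBStatResult, shaped like os.stat_result
    return stat.filemode(st.st_mode)  # e.g. '-rw-rw-rw-' or 'drwxrwxrwx'


# Placeholder host/share/credentials:
print(access_rights(r"\\fileserver01\share\report.docx", "user", "pass"))
```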
Any time I've asked for scripts, it's been flawless and way more than I asked for, usually with switches like --host, key, auth, etc. But I'm using at least Sonnet, if not Opus. I'd punch out a script for you with it, but I have nothing in my network that uses SMB to test against.
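For the curious, this is the kind of switch-laden skeleton those scripts tend to come back with — a bare argparse sketch, with flag names that are illustrative rather than from any real tool:

```python
#!/usr/bin/env python3
"""Skeleton of the kind of CLI script described above; all flags illustrative."""
import argparse


def main() -> None:
    parser = argparse.ArgumentParser(description="Example CLI skeleton")
    parser.add_argument("--host", required=True, help="target host")
    parser.add_argument("--port", type=int, default=445, help="target port")
    parser.add_argument("--user", help="username for authentication")
    parser.add_argument("--key", help="path to a key/credential file")
    args = parser.parse_args()

    # Real scripts would connect and do the work here; this just echoes the args.
    print(f"would connect to {args.host}:{args.port} as {args.user}")


if __name__ == "__main__":
    main()
```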
If you're going to use GPT, you want 5.2-Coder at least, and honestly I'm not as impressed with OpenAI's products as other people seem to be.