How Vibe Coding Is Killing Open Source
(hackaday.com)
I wouldn't be surprised if that's only a temporary problem, if it becomes one at all. People are quickly discovering ways to use LLMs more effectively, and open-source models are starting to become competitive with commercial ones. If we keep finding ways to get more out of smaller open-source models, then maybe we'll be able to run them on consumer- or prosumer-grade hardware.
GPUs and TPUs have also been improving their energy efficiency. There seems to be a big commercial focus on that too, as energy availability is quickly becoming a bottleneck.
So far, there's a serious cognitive step that LLMs just can't make to be truly productive. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their token windows. Debugging anything vague doesn't work. Fact-checking isn't something they do well.
They don't need the entire project to fit in their token windows. There are ways to make them work effectively in large projects; it takes some learning and effort, but I see it done regularly in multiple large, complex monorepos.
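The basic idea, as a minimal sketch: instead of stuffing the whole repo into context, rank files by relevance to the task and pack only the best matches into a fixed budget. Everything here (the names, the keyword scoring, the 4-characters-per-token heuristic) is my own illustrative assumption, not any particular tool's implementation:

```python
# Hypothetical sketch: assemble a small, task-relevant context from a big repo
# instead of trying to fit the whole project into the model's token window.
import os

TOKEN_BUDGET = 8000   # rough budget for the context we send to the model
CHARS_PER_TOKEN = 4   # crude heuristic; real tokenizers vary

def score(path: str, text: str, keywords: list[str]) -> int:
    """Rank a file by how often the task keywords appear in its path or body."""
    hay = (path + "\n" + text).lower()
    return sum(hay.count(k.lower()) for k in keywords)

def build_context(root: str, keywords: list[str]) -> str:
    """Greedily pack the highest-scoring source files into the token budget."""
    candidates = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith((".py", ".md")):
                continue
            path = os.path.join(dirpath, name)
            try:
                text = open(path, encoding="utf-8").read()
            except (UnicodeDecodeError, OSError):
                continue
            s = score(path, text, keywords)
            if s > 0:
                candidates.append((s, path, text))
    candidates.sort(reverse=True)  # best matches first

    parts, budget = [], TOKEN_BUDGET * CHARS_PER_TOKEN
    for _, path, text in candidates:
        chunk = f"### {path}\n{text}\n"
        if len(chunk) > budget:
            continue
        parts.append(chunk)
        budget -= len(chunk)
    return "".join(parts)

if __name__ == "__main__":
    print(build_context(".", ["token", "context", "window"]))
```

Real tools presumably layer smarter retrieval on top (embeddings, symbol graphs, agentic grep), but the rank-and-budget shape is similar.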
I still feel somewhat new-ish to using LLMs for code (I was kinda forced to start learning), but when I first jumped into a big codebase with AI configs/docs from people who have been using LLMs for a while, I was kinda shocked. The LLM worked far better than I had ever experienced.
It actually takes a bit of skill to set up a decent workflow/configuration for these things. If you just jump into a big repo that doesn't have configs/docs/optimizations for LLMs, or you haven't figured out a decent workflow, then they'll be underwhelming and significantly less productive.
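To be concrete about what I mean by "configs/docs": these are usually plain instruction files checked into the repo, e.g. CLAUDE.md for Claude Code, AGENTS.md for Codex-style agents, or Cursor rules files. A made-up example (the paths and commands are hypothetical, not from any real project):

```
# AGENTS.md (hypothetical example)

- Build with `make build`; run `make test` before proposing any change.
- Services live under services/<name>/; shared code lives under libs/.
- Never edit generated files under gen/; change the sources instead.
- Keep diffs small and focused: one service per change.
```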
(I know I'll get downvoted just for describing my experience and observations here, but I don't care. I miss the pre-LLM days very much, but they're gone, whether we like it or not.)
This sounds a lot like every framework: 20 years ago you could have written that about Rails.
Which IMO makes sense, because if the code isn't solving anything interesting, then it can be generated relatively easily and it's easy to get demos up and running, but neither helps you solve interesting problems.
Which isn't to say it won't have a major impact on software for decades, especially on low-effort apps.
Can you cite some sources on the increased efficiency? Also, can you link to these lower-priced, efficient (implied consumer-grade) GPUs and TPUs?
Oh, sorry, I didn't mean to imply that consumer-grade hardware has gotten more efficient. I wouldn't really know about that, but I assume most of the focus is on data centers.
Those were two separate thoughts: smaller open-source models becoming good enough to run locally, and data-center hardware becoming more energy-efficient.
Can you provide evidence the "more efficient" models are actually more efficient for vibe coding? Results would be the best measure.
It also seems like costs for these models are increasing, and companies like Cursor had to stoop to offering services below cost (before pulling the rug out from under people).