this post was submitted on 04 Apr 2026
112 points (99.1% liked)
technology

I have found LLMs quite disappointing when writing code.
LLMs are useful for learning new libraries, scaffolding starter projects, and maybe filling in a simple function body. But I rarely get purely generative output I would consider close to production-ready, even when it compiles or runs without error. To get anything that isn't garbage, you must be very precise and ask it to "implement [some formal data structure / algorithm / pattern] to do [specific task]" rather than asking it to produce code that does your thing. Even then, I find it more useful to ask for general strategies, related concepts, and some example code I can draw on to implement what I want.
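To illustrate the kind of precise request I mean: asking for a named structure with a specific job, like "implement an LRU cache to cap how many results we keep", gives you something small and reviewable. A minimal sketch of what that answer might look like (the class and task here are my hypothetical example, not any particular model's output):

```python
from collections import OrderedDict

class LRUCache:
    """A named, formal structure: easy to request precisely, easy to vet."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # insertion order doubles as recency order

    def get(self, key, default=None):
        if key not in self._store:
            return default
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

Because the request names a well-defined pattern, you can check the output against the pattern's known behavior instead of guessing whether the model understood your problem.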
All of this requires a pretty substantial skepticism of the output that people hyping up AI tools completely lack. Most people use these tools to avoid the difficult thinking necessary to solve a problem, so why would they put in that same level of thinking to vet the output? And if you don't have enough knowledge of a framework, language, or library to use it effectively or read and write the code yourself, you don't have the knowledge required to vet and maintain code produced by LLMs, let alone put it in production. I've had so many instances of LLMs writing code that would require a computer science education to understand why it is a bad idea. Anyone with that knowledge is better off implementing the thing directly instead of figuring out how to massage their prompt or torture the output into something good.
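A hypothetical example of what I mean by code that runs, reads cleanly, and still hides a problem you need some background to spot (both functions are mine, for illustration):

```python
def dedupe(items):
    """Remove duplicates, preserving order. Looks fine and passes tests --
    but `x not in seen` scans the whole list each time, so it is O(n^2)."""
    seen = []
    for x in items:
        if x not in seen:  # linear scan on a list, per element
            seen.append(x)
    return seen

def dedupe_fast(items):
    """Same behavior; a set makes each membership test O(1) on average,
    so the whole pass is O(n)."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

Both versions produce identical output on small inputs, so nothing flags the first one until it hits real data sizes, and recognizing why requires exactly the complexity-analysis background the tool was supposed to let you skip.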
An LLM repeatedly producing output you cannot, or do not, fully understand reinforces the impression that the LLM is enhancing your abilities. Combined with the impostor syndrome that is rampant among devs, this is going to result in a lot of people deferring to the model and uncritically accepting bad code from LLMs.
Soon tons of mediocre devs will be producing mass quantities of code they lack the ability or diligence to understand, resulting in huge, lumbering codebases full of bugs and bad design choices. In my career, the most common barrier to implementing anything or moving a project forward has been technical debt. LLMs are going to greatly increase the rate at which technical debt is produced and reduce people's ability to tackle it, since they are no longer familiar with their own codebases.
This phenomenon is why I think LLM code gen is going to be a net productivity drain.
As always, the core problem with LLMs is not that they are frequently incorrect; it is that being correct often enough lulls humans into forgoing their due diligence, typically in favor of letting a proprietary product substitute for their critical thinking.
This is not unique to programmers, as I now see tons of people citing ChatGPT or Gemini as if they were authoritative sources on anything. We will see the effects of this in all aspects of society.