Only little bits and pieces, for projects where I have so many backups I'd laugh if the LLM fucked it up. I've noticed they're heavily trained on Python but near nothing on Pascal. I use GLM (DeepSeek, Kimi, etc.) mostly for coding; I get banned just looking at ChatGPT. I've abandoned Google like a one-way time capsule to 1997.
Quality is noticeably worse for less-used languages and frameworks.
I do use it as a better way to search for things that have too much context to fit into search engine keywords, but using it for any real engineering is always extremely underwhelming and infuriating.
I am convinced that the hyped AI bros either weren't doing very hard engineering to begin with, or they are lying out of their ass with how useful this bullshit is.
What I hate most about it, though, is the effect it has on your own brain when you get too used to it: it really makes you worse at thinking in general and at being creative. I fear the long-term societal effect it will have as it becomes more widely used to replace all the thinking people are too lazy to do.
Critical support to the slop generators in telling Windows users to break their installs
So many helper functions. All I said was: use a JSON file to create SQL insert statements and have the date in timestamp format, expecting it to use to_timestamp. It created a helper function for parsing each date part, then another to cast the result into to_timestamp.
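For comparison, a minimal sketch of the direct approach the comment expected, with no date-parsing helpers at all. The table and column names here are hypothetical, and it assumes PostgreSQL's `to_timestamp(text, format)` doing the parsing on the database side:

```python
import json

def json_to_inserts(json_text, table="events"):
    """Turn a JSON array of rows into SQL INSERT statements.

    The date string is passed straight to PostgreSQL's to_timestamp(),
    so no Python-side date parsing is needed. (For real use, prefer
    parameterized queries over string interpolation.)
    """
    stmts = []
    for row in json.loads(json_text):
        stmts.append(
            f"INSERT INTO {table} (id, created) VALUES "
            f"({row['id']}, to_timestamp('{row['created']}', "
            f"'YYYY-MM-DD HH24:MI:SS'));"
        )
    return stmts

# Example with a made-up record:
for stmt in json_to_inserts('[{"id": 1, "created": "2024-05-01 12:30:00"}]'):
    print(stmt)
```

That's the whole job: one loop, one call to the database's own parser.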
I have found LLMs quite disappointing when writing code.
LLMs are useful for learning new libraries, scaffolding starter projects, and maybe filling in a simple function body. But I rarely get purely generative output I would consider close to production-ready, even when it compiles or runs without error. To get anything that isn't garbage, you must be very precise and ask it to "implement [some formal data structure / algorithm / pattern] to do [specific task]" rather than asking it to produce code that does your thing. Even then, I find it more useful to ask for general strategies, related concepts, and some example code that would help me implement what I want.
All of this requires a pretty substantial skepticism of the output that people hyping up AI tools completely lack. Most people use these tools to avoid the difficult thinking necessary to solve a problem, so why would they put in that same level of thinking to vet the output? And if you don't have enough knowledge of a framework, language, library, etc. to use it effectively or read and write the code yourself, you don't have the knowledge required to vet and maintain code produced by LLMs, let alone put it in production. I've had so many instances of LLMs writing code that would require a computer science education to understand why it is a bad idea. Anyone with that knowledge is better off implementing the thing directly instead of figuring out how to massage their prompt or torture the output into something good.
LLMs repeatedly producing output you cannot or do not fully understand reinforces the view that your abilities are enhanced by the LLM. This, combined with the impostor syndrome that is rampant among devs, is going to result in a lot of people deferring to the model and uncritically accepting bad code from it.
Soon tons of mediocre devs will be producing mass quantities of code they're not capable or diligent enough to understand, resulting in huge, lumbering codebases full of bugs and bad design choices. In my career, the most common barrier to implementing anything or moving a project forward has been technical debt. LLMs are going to greatly increase the rate at which technical debt is produced and reduce the ability of people to tackle that technical debt, since they are no longer familiar with the codebase.
This phenomenon is why I think LLM code gen is going to be a net productivity drain.
As always, the core problem with LLMs is not that they are frequently incorrect; it is that being correct often enough lulls humans into forgoing their due diligence, typically in favor of having a proprietary product serve as a substitute for their critical thinking.
This is not unique to programmers, as I now see tons of people citing ChatGPT or Gemini as if they were authoritative sources on anything. We will see the effects of this in all aspects of society.
Here comes a highly controversial opinion.
Let me preface this with: I'm anti-AI. I wish Iran had kept its mouth shut about destroying OpenAI's big facility and just done it. Seeing tech bros get the French Revolution treatment would bring a smile to my face. And I avoid using it as best I can.
But I hit a breaking point yesterday with a not-very-popular Metroidvania I got on Humble Bundle called "Kingdom Shell". Great game with glorious atmosphere, but some very poor pacing and a few confusing puzzles. I got through most of them, but one of the puzzles had me pulling my non-existent hair out.
I tried normal searches and found one fairly comprehensive guide that was no help on this part specifically. I asked Gemini, and I'll be damned if it didn't actually come up with a good answer.
I know a sample size of n=1 does not a p-value of ≤.05 make, and I'm not changing my mind about using it more now. But in this one very specific instance it was a little help.
technology
On the road to fully automated luxury gay space communism.
Spreading Linux propaganda since 2020
- Ways to run Microsoft/Adobe and more on Linux
- The Ultimate FOSS Guide For Android
- Great libre software on Windows
- Hey you, the lib still using Chrome. Read this post!
Rules:
- 1. Obviously abide by the sitewide code of conduct. Bigotry will be met with an immediate ban
- 2. This community is about technology. Offtopic is permitted as long as it is kept in the comment sections
- 3. Although this is not /c/libre, FOSS related posting is tolerated, and even welcome in the case of effort posts
- 4. We believe technology should be liberating. As such, avoid promoting proprietary and/or bourgeois technology
- 5. Explanatory posts to correct the potential mistakes a comrade made in a post of their own are allowed, as long as they remain respectful
- 6. No crypto (Bitcoin, NFT, etc.) speculation, unless it is purely informative and not too cringe
- 7. Absolutely no tech bro shit. If you have a good opinion of Silicon Valley billionaires please manifest yourself so we can ban you.
