We mourn our craft (nolanlawson.com)
[-] Thorry@feddit.org 24 points 1 week ago* (last edited 1 week ago)

Writing code with an LLM is often actually less productive than writing without one.

Sure, for some small tasks it might poop out an answer real quick, and it may look like something good. But it only looks that way; checking whether it is actually good can be pretty hard. It is much harder to read and understand code than it is to write it. And in cases where a single character is the difference between having a security issue and not having one, those mistakes are very hard to spot. People who say they code faster with an LLM just blindly accept the given answer, maybe with a quick glance and some simple testing, not an in-depth code review, which is hard and costs time.
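To make that concrete, here's a hypothetical one-character example (my own illustration, not something from the article):

```python
import os

# Hypothetical illustration: saving an API key to disk.
# 0o600 -> owner read/write only; 0o604 -> world-readable.
# One character separates a private credential from a leaked one,
# and both versions run without any error or warning.
def save_api_key(path: str, key: str) -> None:
    with open(path, "w") as f:
        f.write(key)
    os.chmod(path, 0o600)  # a skim-review would easily miss 0o604 here
```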

Then there are all the cases where the LLM messes up and doesn't give a good answer, even after repeated back and forth. Once the thing is stuck on an incorrect solution, it's very hard to get it out of there, and once the context window runs out it becomes a nightmare. It will say something like "Summarizing conversation", which means it deletes lines from the conversation that it deems superfluous, even if those are critical requirement descriptions.

There's also the issue that an LLM simply can't do a large, complex task. They've tried to fix this with agents, planning modes and the like, breaking everything down into smaller and smaller parts so each one can be handled, but with nothing keeping an overview of the mismatched set of nonsense it produces. That overview is something a real coder is expected to handle just fine.

The models are also always trained a while ago, which can be really annoying when working with something like Angular. Angular gets frequent updates, and those usually bring breaking changes, updated best practices, and sometimes entire paradigm shifts. The AI simply doesn't know what to do with the new version, since it was trained before it existed, and it will spit out Stack Overflow answers from 2018, especially the ones with comments saying to never, ever do that.

There's also so much more to being a good software developer than just writing the code. The LLM can't do any of those other things; it can just write the code. And by not writing the code ourselves, we are losing an important part of the process. That's a muscle that needs flexing, or the skill rusts and goes away.

And now they've poisoned the well, flooding the internet with AI slop and destroying it in the process. Website traffic has gone up, but actual human visits have gone down. Good luck training new models on that garbage heap of data. That might be fine for now, but as new versions of things get released, the LLMs will become more and more out of date.

[-] astronaut_sloth@mander.xyz 4 points 1 week ago

> People who say they code faster with an LLM just blindly accept the given answer, maybe with a quick glance and some simple testing, not an in-depth code review, which is hard and costs time.

It helps me code faster, but I really only outsource boilerplate to an LLM. I will say it also helps with learning the syntax of libraries I'm unfamiliar with, in that I don't have to go through several pages of documentation to get the answers I need in the moment. The speed-up is modest and nowhere near the claims of the vibe coders.

[-] Glitchvid@lemmy.world 5 points 1 week ago

Because this comes up so often, I have to ask, specifically what kind of boilerplate? Examples would be great.

[-] Meron35@lemmy.world 1 points 1 week ago

IIRC there were some polls on how helpful LLMs are by language/profession, and data-science languages/workflows consistently rated LLMs very highly. Which makes sense, because the main steps of 1) data cleaning, 2) estimation, and 3) presenting results all involve lots of boilerplate.

Data cleaning really just revolves around a few core functions such as filter, select, and join; joins in particular can get very complicated to keep track of on big datasets.
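For example, a rough pandas sketch of those three operations (the column names are invented for illustration):

```python
import pandas as pd

# Toy tables with made-up columns.
patients = pd.DataFrame({"id": [1, 2, 3], "age": [34, 61, 47]})
visits = pd.DataFrame({"id": [1, 1, 3], "sys_bp": [118, 135, 142]})

adults = patients[patients["age"] >= 40]          # filter rows
cols = adults[["id", "age"]]                      # select columns
merged = cols.merge(visits, on="id", how="left")  # join: the step that gets
print(merged)                                     # hard to track at scale
```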

For estimation, the more complicated models all come with lots of hyperparameters, all of which need to be set up (instantiated, if you use an OOP implementation like Python's) and looped over against some validation set. Even with dedicated high-level libraries like scikit-learn, there is still a lot of boilerplate.
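Something like this scikit-learn sketch (toy data; the grid values are placeholders, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Toy dataset for illustration.
X, y = make_classification(n_samples=200, random_state=0)

# Instantiating the model and declaring every hyperparameter to search
# over is the boilerplate part, even with a high-level library.
param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [3, None],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)  # cross-validated loop over the candidate settings
print(search.best_params_)
```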

Presentation usually consists of visualisation and cleaning up results for tables. Professional visualisations require titles, axis labels, reformatted tick labels, etc., which is 4-5 lines of boilerplate minimum. Tables are usually catted out to HTML or LaTeX, both of which are notorious for boilerplate. And that's without even getting into fancier frontends/dashboards, which are their own can of worms.
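The matplotlib version of those 4-5 lines looks something like this (made-up data):

```python
import matplotlib.pyplot as plt

ages = [34, 61, 47, 52]        # made-up values
sys_bp = [118, 135, 142, 128]

fig, ax = plt.subplots()
ax.scatter(ages, sys_bp)
# The labeling boilerplate described above:
ax.set_title("Blood Pressure by Age")
ax.set_xlabel("Age (years)")
ax.set_ylabel("Systolic Blood Pressure (mmHg)")
ax.tick_params(axis="x", rotation=45)
fig.tight_layout()
fig.savefig("bp_by_age.png")
```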

The fact that these steps tend to be quite bespoke for every dataset also means they couldn't easily be automated by existing autocomplete, e.g. formatting SYS_BP to "Systolic Blood Pressure (mmHg)" for the graphs/tables.
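E.g. something like this (DIA_BP is an invented second column for the sake of the example):

```python
import pandas as pd

results = pd.DataFrame({"SYS_BP": [118, 135], "DIA_BP": [76, 88]})

# A bespoke, per-dataset mapping: autocomplete can't guess these,
# because they come from knowing what each raw column measures.
labels = {
    "SYS_BP": "Systolic Blood Pressure (mmHg)",
    "DIA_BP": "Diastolic Blood Pressure (mmHg)",
}
print(results.rename(columns=labels).to_latex(index=False))  # catted out to LaTeX
```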
