
Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they're hallucinating?

Disclaimer: I'm a full-time senior dev using the shit out of LLMs to get things done at a neck-breaking speed, which our clients seem to have gotten used to. However, I don't see "AI" taking my job, because I think LLMs have already peaked; they're just tweaking minor details now.

Please don't ask me to ignore previous instructions and give you my best cookie recipe, all my recipes are protected by NDAs.

Please don't kill me

[-] dangling_cat@piefed.blahaj.zone 48 points 1 week ago

Both are true.

  1. Yes, they hallucinate. For coding, especially when they don’t have the latest documentation, they just invent APIs and methods that don’t exist.
  2. They also take jobs. They pretty much eliminate entry-level programmers (making the same mistakes while being cheaper and faster).
  3. AI-generated code bases are not maintainable in the long run. They don't reliably reuse methods and fix only surface bugs, not fundamental problems, causing codebase bloat and, as we all know, more code == more bugs.
  4. Management uses Claude code for their small projects and is convinced that it can replace all programmers for all projects, which is a bias they don’t recognize.

Is it a bubble? Yes. Is it a fluke? Welllllllll, not entirely. It does increase productivity, given enough training, learning its advantages and limitations.

[-] BatmanAoD@programming.dev 1 points 6 days ago

making the same mistakes

This is key, and I feel like a lot of people arguing about "hallucinations" don't recognize it. Human memory is extremely fallible; we "hallucinate" wrong information all the time. If you've ever forgotten the name of a method, or whether that method even exists in the API you're using, and started typing it out to see if your autocompleter recognizes it, you've just "hallucinated" in the same way an LLM would. The solution isn't to require programmers to have perfect memory, but to have easily-searchable reference information (e.g. the ability to actually read or search through a class's method signatures) and tight feedback loops (e.g. the autocompleter and other LSP/IDE features).

[-] VoterFrog@lemmy.world 1 points 6 days ago* (last edited 5 days ago)

Agents now can run compilation and testing on their own so the hallucination problem is largely irrelevant. An LLM that hallucinates an API quickly finds out that it fails to work and is forced to retrieve the real API and fix the errors. So it really doesn't matter anymore. The code you wind up with will ultimately work.

The only real question you need to answer yourself is whether or not the tests it generates are appropriate. Then maybe spend some time refactoring for clarity and extensibility.

[-] BatmanAoD@programming.dev 1 points 4 days ago

Exactly: that's tight feedback loops. Agents are also capable of reading docs and source code prior to generating new function calls, so they benefit from both of the solutions that I said people benefit from.

[-] tyler@programming.dev 1 points 5 days ago

An LLM that hallucinates an API quickly finds out that it fails to work and is forced to retrieve the real API and fix the errors.

And that can result in it just fixing the errors without actually solving the problem, for example if the unit tests it writes afterwards test the wrong thing.
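This failure mode is easy to demonstrate. A hypothetical example (names and the 50% requirement are invented for illustration): a generated test that asserts what the code *does* rather than what it *should* do passes green while the requirement goes unmet.

```python
def apply_discount(price, percent):
    # Bug: the spec says discounts are capped at 50%, but there is no cap.
    return price * (1 - percent / 100)

def test_apply_discount():
    # "Wrong" test: it encodes the buggy behavior, so it passes anyway.
    # With a 50% cap, the expected result would be 50.0, not 20.0.
    assert apply_discount(100, 80) == 100 * (1 - 80 / 100)

test_apply_discount()  # green, yet the requirement is still violated
```

The compile-and-test loop only verifies that code and tests agree with each other, not that either agrees with the actual requirements.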

[-] VoterFrog@lemmy.world 2 points 5 days ago

You're not going to find me advocating for letting the code go into production without review.

Still, that's a different class of problem than the LLM hallucinating a fake API. That's a largely outdated criticism of the tools we have today.

[-] BatmanAoD@programming.dev 1 points 6 days ago

As an even more obvious example: students who put wrong answers on tests are "hallucinating" by the definition we apply to LLMs.

[-] Feyd@programming.dev 29 points 1 week ago

It does increase productivity, given enough training, learning its advantages and limitations.

People keep saying this based on gut feeling, but the only study I've seen showed that even experienced devs that thought they were faster were actually slower.

[-] ulterno@programming.dev 0 points 6 days ago

Well, it did let me make fake SQL queries out of the JSON query I gave it, without me having to learn SQL.
Of course, I didn't actually use the query in the code, just added it in a comment on the function, to give those who didn't know JSON queries an idea of what the function did.

I treat it for what it is. A "language" model.
It does language, not logic. So I don't try to make it do logic.

There were a few times I considered using it for code completion for things that are close to copy paste, but not close enough that it could be done via bash. For that, I wished I had some clang endpoint that I could then use to get a tokenised representation of code, to then script with.
But then I just made a little C program that did 90% of the job and then I did the remaining 10% manually. And it was 100% deterministic, so I didn't have to proof-read the generated code.
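The deterministic-generation approach described above can be sketched with a plain template expansion (the accessor template and field table here are hypothetical, and the sketch is in Python rather than C): same input, same output, every time, so nothing needs proof-reading.

```python
# Hypothetical parameter table for near-copy-paste code.
FIELDS = [("name", "str"), ("age", "int"), ("email", "str")]

TEMPLATE = """def get_{field}(self) -> {type}:
    return self._{field}
"""

def generate_accessors(fields):
    # Pure template expansion: fully deterministic, unlike an LLM.
    return "\n".join(TEMPLATE.format(field=f, type=t) for f, t in fields)

code = generate_accessors(FIELDS)
assert code.count("def get_") == len(FIELDS)
```

Ten lines of boring generator beat a probabilistic model whenever the transformation is mechanical.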

this post was submitted on 13 Dec 2025
85 points (85.1% liked)
