
Lemmings, I was hoping you could help me sort this one out: LLMs are often painted in a light of being utterly useless, hallucinating word prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they're hallucinating?

Disclaimer: I'm a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don't see "AI" taking my job, because I think LLMs have already peaked; they're just tweaking minor details now.

Please don't ask me to ignore previous instructions and give you my best cookie recipe; all my recipes are protected by NDAs.

Please don't kill me

[-] Quetzalcutlass@lemmy.world 77 points 6 days ago* (last edited 6 days ago)

It takes jobs because executives push it hoping to save six figures per replaced employee, not because it's actually better. The downsides of AI-written code (that it turns a codebase into an unmaintainable mess whose own "authors" won't have a solid mental model of it since they didn't actually write it) won't show up immediately, only when something breaks or needs to be changed.

It's like outsourcing - it looks promising and you think you'll save a ton of money, until months or years later when the tech debt comes due and nobody in the company knows how to fix it. Even if the code was absolutely flawless, you still need to know it to maintain it.

[-] idriss@lemmy.ml 3 points 6 days ago

That's a solid point. Even when the code looks great (most of the time it doesn't), tech debt creeps in. I try to build small, predictable parts, keep refactoring, and so on, but even with all those precautions I find tech debt hidden somewhere weeks or months later.

I use LLMs extensively at work, since people now expect us to be faster, but I try to avoid letting LLMs write anything for personal projects.

[-] dangling_cat@piefed.blahaj.zone 48 points 6 days ago

Both are true.

  1. Yes, they hallucinate. For coding, especially when they don’t have the latest documentation, they just invent APIs and methods that don’t exist.
  2. They also take jobs. They pretty much eliminate entry-level programmers (making the same mistakes while being cheaper and faster).
  3. AI-generated codebases are not maintainable in the long run. They don't reliably reuse methods and only fix surface bugs, not fundamental problems, causing codebase bloat and, as we all know, more code == more bugs.
  4. Management uses Claude Code for their small projects and is convinced that it can replace all programmers for all projects, which is a bias they don't recognize.

Is it a bubble? Yes. Is it a fluke? Welllllllll, not entirely. It does increase productivity, given enough training and an understanding of its advantages and limitations.

[-] Feyd@programming.dev 29 points 6 days ago

It does increase productivity, given enough training and an understanding of its advantages and limitations.

People keep saying this based on gut feeling, but the only study I've seen showed that even experienced devs that thought they were faster were actually slower.

[-] entwine@programming.dev 36 points 6 days ago* (last edited 6 days ago)

I'm a full-time senior dev using the shit out of LLMs to get things done at breakneck speed

I'm not saying you're full of crap, but I smell a lot of crap. Who talks like this unironically? This is like hearing someone call somebody else a "rockstar" or "ninja".

If you really are breaking necks with how fast you're coding, surely you must have used this newfound ability to finally work on those side projects everyone has been meaning to get to. Those wouldn't be covered under NDAs.

Edit: just to be clear, I'm not anti-LLMs. I've used them myself in a few different forms, and although I didn't find them useful for my work, I can see how they could be helpful for certain types of work. I definitely don't see them replacing human engineers.

[-] Tollana1234567@lemmy.today 5 points 6 days ago

Sounds like: how can he ask whether they're taking jobs, then go on to say he's doing wonders using LLMs?

[-] XM34@feddit.org 3 points 5 days ago

Idk, there are a lot of people at my job talking like this. LLMs really do help speed things up. They do so at a massive cost in code and software quality, but they do speed things up. In my experience, coding right now isn't about writing legible and maintainable code. It's about deciding which parts of your codebase you want to be legible and maintainable, and therefore LLM-free.

I for one let AI write pretty much all of my unit tests. They're not pretty, but they get the job done and still tell me when I'm accidentally changing behaviour in a part of the codebase I didn't mean to. But I keep the service layer as AI-free as possible, because that's where the important code lives.
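
Roughly this kind of thing (a toy sketch; `apply_discount` is a made-up stand-in for a real service-layer function, and the real AI-written tests are longer and uglier):

```python
import pytest

def apply_discount(price: float, rate: float) -> float:
    """Hypothetical service-layer function, i.e. the part kept human-written."""
    if rate < 0:
        raise ValueError("rate must be non-negative")
    return price * (1 - rate)

# The unglamorous tests an LLM churns out: they pin the current behaviour,
# so an accidental change elsewhere shows up as a failing test.
def test_ten_percent_discount():
    assert apply_discount(100.0, 0.10) == pytest.approx(90.0)

def test_zero_rate_is_a_noop():
    assert apply_discount(100.0, 0.0) == pytest.approx(100.0)

def test_negative_rate_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, -0.5)
```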

[-] PokerChips@programming.dev 1 points 5 days ago

Is your code open source? And if not, are you just handing your code over to an AI for scraping?

[-] andioop@programming.dev 8 points 5 days ago* (last edited 5 days ago)

I think it's both.

It sits at the fast-and-cheap corner of the old "good, fast, cheap: pick two" triangle, and society is trending towards "fast and cheap" to the exclusion of "good", to the point that it is getting harder and harder to find "good" at all sometimes.

People who care about the "good" bit are upset, people who want to see stock line go up in the short term without caring about long term consequences keep riding the "always pick fast and cheap" and are impressed by the prototypes LLMs can pump out. So devs get fired because LLMs are faster and cheaper, even if they hallucinate and cause tons of tech debt. Move fast and break things.

Some devs who keep their jobs might use LLMs. Maybe they accurately assessed that what they're trying to outsource to LLMs is so low-skill that even something that doesn't hit "good" could do it right (and that when it screws up they could spot the mistake and fix it quickly), so they only have to care about "fast and cheap". Maybe they just want the convenience and are prioritizing "fast and cheap" when they really do need to consider "good". Bad devs exist too, and I'm sure we've all seen incompetent people stay employed despite the trouble they cause for others.

So as much as this looked at first, to me, like the thing where fascists simultaneously portray opponents as weak (pathetic! we deserve to triumph over them and beat their faces in for their weakness) and strong (big threat, must defeat!), I think that's not exactly what anti-AI folks are doing here. It's not doublethink, just seeing everyone pick "fast and cheap" and noticing the consequences. Which does easily map onto portraying AI as weak, pointing out all the mistakes it makes and how poorly it replaces humans, while also portraying it as strong, pointing out that people keep trying to replace humans with AI and that it's being aggressively pushed at us.

There are other things in real life that are simultaneously weak and strong: the roach. A baby taking its first steps can accidentally crush a roach; hell, if the baby fell on a pile of roaches they'd all die (weak). But it's also super hard to end an infestation of them (strong). It's worth checking for doublethink when you see the "simultaneously weak and strong" pattern, but sometimes that's just how an honest evaluation of a particular situation ends up.

[-] codeinabox@programming.dev 21 points 6 days ago

Based on my own experience using Claude for coding and the Whisper model on my phone for dictation, AI tools can be very useful for the most part. Yet there are nearly always mistakes, even if they're quite minor at times, which is why I am sceptical of AI taking my job.

Perhaps the biggest reason AI won't take my job is that it has no accountability. For example, if an AI coding tool introduces a major bug into the codebase, I doubt you'd be able to hold OpenAI or Anthropic accountable. However, if you have a human developer supervising it, that person is very much accountable. This is something that Cory Doctorow talks about in his reverse-centaur article.

"And if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."

This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.

[-] melfie@lemy.lol 2 points 5 days ago* (last edited 5 days ago)

This article / talk is quite illuminating. I’ve seen studies indicating that AI coding agents improve productivity by 15-20% in the aggregate, which tracks with my own experience. It’s a solid productivity boost when used correctly, clearly falling into the “centaur” category, in my own experience at least. However, all the hate around it, my own included, stems from the “reverse-centaur” aspirations behind it. The companies developing these tools aren’t in it to make a reasonable profit while delivering modest productivity gains. They are in it to spin a false narrative that these tools can replace 9/10 engineers in order to drive their own overly inflated valuations, knowing damn well this is not the case, but not caring because they don’t plan to be the ones holding the bag in the end (taxpayers will be the bag-holders when they get bailed out).

[-] dream_weasel@sh.itjust.works 6 points 5 days ago

It's pretty unbeatable to use LLMs for fast prototyping and query generation, but "vibe coding" is not something just anybody can (or should) do.

It seems to me that the world of LLMs gives back in proportion to the quality you put in. If you ask it about eating rocks and putting glue on pizza, then yes, it's a giant waste of money. If you can form a coherent question where you have a feeling for what the answer should look like (especially related to programming), it's easily worth the hype. Now, if you are using it blindly to build or audit your codebase, that falls into the first category of "you should not be using this tool".

Unfortunately, my view before and after the emergence of LLMs is that most people are just not that bright. Unique and valuable, sure, but when it comes to expertise, it just isn't as common as the council of armchairs might lead you to believe.

[-] Logical@lemmy.world 2 points 5 days ago

I mostly agree with you, but I still don't think it's "worth the hype" even if you use it responsibly, since the hype is that it is somehow going to replace software devs (and other jobs), which is precisely what it can't do. If you're aware enough of its limitations to be using it as a productivity tool, as opposed to treating it as some kind of independent, thinking "expert", then you're already recognizing that it does not live up to anywhere near the hype that is being pushed by the big AI companies.

[-] Flamekebab@piefed.social 13 points 6 days ago

I'm perplexed as to why there's so much advertising and pushing for AI. If it were so good, it would sell itself. Instead it's just sort of a bit shit. Not completely useless, but in need of babysitting.

If I ask it to do something, there's about a 30% chance that it makes up the method or the specifics of an API call based on lots of other similar things. No, .toxml() doesn't exist for this object. No, I know that .toXml() exists, but it works differently from other libraries.
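
To make that concrete with a hypothetical Python example (my projects aren't necessarily Python, so treat the libraries here as stand-ins): the method genuinely exists in one standard-library module but not the other, which is exactly the sort of thing it blends together.

```python
import xml.etree.ElementTree as ET
from xml.dom import minidom

elem = ET.fromstring("<root><item>1</item></root>")

# What the LLM confidently suggests, pattern-matched from a similar library:
# elem.toxml()  # AttributeError: ElementTree elements have no toxml() method

# toxml() does exist, but on minidom nodes, not ElementTree elements:
doc = minidom.parseString("<root><item>1</item></root>")
print(doc.toxml())

# The ElementTree equivalent is a module-level function instead:
print(ET.tostring(elem, encoding="unicode"))
```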

I can make it just about muddle through, but mostly I find it handy for time-intensive grunt work (convert this variable to the format used by another language, add another argparse argument for a function's new parameter, etc.).
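
For instance, the sort of mechanical change I'm happy to hand over (a hypothetical sketch; `fetch` and `--retries` are made up):

```python
import argparse

def fetch(url: str, retries: int = 3) -> None:
    """Pretend worker function that just grew a 'retries' parameter."""
    print(f"fetching {url} with up to {retries} retries")

parser = argparse.ArgumentParser(description="toy CLI")
parser.add_argument("url")
# The grunt work: mirror the new function parameter as a CLI flag.
parser.add_argument("--retries", type=int, default=3,
                    help="how many times to retry the fetch")

args = parser.parse_args()
fetch(args.url, retries=args.retries)
```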

It's just a bit naff. It cannot be relied on to deliver consistent results and if a computer can't be consistent then what bloody good is it?

[-] AnitaAmandaHuginskis@lemmy.world 12 points 6 days ago

The key is how you use LLMs and which LLMs you use for what.

If you know how to make use of them properly and know their strengths, weaknesses, and limitations, LLMs are an incredibly useful tool that sucks up productivity from other people (and their jobs) and focuses it on you, so to speak.

If you do not know how to make use of them -- then yes, they suck. For you.

It's not really that much different from any other tool. Know how to use version control? If not, that doesn't make you a bad dev per se. If yes, it probably makes you a bit more organized.

Same with IDEs, search engines, and being able to read documentation properly. None of that is strictly required, but knowing how to make use of such tools adds up.

Same with LLMs.

[-] Ledivin@lemmy.world 12 points 6 days ago

AI hallucinates constantly, that's why you still have a job - someone has to know what they're doing to sort out the wheat from the chaff.

It's also taking a ton of our entry-level jobs, because you can do the work you used to do and the work of the junior devs you used to have without breaking a sweat.

[-] litchralee@sh.itjust.works 10 points 6 days ago* (last edited 6 days ago)

To many of life's either-or questions, we often struggle when the answer is: yes. That is to say, two things can hold true at the same time: 1) LLMs can result in job redundancies, and 2) LLMs hallucinate results.

But if we just stopped the analysis there, we wouldn't have learned anything. To use this reality to terminate any additional critical thinking is, IMO, wholly inappropriate for solving modern challenges, and so we must look into the exact contours of how true these statements are.

To wit, LLM-induced job redundancies could come from skills which have been displaced by the things LLMs can do well. For example, typists lost their jobs when businesspeople were expected to operate a typewriter on their own. And when word processing software came into existence for the personal computer, a lot of typewriter companies folded or were consolidated. In the case of LLMs, consider that people do use them to proofread letters for spelling and grammar.

Technologically, we've had spell-check software for a while, but grammar was harder. In turn, an industry appeared somewhere in the late 2000s or early 2010s to develop grammar software. Imagine how the software devs at these companies (e.g. Grammarly) might be in a precarious situation if an LLM can do the same work. At least with grammar checking, even the best grammar software still struggles with some of the more esoteric English sentence constructions, so if an LLM isn't 100% perfect, that's still acceptable. I can absolutely see the fortunes of grammar software companies suffering due to LLMs, and that means those software devs are indeed threatened by what LLMs can do.

For the second statement, it is trivial to find examples of LLMs hallucinating, sometimes spectacularly or seemingly ironically (although an LLM would be hard-pressed to simulate the intention of irony, I would think). In some fields, such hallucinations are career-limiting moves for the user, such as if an LLM was used to advise on pharmaceutical dosage, or used to draft a bogus legal appeal and the judge is not amused. This is very much a FAFO situation, where somehow the AI/LLM companies bear none of the risk and get all of the upside. It's like how autonomous driving companies are somehow allowed to do public road tests of their beta-quality designs, but the liability for crashes still befalls the poor sod seated behind the wheel. Those companies just keep yapping about how those crashes are all "human error" and how "an autonomous car is still safer".

But I digress.

My point is that LLMs have quite a lot of capabilities, and people make a serious mistake when they assume competence (or incompetence) in one capacity predicts competence in another. This is not unlike how humans assess other humans, such as assuming a record-setting F1 driver would probably be a very good chauffeur for a limousine company. But whereas humans have patterns that suggest they might be good (or bad) at something, LLMs are a creature unlike anything else.

I personally am not bullish on further LLM improvements, and think the next big push will require additional academic research that is nowhere near commercialization. But even I have to recognize that some very specific tasks work decently with today's available LLMs. I just don't think that's good enough for me to consider using them, given their subscription costs, the risk of becoming dependent, and how niche those tasks are.

[-] henfredemars@infosec.pub 4 points 6 days ago

It’s rare to see such a complete and well-thought-out response anywhere on the Internet. Great job in capturing the nuance. It’s a powerful and often-misused tool.

[-] Horrabin@programming.dev 2 points 4 days ago

Good luck to the people who think that a ~~wordjoiner~~ LLM will replace real human intelligence. 🍿

[-] count_dongulus@lemmy.world 8 points 6 days ago

Oh you mean like how AI for driving changed cars so nobody drives themselves any more?

[-] rozodru@pie.andmc.ca 7 points 6 days ago

Since I deal with this first hand with clients, I will tell you it doesn't have to be good to be embraced. As far as the managers and CEOs know... they don't know. LLMs in the hands of vibe coders CAN and routinely DO produce something. Now, whether that something is good and works is another thing, and in most cases it doesn't work in the long term.

Managers and up only see the short term, and in the short term vibe coding and LLMs work. In the long term they don't: things break, they don't scale, they're full of exploits. But short term? Saving money in the short term? That's all they care about right now, until they don't.

[-] altphoto@lemmy.today 1 points 4 days ago

At some point someone invented the router to cut wood. I have one in my shop and it has yet to produce a beautiful ornate window.

Like a router, an LLM is just a tool. The problem is that LLMs report back to their masters. They also know too little about your specific problem. And they're not good at math or engineering; they just guess. So you have to craft your questions carefully and around things you know, so you can fact-check. Otherwise they're awesome.

[-] technocrit@lemmy.dbzer0.com 2 points 5 days ago* (last edited 5 days ago)

Here's how I might resolve this supposed dichotomy:

  • "AI" doesn't actually exist.
    • You might be using technologies that are called "AI" but there is no actual "intelligence" there. For example, as OP mentions, LLMs are extremely limited and not actually "intelligent".
  • Since "AI" doesn't actually exist, since there's no objective test, etc... "AI" can be anything and do anything.
  • So at the extremes we get the "AI" God and "AI" Devil
    • "AI" God - S/he saves the economy, liberates us from drudgery, creates great art, saves us from China (\s), heralds the singularity, etc.
    • "AI" Devil - S/he hallucinates, steals jobs, destroys the environment, is a tool of the MIC, murders artists, is how China will destroy us (\s), wastes of time and resources, is a scam, causes apocalypses, etc.

Since there's no objective meaning from the start, there's no coherence or reason behind the wild conclusions at the bottom. When we talk about "AI", we're talking about a wide variety of technologies with varying value in various contexts. I think there are some really shitty people/products, but also some hopefully useful technologies. So depending on the situation, I might have a different opinion.

[-] BatmanAoD@programming.dev 3 points 5 days ago

This seems like it doesn't really answer OP's question, which is specifically about the practical uses or misuses of LLMs, not about whether the "I" in "AI" is really "intelligent" or not.

[-] slowcakes@programming.dev 1 points 5 days ago

It might not have taken your job, but jobs have been taken. Someone at the office was sharing a professional-looking photo of himself in chat, made by just cropping his face and giving it to Gemini. So no need to hire a professional photographer anymore (at least not as much).

And you are naive to think that the economy doesn't affect whether you have a job or not. Who is going to pay you? Will your workplace be able to compete? Are your customers still in business, or are their customers still in business, or will they even use the same providers? There's a whole chain of effects that is going to happen when AI actually gets good enough (and it will) to do stuff well enough.

[-] NotMyOldRedditName@lemmy.world 1 points 5 days ago* (last edited 5 days ago)

I've used it to help me make some website stuff as a mobile dev. It's definitely not the best webpage and I'll probably want to redo it at some point, but for now it works.

That cost a freelancer a small job, or it would have taken me exceptionally longer than it did. But I really don't have a big interest in web dev, so I probably would have ended up hiring someone.

The amount of stuff it gets wrong is still enormous. I can't imagine what someone with zero programming skills would end up with if they only used it. It'd be so god awful.

[-] bluemoon@piefed.social 3 points 6 days ago

Isn't that you doing exactly the thing described in the "reverse centaur" talk?
