I Am Happier Writing Code by Hand (www.abhinavomprakash.com)
FishFace@piefed.social 0 points 11 hours ago

No, AI results can be quite good, especially if your internal documentation is poor and disorganised. Fundamentally you cannot trust them, but in software we have the luxury of being able to check solutions cheaply (usually).

Our internal search at work is dogshit, but the internal LLM can turn up things quicker. Do I wish they'd improve the internal search? Yes. Am I going to make that my problem by continuing to use a slower tool? No.

Solumbran@lemmy.world 3 points 10 hours ago

"It's quite good" "you cannot trust it"

What is your definition of good?

What a recipe for disaster...

FishFace@piefed.social 5 points 10 hours ago

Something you can't trust can still be good, provided you can verify it without significant penalty and its accuracy is sufficiently high.

In my country, you would never just trust the weather forecast if your life depended on it not raining: if you book an open-air event more than a week in advance, the plan cannot rely on the weather being fair, because the long-range forecast is not that reliable. But relying on the forecast is fine if the cost of it being wrong is that you take an umbrella with you, or change plans at the last minute and stay in. It's not OK if you don't have an umbrella, or if staying in would cost you dearly.

In software development, if you ask a question like, "how do I fix this error message from the CI system", and it comes back with some answer, you can just try it out. If it doesn't work, oh well, you wasted a few minutes of your time and some minutes on the CI nodes. If it does, hurrah!

Given that the practical alternative is often spending hours digging through internal posts and messaging other people (disrupting their time) who don't know the answer either, only to end up with a hacky workaround, the AI is actually well worth a go - at my place of work, anyway.

In fact, let's compare the AI process to the internal search one. I search for the error message and the top 5 results are all completely unrelated. This isn't much different to the AI returning a hallucinated solution - the difference is the cost of checking. To check a search result, I read the post: probably 30 seconds to click the link, load the page, and read enough of it to see it's wrong. To check the hallucinated solution, I have to run the command it gives (or whatever), which, as I said, takes a few minutes of my time actually typing commands, watching them run, and looking at results (not counting waiting for CI to complete, which I can spend doing something else). That cost ratio is, roughly, how much better the LLM's hit rate (in % of good results) needs to be than search's to break even.
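To make that break-even ratio concrete, here's a back-of-envelope sketch. The verification costs and hit rates below are illustrative assumptions, not measurements from anyone's workplace:

```python
# Expected time to reach a working answer when each independent attempt
# costs `cost` minutes to verify and succeeds with probability `hit_rate`.
# Expected number of attempts until success is 1 / hit_rate.
def expected_minutes(cost: float, hit_rate: float) -> float:
    return cost / hit_rate

CHECK_SEARCH = 0.5  # assumed: ~30s to open a result and see it's wrong
CHECK_LLM = 3.0     # assumed: a few minutes to try out a suggested fix

# With a 6x verification cost (3.0 / 0.5), the LLM breaks even once its
# hit rate is 6x the search hit rate.
print(expected_minutes(CHECK_SEARCH, 0.05))  # search at 5% hits: 10.0 min
print(expected_minutes(CHECK_LLM, 0.40))     # LLM at 40% hits:    7.5 min
```

On those made-up numbers the LLM wins despite being more expensive per check, because each check is far more likely to pan out.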

Like I said, I wish that the state of our internal search and internal documentation were better, but it ain't.
