[-] moto@programming.dev 1 points 6 hours ago* (last edited 6 hours ago)

I feel ya. I've been told the same thing at my job as well.

I'd say "find a new gig" but honestly every place has been bitten by this hype train it seems like.

I've been doing a hybrid approach where I use the chatbot for rudimentary things like label renames and then just do a lot of my work "the normal way". That way I log some token usage to say I use the tools, and then bet that my output isn't going to be drastically different from my coworkers'.

Then when the hype train dies we can all hopefully go back to doing what we do best. It's just a shitty period that I hope we can ride out.

[-] moto@programming.dev 6 points 7 hours ago

I generally agree with what the post is saying, but this part:

> I think that we'll still be coding, but with some other layer, as LLMs are good with structured input, like programming languages. So we might need other programming languages than we have atm. Might we need different tools to evaluate LLMs' output to make it deterministic? Might we need a different approach for engineering to make it scalable? Might we need more?

I just don't see this happening, to be honest. It's the same thing people keep claiming about "prompts replacing code".

Let's say you do make it deterministic. Then why do you need the LLM at all? You can just build a plain old compiler for it. Why add Anthropic or OpenAI as an expensive middleman to your operations? There are already plenty of admin plugins that will set up entire routes and pages based off a db model. The reason people don't work purely off of those is that the world isn't modeled off of simple CRUD. There are so many edge cases and requirements that aren't easy to capture in a sweeping generalization that you need some way of fine-tuning the output.
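To make the "just build a compiler" point concrete, here's a hedged sketch (the `scaffold_routes` function and its route naming scheme are hypothetical, not any real admin plugin's API): a deterministic CRUD scaffold generator that maps a model description to route stubs. Same input, same output, every time, with no LLM in the loop.

```python
# Hypothetical sketch of deterministic codegen: given a model name and
# its fields, emit the same CRUD route stubs every run. This is the
# "plain old compiler" alternative to prompting an LLM for scaffolding.

def scaffold_routes(model: str, fields: list[str]) -> str:
    """Generate Flask-style CRUD route stubs for a model."""
    lines = []
    for verb, path, name in [
        ("GET", f"/{model}s", f"list_{model}s"),
        ("POST", f"/{model}s", f"create_{model}"),
        ("GET", f"/{model}s/<id>", f"get_{model}"),
        ("PUT", f"/{model}s/<id>", f"update_{model}"),
        ("DELETE", f"/{model}s/<id>", f"delete_{model}"),
    ]:
        lines.append(f"# {verb} {path} (fields: {', '.join(fields)})")
        lines.append(f"def {name}(): ...")
    return "\n".join(lines)

print(scaffold_routes("user", ["name", "email"]))
```

The mapping from model to code is fully specified, so the output is reproducible and diffable in a PR; the trade-off, as noted above, is that real requirements rarely fit the generalization.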

So if you scrap that you're back to "prompts as code". Which also sucks.

If you have a PR that's breaking production and the only change is to a prompt:

> Make the popup background ~~red~~ blue

How the hell do you triage what went wrong? Do you revert and roll the dice that the LLM is gonna get it right this time? No one in their right mind would ever think this is okay in a production setting.

I don't want to say we'll never have a higher-level abstraction, but I don't think it'll be due to LLMs.

[-] moto@programming.dev 13 points 2 days ago

Yeah, that's a real good point. I focused a lot on the short-term issues of agentic slop, but you're right, the long-term impact of this is going to be staggering.

> That mental model is ultimately the more important part for the long-term health of the project. Coding is more an activity of communication between people; having an artifact that tells the computer what to do is almost an incidental side-effect of successful communication.

100%. Something I wanted to touch on in my post but cut because I couldn't weave it in well was more on the relationship with "The Problem" and "The Code". Pre-AI coding acted as a forcing function for the problem context. It's really hard to effectively build software without understanding what you're ultimately driving towards. With agentic coding we're stripping that forcing function out.

Institutional knowledge is already something that's been hard for C-suites to quantify and value, and now you're ripping out the crucial mechanism for building it.

I see a lot of memes with people being like "in 5 years we're just gonna press yes and not understand what the agent is doing" and I keep thinking "why do people think this is funny?"

[-] moto@programming.dev 3 points 2 weeks ago

> “See, I shouted at the computer and it did what I wanted in seconds instead of months! Why can’t you do that, nerds?”

It also doesn't say "no" like those nerds keep doing

> AI writing unit tests? God help us all.

Haha, like I said in the footnote: if you don't like it, good! "This is not a good use case, let's scrap it and move on" is a perfect thing to say here.

It's the only thing I could think to try it with where I could easily audit the results. It mostly works. But there are a few things it does that cause me to scrap results:

  • It loves to mock dependencies, even idempotent ones that don't touch 3rd-party services
  • The test names don't speak to requirements; they're more like "it works" and often include the word "should"
  • It sometimes likes to mock the subject under test, which is a huge no-no
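For anyone who hasn't seen that last anti-pattern, here's a hedged sketch in Python (`slugify` and both test classes are hypothetical examples, not from any actual generated output): when you mock the subject under test, the assertion exercises the mock's canned return value, so the test passes no matter what the real code does.

```python
import unittest
from unittest.mock import patch

def slugify(title: str) -> str:
    """Subject under test: a pure function with no dependencies to mock."""
    return title.strip().lower().replace(" ", "-")

class BadGeneratedTest(unittest.TestCase):
    # Anti-pattern: patching slugify itself. The assertion checks the
    # mock's return_value, not slugify, so a broken slugify still passes.
    @patch(__name__ + ".slugify", return_value="hello-world")
    def test_it_works(self, mock_slugify):  # vague "it works" name
        self.assertEqual(slugify("Hello World"), "hello-world")

class SlugifyRequirements(unittest.TestCase):
    # A pure function needs no mocks, and the name states the requirement.
    def test_spaces_become_hyphens_and_case_is_lowered(self):
        self.assertEqual(slugify(" Hello World "), "hello-world")
```

The first test would keep passing even if `slugify` were deleted outright, which is exactly why it's worth scrapping on review.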

Often, though, I can keep some of it and just scrap the bad parts. And if it causes me problems, I'm happy to quit it. It's not revolutionary. I'm just whelmed.

[-] moto@programming.dev 13 points 2 weeks ago

Thanks! It's both reassuring and sad that I'm not the only one going through this at work. Shit's crazy right now.


Long time lurker, first time poster. Don't know what it is with my job lately, but it spurred this rant.
