"No Duh," say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite - developers have to go in and find where the problems are, resulting in a net slowdown of development rather than a productivity gain.

[-] Dojan@pawb.social 1 points 6 months ago

I miss the days when machine learning was fun. Poking together useless RNN models with a small dataset to make a digital Trump that talked about banging his daughter, and endless nipples flowing into America. Exploring the latent space between concepts.

[-] Feyd@programming.dev 1 points 6 months ago

It remains to be seen whether the advent of “agentic AIs,” designed to autonomously execute a series of tasks, will change the situation.

“Agentic AI is already reshaping the enterprise, and only those that move decisively — redesigning their architecture, teams, and ways of working — will unlock its full value,” the report reads.

"Devs are slower with and don't trust LLM based tools. Surely, letting these tools off the leash will somehow manifest their value instead of exacerbating their problems."

Absolute madness.

[-] gigachad@piefed.social 1 points 6 months ago

I always have to laugh when I read "Agentic AI"

[-] arc99@lemmy.world 1 points 6 months ago* (last edited 6 months ago)

I have never seen AI-generated code that is correct. Not once. I've certainly seen it be broadly correct and used it for the gist of something. But it normally fucks something up - imports, dependencies, logic, API calls, or a combination of all of them.

I sure as hell wouldn't trust it without reviewing it thoroughly. And anyone stupid enough to use it blindly through "vibe" programming deserves everything they get - most likely a massive bill and code that is horribly broken in some serious and subtle way.

[-] theterrasque@infosec.pub 1 points 6 months ago* (last edited 6 months ago)

I've used Claude code to fix some bugs and add some new features to some of my old, small programs and websites. Not things I can't do myself, but things I can't be arsed to sit down and actually do.

It's actually gone really well, with clean and solid code: easily readable, correct, with error handling and even comments explaining things. It even took a GUI stream-processing program I had and wrote a server/webapp with the same functionality, and was able to extend it with a few new features I'd been thinking of adding.

These are not complex things, but a few of them were 20+ files big, and it managed to not only navigate the code but understand it well enough to add features whose changes touched multiple files (model, logic, and view layers, for example), or to refactor a too-big class and update all references to use the new classes.

So it's absolutely useful and capable of writing good code.

[-] chicagohuman@lemmy.zip 1 points 6 months ago

This is the truth. It has tremendous value but it isn't a solution -- it's a tool. And if you don't know how to code or what good code looks like, then it is a tool you can't use!

[-] MrSulu@lemmy.ml 1 points 6 months ago

Perhaps it should read "All AI is over-hyped, overdone, and we should be over it"

[-] Tollana1234567@lemmy.today 1 points 6 months ago

So is the profit it was foretold to generate - it actually costs more money than it generates.

[-] JackbyDev@programming.dev 0 points 6 months ago

The people talking about AI coding the most at my job are architects and it drives me insane.

[-] ceiphas@feddit.org 1 points 6 months ago

I am a software architect, and mainly use it to refactor my own old code... But I am maybe not a typical architect...

[-] M0oP0o@mander.xyz 0 points 6 months ago

Wait, it was hyped? Not just ridiculed?

[-] ready_for_qa@programming.dev 0 points 6 months ago

These types of articles always fail to mention how well trained the developers were on techniques and tools. In my experience that makes a big difference.

My employer mandates we use AI and provides us with any model, IDE, or service we ask for. But where it falls short is providing training or direction on how to use it. Most developers seem to go straight to results prompting and get a terrible experience.

I, on the other hand, provide a lot of context through documents and various MCP tooling. I talk about the existing patterns in the codebase and provide pointers to other repositories as examples; then we come up with an implementation plan and execute on it with a task log to stay on track. I spend very little time fixing bad code because I spent the setup time nailing down context.

So if a developer is just prompting "Do XYZ", it's no wonder they're spending more time untangling a random mess.

Another aspect is that everyone seems to always be working under the gun and they just don't have the time to figure out all the best practices and techniques on their own.

I think this should be considered when we hear things like this.

[-] korazail@lemmy.myserv.one 1 points 6 months ago

I have 3 questions, and I'm coming from a heavily AI-skeptic position, but am open:

  1. Do you believe that providing all that context, describing the existing patterns, creating an implementation plan, etc., allows the AI to write better code, and faster, than if you just did it yourself? To me, this just seems like you have to re-write your technical documentation in prose each time you want to do something. You are saying this is better than 'Do XYZ', but how much twiddling of your existing codebase do you need to do before an AI can understand the business context of it? I don't currently do development on an existing codebase, but every time I try to get these tools to do something fairly simple from scratch, they just flail. Maybe I'm just not spending the hours to build my AI-parsable functional spec. Every time I've tried this, asking for something as simple as (paraphrased for brevity) "write an Asteroids clone using JavaScript and the HTML5 canvas" results in a full failure, even with multiple retries chasing errors. I wrote something like that a few years ago to learn JavaScript and it took me a day-ish to get something that mostly worked.

  2. Speaking of that context. Are you running your models locally, or do you have some cloud service? If you give your entire codebase to a 3rd party as context, how much of your company's secret sauce have you disclosed? I'd imagine most sane companies are doing something to make their models local, but we see regular news articles about how ChatGPT is training on user input and leaking sensitive data if you ask it nicely and I can't imagine all the pro-AI CEOs are aware of the risks here.

  3. How much pen-testing time are you spending on this code - error handling, edge cases, race conditions, data sanitization? An experienced dev understands these things innately, having fixed these kinds of issues in the past, and knows the anti-patterns and how to avoid them. In all seriousness, I think this is going to be the thing that actually kills AI vibe coding, but it won't happen fast enough. There will be tons of new exploits in what used to be solidly safe places. Your new web front-end? It has a really simple SQL injection vulnerability. Your phone app? You can tell it your username is admin'joe@google.com and it'll let you order stuff for free since you're an admin.
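The SQL injection worry in point 3 is easy to make concrete. A minimal sketch using Python's stdlib sqlite3 (the table, user, and attacker string are hypothetical examples, not from the thread), contrasting the string-interpolated query that naive generated code tends to emit with a parameterized one:

```python
import sqlite3

# Toy schema: one non-admin user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('joe@google.com', 0)")

attacker_input = "x' OR '1'='1"

# Vulnerable: string interpolation. The quote in the input closes the
# literal early, and the injected OR clause matches every row.
rows = conn.execute(
    f"SELECT * FROM users WHERE email = '{attacker_input}'"
).fetchall()

# Safe: a parameterized query treats the whole input as one value,
# so the malicious string matches nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE email = ?", (attacker_input,)
).fetchall()
```

The fix is a one-character placeholder, which is exactly why it's so easy for a reviewer to spot and so easy for unreviewed generated code to miss.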

I see a place for AI-generated code: quick functions that do something blending simple and complex. "Hey Claude, write a function to take a string and split it at the end of every sentence containing an uppercase A." I had to write weird functions like that constantly as a sysadmin, and transforming data seems like a thing AI could help me accelerate. I just don't see that working on a larger scale, though, or trusting an AI enough to let it integrate a new function like that into an existing codebase.
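For what it's worth, that sentence-splitting prompt is exactly the scale where these tools do tend to produce something checkable. A hand-written sketch, assuming sentences end at '.', '!' or '?' followed by whitespace (a detail the prompt leaves open):

```python
import re

def split_after_a_sentences(text):
    """Split text into chunks, cutting after every sentence that
    contains an uppercase 'A'.

    Sentences are naively delimited by '.', '!' or '?' followed by
    whitespace - good enough for the one-off transforms described above.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current = [], []
    for sentence in sentences:
        current.append(sentence)
        if "A" in sentence:          # sentence contains an uppercase A: cut here
            chunks.append(" ".join(current))
            current = []
    if current:                      # trailing sentences with no 'A'
        chunks.append(" ".join(current))
    return chunks
```

The appeal for sysadmin-style work is that a function like this is trivial to eyeball against a couple of sample inputs, which is a very different proposition from trusting generated changes across a whole codebase.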

[-] Lembot_0004@discuss.online 0 points 6 months ago

Industry? Yes, industry hires people who know how to do the things industry needs, and who do nothing besides those things.

Programmers outside "industry" more often find themselves writing code with libraries they're seeing for the first time, in languages they never thought they'd use. AI helps a lot here.

[-] SparroHawc@lemmy.zip 0 points 6 months ago

Except LLMs are absolutely terrible at working with a new, poorly documented library. Commonly-used, well-defined libraries? Sure! Working in an obscure language or an obscure framework? Good luck.

LLMs can surface information - it's perhaps the one place they're actually useful. They cannot reason the way a human programmer can, yet all the big tech companies are trying to sell them on that basis.

[-] Lembot_0004@discuss.online 1 points 6 months ago* (last edited 6 months ago)

Well, don't use it with new, poorly documented libraries. That is a common sense rule: use the tool where it is useful.

Somehow many LLM critics claim that LLMs are shit because they can't autonomously write code. Yes, they can't. But they can do many other useful things.

[-] COASTER1921@lemmy.ml 0 points 6 months ago

AI companies and investors are absolutely overhyping its capabilities, but if you haven't tried it before I'd strongly recommend doing so. For simple bash scripts and Python it almost always gets something workable first try, genuinely saving time.

LLMs are pretty terrible for nearly every other task I've tried. I suspect it's because the same amount of quality training data just doesn't exist for other fields.

this post was submitted on 30 Sep 2025
123 points (96.9% liked)

Technology
