
...and I still don't get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before but gave up because it didn't work well. I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved doing some 3d math. The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs, or regress with old bugs.

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn't until I had a full night's sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part of this is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would "fix" the bug, and provide a confident explanation of what was wrong... Except it was clearly bullshit because it didn't work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

[-] Jayjader@jlai.lu 1 points 18 hours ago

I haven't tried any Anthropic models personally.

So far, between the free online chats from OpenAI and DeepSeek, and the smaller models I've run on my own machine, the most useful thing I've gotten is to treat it as an overeager student that lacks the first-hand experience needed to see the big picture: I ask it questions I'm pretty sure I already know the answer to and see if 1) it "understands" what I'm getting at and 2) it can surprise me with a viewpoint I hadn't thought of before.

Using them to double-check my own ideas seems to be marginally useful, especially when there's no qualified human being whose attention I can borrow. Using them as a sort of semantic web search can sometimes get me what I'm looking for faster than Google. If anything, they're an opportunity to exercise critical thinking; if I can tell where it's getting things wrong I can be fairly confident that my own understanding of the problem/subject is pretty solid.

Vibe coding, though? I have yet to see it work out. Maybe as some starting slop so that I can get to work refactoring code (and get the ideas flowing) instead of staring at a blank file.

[-] rosco385@lemmy.wtf 17 points 2 days ago

The solutions it generated were almost write every time

Did you vibe code this post? 😂

[-] TBi@lemmy.world 5 points 1 day ago

You just didn’t use the right prompts!!!!

/s

[-] Flames5123@sh.itjust.works 13 points 2 days ago

I have a full pro model for Kiro at work. It does actually work, but we have custom MCP servers for all the internal tools, context on how to use these tools, style guidelines, etc. and then on top of that we have a lot of AI context files in the code base to help the AI understand the code base and make the correct changes.

I’ve been using it on a side project and it works if you know how to constrain it. It does get things wrong a lot. But the big thing about it is doing spec driven development where you give it a write up and it makes a requirements doc and a design doc with a lot of correctness properties in them to follow when generating and making the tasks.

I don’t believe people can vibe code unless they can actually code. It’s a whole different way of coding. I still manually edit what it does a lot.

A lot of people explain it like it's a brand new junior developer. You need to give it as much context as possible, tell it exactly what you want, tell it what you don't want, tell it why, etc., and it still may not listen exactly.

[-] zbyte64@awful.systems 14 points 2 days ago* (last edited 2 days ago)

In my experience there are three ways to be successful with this tool:

  • write something that already exists so it doesn't need to think
  • do all the thinking for it upfront (hello waterfall development)
  • work in very small iterations that don't require any leaps of logic. Don't reprompt when it gets something wrong; instead reshape the code so it can only get it right

The issue with debugging is that it doesn't actually think. LLMs pattern match to a chain of thought based on signals, not reasoning. For it to debug, you need good signals in your code that explicitly say what the code is doing, and LLMs do not write code with that level of observability by default.

Edit: one of my workflows that I had success with is as follows:

  • write a gherkin feature file describing desired functionality, maybe have the LLM create multiple scenarios after I defined one to copy from
  • tell the LLM to write tests using those feature files; it does an okay job but needs help making tests run in parallel.
  • if the feature is simple, ask the LLM to make a plan and review it
  • if the feature is complex then stub out the implementation in code and add TODOs, then direct the LLM to plan. Giving explicit goals in the code itself reduces token consumption and yields better plans
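The stub-plus-TODOs step might look like this (the feature, a CSV order importer, is made up purely to show the shape):

```python
# Hypothetical stub handed to the LLM before asking it for a plan.
# The TODOs state the goals in the code itself, so the plan is anchored
# to explicit requirements instead of a vague chat prompt.

def import_orders(csv_path):
    """Load orders from `csv_path` and return a list of dicts."""
    # TODO: parse with csv.DictReader
    # TODO: coerce the "quantity" column to int, skipping bad rows
    # TODO: raise a clear error if the file is missing
    raise NotImplementedError("stubbed for planning")
```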
[-] spartanatreyu@programming.dev 3 points 2 days ago

write something that already exists so it doesn’t need to think

If something already exists, it shouldn't need to be rewritten.

Doing otherwise is a sign that something has gone wrong.

That was the case before LLMs and it is still the case today.

[-] CCMan1701A@startrek.website 4 points 2 days ago

What they mean is rewrite something that has a LICENSE my company can't use.

[-] spartanatreyu@programming.dev 2 points 1 day ago

If the rewrite is based on something which has a license that your company can't use, then the rewrite likely can't be used either

[-] CCMan1701A@startrek.website 1 points 10 hours ago

I'm pretty sure if code is AI generated it's likely considered original, but I'm not a lawyer by any stretch.

[-] Feyd@programming.dev 174 points 3 days ago

producing subtly broken junk

The difference between you and people that say it's amazing is that you are capable of discerning this reality.

[-] OwOarchist@pawb.social 57 points 3 days ago

What I don't get, though, is how the vibe code bros can't discern this reality.

How can they sit there and not see that their vibe-coded app just doesn't do what they wanted it to do? Eventually, you've got to try actually running the app, right? And how do you keep drinking the AI kool-aid when you find out that the app doesn't work?

[-] Lumelore@lemmy.blahaj.zone 37 points 3 days ago* (last edited 3 days ago)

Vibe code bros aren't real programmers. They're business people, not computer people. Even if they have a CS degree, they only got that because they think it'll get them more money. They lack passion and they don't care about understanding anything. They probably don't even care about what they're generating beyond its potential to be used in a grift.

I graduated college not that long ago, and my CS classes had quite a few former business majors. They switched because they think it'll be more lucrative for them, but since they only care about money they didn't bother to actually learn the material, especially since they could just vibe code through everything.

[-] b_n@sh.itjust.works 13 points 3 days ago

So much this.

After working in tech companies for the last 10 years I've noticed the difference between people that "generate code" and those that engineer code.

My worry about the industry is that vibe coding gives the code generators the ability to generate even more code. The engineers (even those that use vibe tools) are not engineering as much code by volume compared to "the generators".

My hope is that this is one of those "short term gain, long term pain" things that might self correct in a couple of years 🤞.

[-] Feyd@programming.dev 37 points 3 days ago

They're the same people who copied code from Stack Overflow, whom you had to tell how to actually fix every PR. The difference is the C-suite types are backing them this time.

[-] thirstyhyena@lemmy.world 7 points 2 days ago

I recently started using Pro to debug a problem I couldn't solve. The one thing I need from it is extra insight, a second opinion (because I'm the only developer). Letting it read the whole folder helps: it identified a problem I hadn't considered because it was in a file outside of where I was looking.

[-] cecilkorik@lemmy.ca 94 points 3 days ago* (last edited 3 days ago)

No, I think you do get it. That's exactly right. Everything you described is absolutely valid.

Maybe the only piece you're missing is that "almost right, but critically broken in subtle ways" turns out to actually be more than good enough for many people and many purposes. You're describing the "success" state.

/s but also not /s because this is the unfortunate reality we live in now. We're all going to eat slop and sooner or later we're going to be forced to like it.

[-] ozymandias@sh.itjust.works 8 points 2 days ago

you need to fully be able to program to work with these things, in my experience.
you have to explain what you want very specifically, in precise programming terms.

i tried a preview of chatgpt codex and it’s working better than my free version of claude, but codex creates a whole virtual programming environment, you have to connect it to a github repository, then it spins up an instance with tools you include and actually tests the code and fixes bugs before sending it back to you.
but you still need to be able to find the bugs and fix them yourself.

oh and i think they work best with python, but i’ve also used ruby and dart and it’s decent.
it’s kinda like a power tool, it’ll definitely help you a lot to fix a car but if you can’t do it with wrenches it won’t help very much.

[-] CCMan1701A@startrek.website 2 points 2 days ago

I use AI for researching what existing software or projects exist to help me build up my system, which I then suffer through making.

[-] tristynalxander@mander.xyz 5 points 2 days ago* (last edited 1 day ago)

Also working on some 3d maths.

I've used the free versions a bit, but not really to the extent that I'd call it vibe coding. The chat bots often know where to find libraries or pre-existing functions that I don't know. They're also okay at algorithms for well-defined problems, but they often tell me to be careful not to do something I absolutely need to do, or vice versa. It's very hit and miss on debugging. It'll point out obvious stuff (typos) reliably, and it can usually do some iteration stuff, but it usually doesn't pick up on other things. Once in a rare while it will impress me by suggesting I look at a particular thing, and I think it manages this better in new chats, but most complex issues fail for it. I use it as a faster Stack Overflow, but you need to be able to work through the code yourself, understand what you're doing, and test that individual steps are doing what they need to do. The bots can't really do any sort of planning or breaking down a problem into sub-problems, and they really suck at thinking about 3d stuff.

[-] dgdft@lemmy.world 46 points 3 days ago* (last edited 3 days ago)

Vibe coding, in the sense of telling the model to make codebase changes, then directly using the output produced, is 100% marketing bullshit that does not scale beyond toy examples.

Here’s the rub: Claude is extremely useful as an advanced autocomplete, if and only if you’re guiding it architecturally through every task it runs, and you vet + revise the output yourself between iterations. You cannot effectively pilot entirely from chat in a mature codebase, and you must compile robust documentation and instructions for Claude to know how to work with your codebase.

You also must aggressively manage information in the context window yourself and keep it clean. You mentioned going in circles trying to get the robot to correct itself: huge mistake. Rewind to before the error, and give it better instructions to steer it away from the pitfall it fell into. In the same vein, you also need to reset ASAP after pushing past the 100k-token mark, because the models start melting into putty soon after (yes, even the "extended" 1M-window ones).

I’m someone who has massively benefited from using modern LLMs in my work, but I’m also a massive hater at the same time: They’re just a tool, not magic, and have to be used with great care and attention to get reasonable results. You absolutely cannot delegate your thinking to them, because it will bite you, hard and fast.

For your use case (3D math), what I recommend is decomposing your end goal into a series of pure functions that you’ll string together. Once you have that list, that’s where Claude comes in. Have it stub those functions for you, then have it implement them one at a time, reviewing the output of every one before proceeding.
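For example, the decomposition could start with a few small pure helpers like these (the names and the functions chosen are illustrative, not your actual task):

```python
import math

# Illustrative pure 3D helpers; each is a single, reviewable unit.

def dot(a, b):
    """Dot product of two 3-vectors given as (x, y, z) tuples."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    )

def normalize(v):
    """Scale a vector to unit length; reject the zero vector."""
    length = math.sqrt(dot(v, v))
    if length == 0.0:
        raise ValueError("cannot normalize the zero vector")
    return (v[0] / length, v[1] / length, v[2] / length)
```

Each piece is trivially testable in isolation, which is what makes the review-every-function step tractable.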

[-] drmoose@lemmy.world 4 points 2 days ago* (last edited 2 days ago)

It's a tool that you need to learn. Try some of the CLAUDE.md files people share online for your programming area as a starting point. You still need to review what it does, but just asking it to create tests as it creates code does a lot to improve the output.

[-] x00z@lemmy.world 11 points 3 days ago

The trick about vibe coding is that you confidently release the messed up code as something amazing by generating a professional looking readme to accompany it.

[-] wilmo@lemmy.ml 7 points 3 days ago

The more Emojis in that Readme the better!

[-] athatet@lemmy.zip 27 points 3 days ago

The reason you kept going around in circles and reintroducing bugs you already got rid of is that LLMs don't remember things. Every time you send a message, the entire conversation is sent again so it has all the parts. Eventually it runs out of room and starts cutting off the beginning of the conversation, and now the LLM can't 'remember' what you were even talking about.

[-] Blackmist@feddit.uk 10 points 3 days ago

I think it's mostly going to be useful for boilerplate generation, and effectiveness is going to vary wildly based on what language you're using. JS or Python? It'll probably do OK. Plenty of open source for it to "learn" from. Delphi? Forget it.

Brief experimentation showed it liked to bullshit if it was wrong, rather than fix things.

[-] sobchak@programming.dev 16 points 3 days ago

Key is having it write tests and having it iterate by itself, and also managing context in various ways. It only works on small projects in my experience. And it generates shit code that's not worth manually working on, so it kind of locks your project into being always dependent on AI. And since AI eventually hits a brick wall, you'll reach a point where you can't really improve the project anymore. I.e. AI tools are nearly useless.

[-] Gsus4@mander.xyz 39 points 3 days ago* (last edited 3 days ago)

Their usual (crap) defense is:

a) you're not paying enough, so of course it is crap

b) you're not prompting right, you need to use detailed, precise language...

c) that is just anecdotal evidence, you need to do an actual study, yadda yadda.

d) it will improve...

(any other anyone has noticed?)

[-] webkitten@piefed.social 10 points 3 days ago* (last edited 3 days ago)

Don't just use it as a drop-in replacement for a programmer; use it to automate menial tasks while employing trust-but-verify with every output it produces.

A well-written CLAUDE.md and a prompt that restricts it from auto-committing, auto-pushing, and auto-editing without explicit verification will keep everything in your control, while also aiding menial maintenance tasks like repetitive sections or user tests.
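A sketch of what such a file might contain (wording invented for illustration; a prompt file is guidance the model may still ignore, so enforce anything critical with actual permissions):

```markdown
# CLAUDE.md

- Never run `git commit` or `git push`; show me the diff and stop.
- Do not edit files without first listing which files you intend to
  change and getting explicit confirmation.
- Prefer small, reviewable changes over sweeping refactors.
```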

[-] Feyd@programming.dev 7 points 3 days ago

verify with every output it produces.

I agree that you can get quality output using these tools, but if you actually take the time to validate and fix everything they've output then you spend more time than if you'd just written it, rob yourself of experience, and melt glaciers for no reason in the process.

prompt to restrict it from auto committing, auto pushing, and auto editing without explicit verification

Anything in the prompt is a suggestion, not a restriction. You are correct you should restrict those actions, but it must be done outside of the chatbot layer. This is part of the problem with this stuff. People using it don't understand what it is or how it works at all and are being ridiculously irresponsible.

repetitive sections

Repetitive sections that are logic can be factored out, and should be for maintainability. For those that can't be, there are tons of methods: a list of words can be expanded into whatever repetitive boilerplate with sed, awk, a Python script, etc., and you'll know nothing was hallucinated because it was deterministic in the first place.
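For example, a throwaway script like this (the field names and template are invented for illustration) expands a word list into boilerplate with zero hallucination risk:

```python
# Deterministically expand a word list into repetitive property
# boilerplate; the output is fully determined by the input list.
FIELDS = ["width", "height", "depth"]

TEMPLATE = """\
    @property
    def {name}(self):
        return self._{name}
"""

def generate(fields):
    """Render one property block per field name."""
    return "\n".join(TEMPLATE.format(name=f) for f in fields)

print(generate(FIELDS))
```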

user tests.

Tests are just as important as the rest of the code and should be given the same amount of attention instead of being treated as fine as long as you check the box.

[-] silver@das-eck.haus 6 points 2 days ago* (last edited 2 days ago)

I think it's pretty heavily dependent on what you're trying to do. I've gotten a lot of push from higher ups at my company to use copilot wherever possible. So, I've spent a lot of time lately having copilot + opus write code for me. Most of what I'm doing is super straightforward middleware APIs or basic internal front ends. Since it has access to very similar codebases for reference, and we have custom agents that point it in the right direction, it's a pretty good experience.

However, if I ask it to do something totally new, it does okay, more like what you've experienced. It takes a lot of hand holding, but it usually gets the job done as long as you're very descriptive in your prompt. Probably not faster than an experienced developer at the moment though

[-] Prove_your_argument@piefed.social 25 points 3 days ago

Have you been coding professionally long?

I find that to use these chatbots for a task, I really need to already know what I'm doing so that I can read the output and fix the issues. This is more like having junior devs on your team and being a code reviewer than being a full-time coder. They get a lot of things wrong, but there's so much usable output that you can save a ton of time over doing everything yourself from scratch.

Just like with junior devs, you can send them back to fix what you know is wrong and give them feedback to improve various things you would prefer done another way. There's no emotions though, so you can just be blunt and concise with feedback.

[-] JubilantJaguar@lemmy.world 7 points 3 days ago

Recently I used it (some free-tier DuckAI model, not Claude) to write a Python script for pasting PNGs into PDFs (complete with Tk interface) while applying a whole bunch of custom transformations. Simple enough, but a total chore with all the back-and-forth of searching for relevant unfamiliar libraries and syntax checking and troubleshooting. Inevitably it would have taken me the whole afternoon by hand. With AI I knocked it out in 25 minutes. That was my epiphany moment.

Since then I've noticed a general problem with AI coding. It almost always introduces too much complexity, which I then have to waste time untangling (and often just understanding) before I can proceed. Whereas if I had done it "my way" from the start I might have got there earlier. But I figure this problem is kinda on me.

[-] pixxelkick@lemmy.world 23 points 3 days ago* (last edited 3 days ago)
  1. Did you have MCP tooling set up so it can get LSP feedback? This helps a lot with code quality as it'll see warnings/hints/suggestions from the LSP

  2. Unit tests. Unit tests. Unit tests. Unit tests.

I cannot stress enough how much less stupid LLMs get when they have proper, solid unit tests to run themselves and compare expected vs actual outcomes.

Instead of reasoning out "it should do this" they can just run the damn test and find out.

They'll iterate on it til it actually works, and then you can look at it and confirm if it's good or not.

I use Sonnet 4.5 / 4.6 extensively and, yes, it's prone to getting the answer almost right but wrong in the end.

But the unit tests catch this, and it corrects.

Example: I am working on my own game engine with MonoGame and it's about 95% vibe coded.

This transform math is almost 100% vibe coded: https://github.com/SteffenBlake/Atomic.Net/blob/main/MonoGame/Atomic.Net.MonoGame/Transform/TransformRegistry.cs

The reason it's solid is because of this: https://github.com/SteffenBlake/Atomic.Net/blob/main/MonoGame/Atomic.Net.MonoGame.Tests/Transform/Integrations/TransformRegistryIntegrationTests.cs

Also vibe coded and then sanity checked by me by hand to confirm the math checks out for the tests.

And yes, it caught multiple bugs, but the agent automatically could respond to that, fix the bug, rerun the tests, and iterate til everything was solid.

Test Driven Development is huge for making agents self police their own code.
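A minimal version of that loop looks something like this (rotate_z is a stand-in for illustration, not the code from the repo above):

```python
import math

# A stand-in transform plus the kind of unit tests that let an agent
# run, fail, fix, and rerun on its own.

def rotate_z(point, angle):
    """Rotate an (x, y, z) point around the Z axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)

def test_quarter_turn_maps_x_to_y():
    x, y, z = rotate_z((1.0, 0.0, 0.0), math.pi / 2)
    assert abs(x) < 1e-9 and abs(y - 1.0) < 1e-9 and abs(z) < 1e-9

def test_full_turn_is_identity():
    x, y, z = rotate_z((0.3, -0.7, 2.0), 2 * math.pi)
    assert abs(x - 0.3) < 1e-9 and abs(y + 0.7) < 1e-9 and abs(z - 2.0) < 1e-9

test_quarter_turn_maps_x_to_y()
test_full_turn_is_identity()
```

If an assertion fails, the agent gets a concrete error to iterate against instead of reasoning about what the code "should" do.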

[-] kunaltyagi@programming.dev 10 points 3 days ago

Don't jump right in to coding.

Take a feature you want, and use the plan feature to break it down. Give the plan a read. Make sure you have tests covering the files it says it'll need to touch. If not, add tests (can use LLM for that as well).

Then let the LLM work. Success rates for me are around 80% or higher for medium tasks (30 mins--1 hour for me without LLM, 15--30 mins with one, including code review)

If a task is 5 mins or so, it's usually a hit or miss (since planning would take longer). For tasks longer than 1 hour or so, it depends. Sometimes the code is full of simple idioms that the LLM can easily crush it. Other times I need to actively break it down into digestible chunks

[-] tohuwabohu@programming.dev 15 points 3 days ago

I use my own brain to sketch out what I want to work and how. Before writing any code, I use the LLM to point out gaps and how to close them. Pros and cons of certain decisions. Things you would discuss with colleagues. Then, I come up with a plan for the order I want the code to be written in and how to fragment that into smaller, easy to handle modules. I supervise and review each chunk produced, adapt code mostly manually if required, write the edge case tests - most importantly, run it - and move to the next. This is how I use it successfully and get results much faster than the traditional way.

At my job though I can witness how other people use it. I was asked to review a fully vibecoded fullstack app that contains every mistake possible. Unsanitized input. Hardcoded tokens. Hardcoded credentials. 2500+ LoC classes and functions. Business logic orchestrators masquerading as services. Full table scans on each request. Cross-tenant data leaks. Loading whole tables into memory. No test coverage for the most critical paths. Tests requiring external services to run. The list goes on. Now they want me to make it production ready in 8 weeks "because you have AI".

My point: this was an endorphin-fueled vibecoding session by someone who has no experience as a developer, asked the LLM to "just make it work", and lacks the ability to supervise the work that comes with experience. It was enough to make it run locally and pitch a "system engineered w/o any developer" to management.

Those systems need guidance just as a junior would, and I am strongly and loudly advocating to restrict access to this incredibly useful tool to people who know what they do. Nobody would allow a manager to use a laser cutter in a carpentry workshop without proper training; worst case, they burn down the whole shack.

I appreciate you having an open mind about it at least. I needed some time to adjust as well. I don't even use Opus; most of the time my workflow consistently produces usable code with Sonnet. Maybe you can try what I explained initially? Just don't try any language you're not familiar with, that will not end well.

this post was submitted on 11 Apr 2026
245 points (90.4% liked)

Programming
