
...and I still don't get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before but gave up because it didn't work well. I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved doing some 3d math. The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs, or regress with old bugs.
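To give a flavor of the kind of subtle breakage I mean (a made-up minimal example, not my actual code): the rotation formula below is correct in both versions, but one silently assumes degrees where radians are expected, so the output looks plausible at a glance while being quietly wrong.

```python
import math

def rotate_z(v, angle_rad):
    # Rotate (x, y, z) about the Z axis by angle_rad (radians). Correct.
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

def rotate_z_buggy(v, angle_deg):
    # Same formula, but the caller passes degrees and nothing converts them.
    # The result is still a plausible-looking vector, just the wrong one.
    c, s = math.cos(angle_deg), math.sin(angle_deg)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

# Rotating the X unit vector by 90 degrees should give the Y unit vector.
good = rotate_z((1.0, 0.0, 0.0), math.radians(90))
bad = rotate_z_buggy((1.0, 0.0, 0.0), 90)
```

The buggy version only reveals itself if you check the numbers, which is exactly why this class of bug survives casual review.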

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn't until I had a full night's sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part of this is that, throughout all of it, Claude was confidently responding. When I said there was a bug, it would "fix" the bug and provide a confident explanation of what was wrong... except it was clearly bullshit, because it didn't work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

[-] silver@das-eck.haus 6 points 1 week ago* (last edited 1 week ago)

I think it's pretty heavily dependent on what you're trying to do. I've gotten a lot of pressure from higher-ups at my company to use Copilot wherever possible, so I've spent a lot of time lately having Copilot + Opus write code for me. Most of what I'm doing is super straightforward middleware APIs or basic internal front ends. Since it has access to very similar codebases for reference, and we have custom agents that point it in the right direction, it's a pretty good experience.

However, if I ask it to do something totally new, it struggles, more like what you've experienced. It takes a lot of hand-holding, but it usually gets the job done as long as you're very descriptive in your prompt. Probably not faster than an experienced developer at the moment, though.

[-] OpenStars@piefed.social 6 points 1 week ago

The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs, or regress with old bugs.

This is part of your problem right there. The correct word there, instead of "write", is "right". You emotionally typed out a message, got your dopamine hit, then felt satisfied, and now the rest of us have to figure out what you meant to say.

Which is fine, but now imagine that not only can you do this, but AI can do it as well...

If you want something done correctly, then you must do it yourself.

[-] TBi@lemmy.world 5 points 1 week ago

You just didn’t use the right prompts!!!!

/s

[-] tristynalxander@mander.xyz 5 points 1 week ago* (last edited 1 week ago)

Also working on some 3d maths.

I've used the free versions a bit, but not really to the extent that I'd call it vibe coding. The chat bots often know where to find libraries or pre-existing functions that I don't know about. It's also okay at algorithms for well-defined problems, but it often warns me not to do something I absolutely need to do, or vice versa. It's very hit and miss on debugging: it'll reliably point out obvious stuff (typos), and it can usually handle some iteration work, but it tends to miss everything else. Once in a rare while it will impress me by suggesting I look at a particular thing, and I think it manages this better in fresh chats, but it fails on most complex issues. I use it as a faster Stack Overflow, but you need to be able to work through the code yourself, understand what you're doing, and test that individual steps are doing what they need to do. The bots can't really do any sort of planning or breaking a problem into sub-problems, and they really suck at thinking about 3D stuff.

[-] drmoose@lemmy.world 4 points 1 week ago* (last edited 1 week ago)

It's a tool that you need to learn. Try some of the Claude.md files people share online for your programming area as a starting point. You still need to review what it does, but just asking it to create tests as it creates code does a lot to improve the output.

[-] sirdorius@programming.dev 4 points 1 week ago

I just recently started using Claude after being very unimpressed with Copilot, but my current theory is that you should treat everything it writes like a PoC that you found in some obscure GitHub repo. Use it as a reference that you can generate quickly; take out only the good parts and adapt them to your context. It's harder to delete code than to write it, so it's easier to just take what you like from its output rather than try to clean up all the nonsense it generates.

How accurate that is, and how useful it is compared to just writing it from scratch, varies a lot based on your particular project. You still need a good understanding of the output it produces, otherwise those subtle bugs and the low quality add up. The times it's most useful are when it writes a lot of stuff that I would've written myself anyway, but I can point to some detail and say "that's wrong, I'll write it myself".

[-] colournoun@beehaw.org 4 points 1 week ago

regress with old bugs

Have it write a test suite that enforces the correct behavior, and tell it that the test suite must pass after any change. Make sure it's not cheating (e.g. just returning true) inside the test suite.
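As a sketch of what that might look like (hypothetical function names, Python), the second test is the anti-cheating guard: an implementation that hard-codes a "passing" value satisfies the unit-length check but fails the direction check.

```python
import math

def quat_normalize(q):
    # Function under test: normalize a quaternion (w, x, y, z) to unit length.
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def test_unit_length():
    # The result must have length 1.
    q = quat_normalize((1.0, 2.0, 3.0, 4.0))
    assert abs(math.sqrt(sum(c * c for c in q)) - 1.0) < 1e-9

def test_direction_preserved():
    # Guards against a cheating implementation that always returns
    # (1, 0, 0, 0): the output must stay parallel to the input.
    q = quat_normalize((0.0, 0.0, 0.0, 2.0))
    assert q == (0.0, 0.0, 0.0, 1.0)

test_unit_length()
test_direction_preserved()
```

The point is that each test pins down behavior the model cannot fake its way past, so "the suite must pass after any change" actually means something.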

[-] arthur@lemmy.zip 4 points 1 week ago

I'm using (Gemini 3.1 pro in) Gemini cli to build a complex (personal) project to explore how to use these tools. My impression is that the code produced by LLMs is disposable/throwaway. We need to babysit the model and be very hands on to get good results.

[-] shaggy@beehaw.org 3 points 1 week ago

I've had an opposite experience. Here are some guidelines I follow:

  1. Set up a foundation of rules and knowledge for Claude to fall back on. I define expectations, common definitions, behaviors, and anything else that's not project specific upfront.
  • in Claude.md I reference different domains of behavior, definitions, and rules (Claude has conventions for storing this type of stuff, so ask it to handle organizing information too)
  • create a top-level project definition: this defines what "knowledge" is. It allows you to build up what Claude knows later on as you work on your project. "Update knowledge", "add this to your knowledge", etc.
  • create a top-level rule: all information in knowledge must have one source of truth. Whenever needed, reference the original knowledge source instead of duplicating it. Now you can ask it to "review your knowledge" or "audit and flag knowledge".
  2. Explicitly explain everything and leave nothing ambiguous; explain like you're explaining the problem to a new developer who isn't familiar with the plan or codebase at all. Don't ask it to write code right away. Ask it to write a plan/spec. Review the plan, make changes, and discuss it until the plan is 100% there. This plan can include implementation details if you're ok with that, but it's not necessary (sometimes I write a separate file called implementation.md beside the plan and have the plan reference it).
  • Your role as a developer is shifting from writing code to writing specs and reviewing code.
  3. Once there is nothing left to describe, and no ambiguity in your plan, have it use your plan to write the code. This works amazingly well for me.
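A minimal sketch of what a top-level rules file following this structure might look like (the file layout and section names here are illustrative, not an official Claude convention):

```markdown
# Project rules (referenced from Claude.md)

## Knowledge
- "Knowledge" = the facts in docs/knowledge/ that I have explicitly approved.
- Every piece of knowledge has exactly one source of truth; reference it,
  never duplicate it.
- On "update knowledge" or "add this to your knowledge": edit docs/knowledge/.
- On "audit knowledge": flag entries that are duplicated or contradictory.

## Plans
- Never write code without an approved plan in docs/plans/.
- A plan may reference an implementation.md stored beside it.
- The test suite must pass after any change.
```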

A benefit to this method is that there is less wasted effort on my part. If Claude writes the code wrong, I can trace the reason for the mistake to a gap in the plan. I can then update the plan, throw away the code (if I have to), and have Claude reimplement the code again.

Rinse and Repeat.


Keep knowledge, plans, and implementation details clearly separated (you can copy your latest successful knowledge files to new projects to get started on future projects even faster).

Keep the goals of each plan as small and granular as possible (it's easier to define small plans). Knowledge, plans, and implementation details all get tracked in your repository just like your code does.


I'm a career developer, and have been writing code for over 20 years. I'm adding this bit because I understand how AI-driven development can look like a threat to developers. Over this last year, though, my thinking has shifted. I can take what I've learned through my career and use it to write successful specifications that Claude can turn into effective code. Claude may not solve all of our coding problems, but used effectively, it handles nearly everything you throw at it.

[-] lakemalcom@sh.itjust.works 3 points 1 week ago

I have yet to be able to vibe code anything relatively involved. The closest I've come is an ffmpeg wrapper script to edit out scenes from a video with a fade-in/fade-out title card. But even then, I ended up having to debug it and add my own arg support at some point because it kept screwing things up. The first draft did do something, though.

I find at this point that it's still only useful if I have a very clear goal in mind with a lot of context on the area I need to make changes to. That lets me get a more specific prompt, and then I'll still need to review the output. I have only ever gotten a successful one shot like this with tests.

[-] Alexstarfire@lemmy.world 3 points 1 week ago

I haven't used tools to make stuff from scratch but we do use them, or similar, where I work. What kind of stuff are you prompting it for? I find it works best when you give it a very small/simple task to do. And it's pretty good when it comes to making tests for existing code.

But if the main problem is it getting math equations and such wrong, I'm not sure there is much we can do to help. You'd have to provide it the equations at a minimum, and probably explain to it how they should be used.
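For instance (a made-up 3D example), spelling the formulas and conventions out explicitly removes exactly the ambiguity these models trip over, like which angle is the polar one:

```python
import math

# Give the model the exact math rather than a prose description.
# Spherical (r, theta = polar angle from +Z, phi = azimuth) -> Cartesian:
#   x = r * sin(theta) * cos(phi)
#   y = r * sin(theta) * sin(phi)
#   z = r * cos(theta)
def spherical_to_cartesian(r, theta, phi):
    return (
        r * math.sin(theta) * math.cos(phi),
        r * math.sin(theta) * math.sin(phi),
        r * math.cos(theta),
    )

# With this convention, theta = 0 is the +Z pole.
x, y, z = spherical_to_cartesian(1.0, 0.0, 0.0)
```

Stating the convention in the prompt (polar vs. azimuthal, radians vs. degrees, axis handedness) is half the battle; the formula itself is the easy part.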

But there are definitely times where it can be very frustrating. I had a similar issue yesterday to yours. It made a code change, and it wasn't working how it was supposed to. I kept telling it the problem, and it kept trying to fix it but failing. I gave up after far too long and looked at all the code changes it had made, since things were working correctly before. It had just put a change slightly too far down in a process, and all I had to do was move it up, wholesale, by about 10 lines to fix my problem. Like, how could it not figure out something that simple?

So, it's not the best at actually fixing things but does work more often than not. But if you can tell it exactly what code is causing the problem and where you want it to be instead, it'll fix it.

[-] OwOarchist@pawb.social 6 points 1 week ago

I find it works best when you give it a very small/simple task to do.

If it's a small/simple task, why do I need help at all?

[-] Alexstarfire@lemmy.world 3 points 1 week ago

Because it might be something that needs to be done in lots of places. Or it may just be something you don't want to do so you fire it off then go look at or work on something else.

Now, that might be useless for your work flow, but not every tool is useful in every circumstance.

And you can still use it for larger tasks, but often I need to come behind it and clean up its work. Just like you would an intern or junior dev.

[-] saplyng@lemmy.world 3 points 1 week ago

I've also started using it recently and I'm not sure if the way I'm doing it is particularly "right".

I don't have a lot of knowledge of practical coding practices because in school we literally had a new project every two weeks so I never learned things like you need unit tests or proper architectural design. It was mostly making sure whatever project there was that week ran and didn't crash.

So now I'm working as a sysadmin doing the random junk a sysadmin gets pushed on them. What I've been doing is telling it my project plan; Claude will write up something that looks better, and I continue a back-and-forth about architecture and libraries, asking whether it thinks any particular idea is good or bad, until I get to a place I'm happy with.

Then, because I want to learn Rust and implement it myself, I'm having Claude basically guide me through creating it like a teacher would, taking on a very Socratic tone ("now that we've done this, what do you think is the next step?", "we have a list of CSVs, so what do you need to do to read their values?"). And I've been moving forward bit by bit like this.

I don't know if it's a particularly good way, honestly, I'd love feedback from anyone who's done something similar or whatever!

[-] ZoteTheMighty@lemmy.zip 2 points 1 week ago

That's been my experience. It's always subtly wrong, its solutions are hard to maintain, and if you spend too much time with it, it starts forgetting what you said earlier. Managers don't understand the distinction; they already can't code well, and they only test it on small problems where it's not context-limited, so they're amazed.

[-] Bonje@lemmy.world 2 points 1 week ago

Our work started giving us Claude access. Plugging Sonnet 4.6 into opencode, I had it do some Terragrunt code. It was mostly correct. Highly documented languages seem to be where it's best. The modules I had it write cost 4 bucks of tokens total.

It just gave me an insane ick using it, though. I might resign myself to using it anyway because of our backlog and burnout.

[-] ReallyCoolDude@lemmy.ml 2 points 1 week ago

I read a lot of these posts that sadly leave out the basic parts: what were your prompts? What does 'vibe coding' mean in this context? Did you create an initial setup and slowly build up? Did you leave everything to the agent's understanding and just push approve or reject? There are multiple levels of quality that depend on the input. Did you run into context rot? Does '3d math' mean vector math, matrices, or what?

Given that Claude has had serious problems since March at least, the way you use it is paramount. In our team we all use Claude with Copilot (sadly, that is a business directive), and while it's exceptional at finding small relationships between components and microservices, we had to build a long list of skills just to make it barely usable in a 'Star Trek' way. The bottom line is that you must be extremely precise when asking. Prompt modeling counts a lot. Context building as well. For now, unit tests and data/mock refactors are working extremely well for me, when I define the test cases. My agents have gotten to a point where I can safely make small property additions with refactors across multiple repositories at once (i.e. I change the contract on microservice A, and microservices B, C, and D are automatically updated). This last part had to be built, though, with memory, engrams, and some fine tuning.

It is not always shit: if it were, nobody would use it. But it is also not this revolutionary technology that will make humans obsolete (as they are selling it).

[-] CCMan1701A@startrek.website 2 points 1 week ago

I use AI for researching what existing software or projects exist to help me build up my system, which I then suffer through making.

[-] AlphaOmega@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

This sounds on par with all the AI I have been dealing with. I find it works best if you give it a lot of rules, then treat it like a 12-year-old and expect wild mistakes on anything more complicated than a simple calculator. I work primarily with Gemini, having it build simple HTML/CSS, and it's infuriating how many times I have told it to use `&amp;` instead of `&`.
Now every time it does anything, it's always telling me how it included the correct ampersand. It can't tell me why it screwed up the five times prior; it just makes up some BS and apologizes profusely.
The more rules you give it, even if it ignores them sometimes, the better.
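For what it's worth, this particular rule is trivial to enforce mechanically instead of via prompt, since HTML escaping is a solved problem in every language; a Python sketch:

```python
import html

# Escape reserved characters instead of trusting the model to remember:
# '&' -> '&amp;', '<' -> '&lt;', '>' -> '&gt;'.
raw = "Fish & Chips <b>menu</b>"
escaped = html.escape(raw)
# escaped == "Fish &amp; Chips &lt;b&gt;menu&lt;/b&gt;"
```

Running generated output through a post-processing step like this catches the mistake every time, regardless of whether the model followed the rule.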

this post was submitted on 11 Apr 2026
246 points (90.5% liked)

Programming