...and I still don't get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before and gave up because it didn't work well; I thought that maybe this time it would be far enough along to be useful.
The task was relatively simple and involved some 3D math. The solutions it generated were almost right every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or reintroduce old ones.
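To give a sense of the kind of subtle breakage I mean, here's a contrived sketch (not my actual code, and the function names are just placeholders): a pivot rotation where the transforms get composed in the wrong order looks completely plausible and works fine at the origin, then quietly falls apart everywhere else.

    import numpy as np

    def rotation_z(angle_rad):
        """4x4 homogeneous rotation about the Z axis."""
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        return np.array([
            [c,  -s,  0.0, 0.0],
            [s,   c,  0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0],
        ])

    def translation(dx, dy, dz):
        """4x4 homogeneous translation."""
        m = np.eye(4)
        m[:3, 3] = [dx, dy, dz]
        return m

    def rotate_about_pivot(point, pivot, angle_rad):
        """Rotate `point` about `pivot` around the Z axis.

        Correct order: move the pivot to the origin, rotate, move back.
        The subtly-broken version swaps the outer matrices:
            translation(*(-pivot)) @ rotation_z(angle_rad) @ translation(*pivot)
        which agrees with the correct result only when pivot is the origin.
        """
        pivot = np.asarray(pivot, dtype=float)
        m = translation(*pivot) @ rotation_z(angle_rad) @ translation(*(-pivot))
        p = np.append(np.asarray(point, dtype=float), 1.0)  # homogeneous coords
        return (m @ p)[:3]

Both versions pass a quick eyeball test, which is exactly why it took so long to notice.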
I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn't until I had a full night's sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.
The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would "fix" the bug and provide a confident explanation of what was wrong... except it was clearly bullshit because it didn't work.
I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?
For reference, I used Opus 4.6 Extended.
I haven't used these tools to make stuff from scratch, but we do use them, or similar ones, where I work. What kind of stuff are you prompting it for? I find it works best when you give it a very small/simple task to do. And it's pretty good when it comes to writing tests for existing code.
But if the main problem is getting math equations and such wrong, I'm not sure there is much we can do to help. You'd have to provide it the equations at a minimum, and probably explain to it how they should be used.
But there are definitely times where it can be very frustrating. I had a similar issue to yours yesterday. It made a code change and it wasn't working how it was supposed to. I kept telling it the problem and it kept trying to fix it but failing. I gave up after far too long and looked at all the code changes it had made, since everything was working correctly before. It had simply put a change slightly too far down in a process, and all I had to do was move it up, wholesale, by about 10 lines to fix my problem. Like, how could it not figure out something that simple?
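To show the shape of it (a contrived example, not the real code, and the names are made up), imagine the new statement landing after the value it was supposed to affect had already been computed:

    from dataclasses import dataclass, field

    @dataclass
    class Item:
        price: float

    @dataclass
    class Order:
        items: list = field(default_factory=list)
        discount: float = 0.10  # 10% off everything

        def apply_discount(self):
            for item in self.items:
                item.price *= (1 - self.discount)

    def total_as_generated(order: Order) -> float:
        # The change landed here, after the sum, so the discount
        # never shows up in what gets returned.
        total = sum(item.price for item in order.items)
        order.apply_discount()
        return total

    def total_fixed(order: Order) -> float:
        # Same two statements, with the discount moved above the sum.
        order.apply_discount()
        return sum(item.price for item in order.items)

The fix was literally just moving the new lines earlier, and it still couldn't see it.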
So, it's not the best at actually fixing things but does work more often than not. But if you can tell it exactly what code is causing the problem and where you want it to be instead, it'll fix it.
If it's a small/simple task, why do I need help at all?
Because it might be something that needs to be done in lots of places. Or it may just be something you don't want to do so you fire it off then go look at or work on something else.
Now, that might be useless for your workflow, but not every tool is useful in every circumstance.
And you can still use it for larger tasks, but often I need to come behind it and clean up its work. Just like you would an intern or junior dev.
Because the simple tasks are boring as fuck?
If an LLM can generate 90% of an HTTP API correctly, why would you want to do it manually?
Because figuring out which 10% it did wrong and then fixing that will take longer and be more effort than just doing it from scratch myself.
You must type really fast then.
I personally read code a fuckton faster than I write it. And tests are for determining correctness, reading is just a part of it.