I use Claude for SQL and PowerQuery whenever I brain fart.
There's more usefulness in reading its explanation than its code, though. It's like bouncing ideas off someone, except you're the one who can actually code them. I never bother copying its code unless it's a really basic request that's quicker to prompt than to write myself.
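For concreteness, here's a made-up example of the sort of brain-fart query I mean (table and column names invented, run through sqlite3 so it's self-contained): "keep only the latest row per customer", where it's easy to blank on the ROW_NUMBER() OVER (PARTITION BY ...) pattern.

```python
import sqlite3

# Invented sample data -- just to demo the window-function pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, placed_at TEXT, total REAL);
    INSERT INTO orders VALUES
        ('alice', '2024-01-01', 10.0),
        ('alice', '2024-02-01', 25.0),
        ('bob',   '2024-01-15', 7.5);
""")

# The bit you forget under pressure: rank rows within each customer,
# newest first, then keep rank 1.
rows = conn.execute("""
    SELECT customer, placed_at, total FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY customer ORDER BY placed_at DESC
        ) AS rn
        FROM orders
    )
    WHERE rn = 1
    ORDER BY customer
""").fetchall()
print(rows)  # latest order per customer
```

The point isn't the snippet itself, it's that the model's explanation of *why* the subquery and the rank filter are separated is the part worth reading.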
Bad quality at mass quantity is obviously much quicker for LLMs, and people who don't understand the tech behind AI don't see that this is what's actually going on, so to them it's "magic". A GPT is fundamentally quite simple and produces simple results full of potential issues; combine that with poor training quality and, well, gross. It can only do minimal checking iterations, and how would it even do them when its knowledge base is more bullshit than quality?
Truth is, it will be years before AI can reliably code. Training for that requires building a large knowledge base of refined, working solutions covering many scenarios, with explanations, to train on. It would take even longer for AI to self-learn these without significant input from the trainer.
Right now you can prompt the same thing six times and hope it manages a valid solution once. Or just code it yourself.
Yeah, get too far in or give it too much to start with, and it can't handle it. You can see this with image generators: "Where's the lollipop in its hand? Try again... Okay, now you forgot about the top hat."
You have to treat them like eager interns who will do anything to please rather than admit the task is too complex or that they've forgotten what they were meant to do.