Obviously fake. Still funny though.
Are you saying the comment is fake, or the sentiment? This was actually posted to reddit: https://archive.is/U9ntj
Fake in that it's almost assuredly written and posted by someone who is actively anti-vibe coding and this is a troll on the true believers.
I don’t really care about vibe coders but as a dev with just under 2 decades in the field:
- Your vibe coding shit will not go to prod until humans fully review it
- You better review it yourself first before offloading that massive mental drain to someone else (which means you still need to have some semblance of programming skills). Don’t open a PR with 250 files in it and then tell someone else to validate it.
- Use more context. Don’t give it vague ass prompts.
- Don’t use auto-accept. That’s just lazy asshole shit.
I can’t stress this enough: if you give me a PR with tons of new files and expect me to review it when you didn’t even review it yourself, I will 100% reject it and make you do it. If it’s all dumped into a single commit, I will whip your computer into the nearest body of water and tell you to go fish it out.
I don’t care what AI tool wrote your code. You’re still responsible for it and I will blame you.
When I see a sloppy PR I remind people “AI didn’t write that. You wrote it. Your name is on the git blame.”
Love it, I have a vibe coding colleague I will use this with.
Vibe coding tools are very useful when you want to make a tech movie but the hollywood command just does not cut it.
Vibe coding is useful for super basic bash scripting and that's about it. Even that it will mess up, but usually in a super easily fixed way.
I don't think it has much to do with how "complex or not" it is, but rather how common it is.
It can completely fail on very simple things that are just a bit obscure, so it has too little training data.
And it can do very complex things if there's enough training data on those things.
I've also found it useful for simple Python scripts when I need to analyze data. I don't use pandas/scipy/numpy/matplotlib enough to remember the syntax and library functions. By vibe coding it, I can have a script in minutes that reads a CSV with weird timestamps, scales some of the channels, filters out noise or detrends, performs a Fourier transform, and does a curve fit against a model.
But then obviously I know every intermediate step I want to do.
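Just as a rough sketch of the kind of throwaway script I mean — the file name, column names, timestamp format, and the damped-sine model here are all made up for the example:

```python
# Rough sketch only -- "measurement.csv", the column names, the timestamp
# format, and the damped-sine model are all made up for this example.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
from scipy.signal import detrend

# Read a CSV whose timestamp column uses an odd format.
df = pd.read_csv("measurement.csv")
df["time"] = pd.to_datetime(df["time"], format="%d.%m.%Y %H:%M:%S.%f")
t = (df["time"] - df["time"].iloc[0]).dt.total_seconds().to_numpy()

# Scale one of the channels (say, mV -> V) and remove the slow drift.
signal = detrend(df["ch1"].to_numpy() * 0.001)

# Fourier transform to find the dominant frequency.
dt = np.median(np.diff(t))
freqs = np.fft.rfftfreq(len(signal), d=dt)
spectrum = np.abs(np.fft.rfft(signal))
f0 = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Fit a simple damped-sine model to the signal.
def model(t, amp, freq, tau, phase):
    return amp * np.exp(-t / tau) * np.sin(2 * np.pi * freq * t + phase)

params, _ = curve_fit(model, t, signal, p0=[signal.std(), f0, t[-1] / 2, 0.0])
print(params)
```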
Making some simple Excel macros when I want to be lazy is about the most I've trusted it with. That it manages to do without fucking up and taking more time than just doing it myself.
No way. Youtube ad told me a different story the other day. Could that be a... lie? (shocked_face.jpg)
AI used extremely sparingly is sometimes helpful to an experienced coder. "Multivac, generate a set of unit tests for this function." Okay, some of these are dumb, but it's easier getting started on this mess than just looking at a blank buffer. Helps get the juices flowing a bit. But man, you try to actually do anything with it, and suddenly you're lost chasing a will-o'-wisp.
Oh man, I love ChatGPT for one thing in particular: "Hey chatbot, is there some library or standard library function for that very specific, yet still kinda generic thing I'm trying to do, so that I don't have to write it myself?"
It does frequently give a helpful answer. That is, it doesn't give me working code, but a helpful pointer to some manual where I can find good instructions for how to use the thing to solve my problem.
I don't want to dismiss your point overall, but I see that example so often and it irks me so much.
Unit tests are your specification. So, 1) ideally you should write the specification before you implement the functionality. But also, 2) this is the one part where you really should be putting in your critical thinking to work out what the code needs to be doing.
An AI chatbot or autocomplete can aid you in putting down some of the boilerplate to have the specification automatically checked against the implementation. Or you could try to formulate the specification in plaintext and have an AI translate it into code. But an AI with neither knowledge of the context nor critical thinking cannot write the specification for you.
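As a concrete, entirely made-up illustration: if the plaintext spec is "parse_timestamp accepts both an ISO-style string and epoch seconds, and always returns a timezone-aware UTC datetime", then the unit test is just that specification written down and checked against the implementation (mymodule and parse_timestamp are hypothetical here):

```python
# Hypothetical example -- "mymodule" and "parse_timestamp" don't exist;
# they stand in for whatever you're actually specifying.
from datetime import timezone

from mymodule import parse_timestamp


def test_parse_timestamp_accepts_iso_string_and_epoch_seconds():
    from_iso = parse_timestamp("2024-01-02 03:04:00")
    from_epoch = parse_timestamp(1704164640)
    # Both inputs describe the same instant, so both results must agree
    # and both must be timezone-aware UTC datetimes.
    assert from_iso.tzinfo == timezone.utc
    assert from_epoch.tzinfo == timezone.utc
    assert from_iso == from_epoch
```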
Tests are probably both the best and worst things to use LLMs for.
They're the best because of all the boilerplate. Unit tests tend to have so much of that, setting things up and tearing it down. You want that to be as consistent as possible so that someone looking at it immediately understands what they're seeing.
OTOH, tests are also where you figure out how to attack your code from multiple angles. You really need to understand your code to think of all the ways it could fail. LLMs don't understand anything, so I'd never trust one to come up with a good set of things to test.
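To illustrate the boilerplate half of that with a made-up fixture and schema (nothing from a real project): this is the part I'd happily let an LLM churn out, while the actual attack angles in the test bodies are the part I wouldn't trust it with.

```python
# Made-up example of the setup/teardown boilerplate that LLMs are good at.
import sqlite3

import pytest


@pytest.fixture
def db():
    # Set up a throwaway in-memory database for each test...
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    yield conn
    # ...and tear it down again afterwards.
    conn.close()


def test_insert_user(db):
    db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1
```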
Unit tests become the specification once they are written. ChatGPT can easily write unit tests from whatever your specification is before that -- such as documentation, a bunch of comments and stubs, or even a first draft of the function itself, given enough context from the rest of the project.
Unit tests are too clunky to think in. You don't prototype the specification by implementing unit tests. And you really only lay down a few critical paths even if you "write the tests first", because code paths always come up during implementation that demand more test coverage anyway.
My entire IT career has been funded by morons like this. This is just the latest moronic idea that is going to pay my bills. Cleaning up after vibe coders has guaranteed my income until I die. You see, posts like this focus on the code that is broken and requires another dev to fix it enough to get it going. There is a long road from "finally working" to "production ready" to "optimized", and we get paid along every inch of the way.
The AI Fix podcast had a piece about someone who let an AI agent do the coding for them and ended up with a disaster because they gave it access to the production database.
Very funny.
https://theaifix.show/61-replit-panics-deletes-1m-project-ai-gets-gold-at-math-olympiad/
A buddy of mine is into vibe coding, but he actually does know how to code as well. He will iterate on the code with the LLM until he thinks it will work. I can believe it saves time, but you still have to know what you are doing.
The most amazing thing about vibe coding is that in my 20 odd years of professional programming the thing I’ve had to beg and plead for the most was code reviews.
Everyone loves writing code; no one, it seems, much enjoys reading other people's code.
Somehow, though, vibe coding (and other LLM-guided coding) has made people go "I'll skip the part where I write code and let an LLM generate a bunch of code that I'll review".
Either people have fundamentally changed (unlikely), or there are just a lot more people who are willing to skim over a pile of autogenerated code, go "yea, I'm sure it's fine", and open a PR.
Swear to god the vibe coding movement is going to create so many new jobs in the ilk of "I hired some dude to write my startup thing and now it's gone all to shit, can you make it actually good and scalable and responsive please?"
"What do you do? "Oh, I work in AI Disaster Response"
imo paying devs to review vibe coded bile would not work either. At best, the dev themselves should do the vibe coding.
Someone who has no clue whatsoever in terms of programming cannot give it the right prompt.