System76 on Age Verification Laws
(blog.system76.com)
Posts from the RSS feed of Hacker News.
The feed sometimes contains ads and posts that have been removed by the moderation team at HN.
You sound like a fool still using “vibe coding”
Maybe give it another shot. It’s better now. Or, yell at more clouds.
I tried it recently, it still doesn't work.
You will get more logic errors than you could possibly imagine. Even when you've fixed the obvious errors and it seems to run, you still have an enormous number of edge cases not handled that make your project a buggy, insecure mess.
It's the Dunning–Kruger effect again. If you can't do it yourself, you don't notice that AI can't do it either.
I can do it myself, and so I know how to tell it to do it for me. I know to set up tests to prove it all works like it should, and if the AI breaks something it will be apparent. This isn’t rocket science.
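The workflow described above, pinning down expected behavior with tests so that any AI-introduced regression becomes apparent, could be sketched as follows. The function and test cases here are hypothetical illustrations, not from the thread:

```python
# Minimal sketch: a human-written test suite acts as a safety net
# around AI-generated or AI-modified code. The function below is a
# hypothetical example of a small unit an AI might write or edit.

def parse_price(text: str) -> float:
    """Parse a price string like '$1,234.56' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

# Tests written by the human reviewer. If an AI edit later breaks
# any of these behaviors, the failure is immediately visible.
assert parse_price("$1,234.56") == 1234.56
assert parse_price("12") == 12.0
assert parse_price("  $0.99 ") == 0.99
```

In practice these assertions would live in a test runner such as pytest, so the whole suite runs automatically after every change.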
People just don’t know how to work with AI. Mostly because they “tried it recently” and failed miserably because the user didn’t bother putting in the effort. The AI is only as good as the prompts it gets from the human.
Once you understand it’s there to assist you and not do your entire job, you’ll get good at it. But getting good takes time. The first big project I made with AI doing most of the code was a big sloppy mess. That’s because I did it wrong, not the AI.
To effectively use AI, you have to be a good project manager AND a good developer. Those two skills are learned—with practice.
No, it isn't usually apparent. Most bugs from AI are subtle edge cases that will only show up later. One of those "later" instances might be when you try to run it on a different computer, or when another user presses buttons in an order you didn't try, or when you try to add a new feature, or worse, when someone tries to hack your code and finds all the common security vulnerabilities because that's what the AI has seen most often and copied.
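As an illustration of the kind of subtle bug being described (this example is hypothetical, not from the thread): naive code can pass a happy-path test and still mishandle an edge case nobody tried, such as a quoted CSV field containing a comma.

```python
import csv
import io

def naive_split(line: str) -> list[str]:
    # Looks correct for simple input, and a quick test would pass.
    return line.split(",")

def proper_split(line: str) -> list[str]:
    # Uses the stdlib csv parser, which understands quoting.
    return next(csv.reader(io.StringIO(line)))

# Both agree on the obvious case:
assert naive_split("a,b,c") == proper_split("a,b,c") == ["a", "b", "c"]

# The edge case: a quoted field containing a comma.
line = '"Doe, Jane",42'
assert naive_split(line) == ['"Doe', ' Jane"', '42']  # silently wrong
assert proper_split(line) == ['Doe, Jane', '42']      # correct
```

The naive version "seems to run" until real-world input arrives, which is exactly the failure mode at issue.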
AI can't think – it can only give results it has seen before or some combination of them. We already know AI can't produce an image of a full glass of wine, or tell you what year is next year on Jan 1st.
The technology can't work. No matter the input prompt, it simply isn't capable of producing code free of bugs and unhandled edge cases. That requires thinking and logical deduction, and the examples in the previous paragraph show it can do neither.
My personal experience disagrees with you. I do not have the problems you seem to have or think exist. PEBCAK.