Proton is vibe coding some of its apps.
(lemmy.ml)
I'd say this is mostly because you can immediately test the AI's results and rule out anything it got wrong, and whatever errors you generate can then be fed back into the AI so it can refine what it's already written. You never have to just trust the AI (assuming you yourself still know how to code) like you have to when using it for research or for solving problems where you don't get immediate feedback.
Whether this means programming is actually a viable niche for generative AI or whether this speaks more to the limitations and inherent unreliability of the "knowledge" the AI has, I can't say.
Also, I don't know if it's just me, but I'm more scared by how fast AI is advancing than I am excited about what it can do for me. That definitely clouds my perception when something is AI generated and makes me a lot more dismissive of any real benefits AI might have brought.
Immediate testing will show you whether the AI has made any syntax or runtime errors. It does not tell you about any logic errors.
Logic errors are already the most dangerous kind of programming error, and using AI just makes them even harder to find.
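A contrived Python example of what that means (my own sketch, not anything from the article): the function below parses and runs without complaint, yet it silently gives the wrong answer at the boundary, which is exactly the kind of bug that only careful review or a proper test will catch.

```python
# Contrived example: valid syntax, no runtime error, but a logic bug.
def is_adult(age: int) -> bool:
    # Intended rule: 18 and older count as adults.
    # The strict comparison silently excludes people who are exactly 18.
    return age > 18

print(is_adult(18))  # False, but it should be True
```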
Using AI will only help you with syntax (which any good IDE should already handle) and with finding information faster than a search engine (while leaving out important context). AI is not useful for programming anything that will be made public.
The danger of vibe coding is that the people doing it either don't have the skills to review the AI's changes or don't think it's important to.
If you work with an AI and, instead of spending time typing through boring tasks, spend that time reading through the changes, then there isn't much of an issue. A skilled software engineer is capable of noticing logic errors in code they read.
If the generated code is so unnecessarily complex that you can't verify its logic, then scrap it.
I don't use it in that way (I only use JetBrains' line completion AI), but I don't see a problem if it's used that way.
However, if I review code that was partly generated by AI and notice that the dev let shitty code through without reviewing it, my review will be salty.
Yeah, you get immediate feedback, vs a scenario where you have to manually check the “facts” it provides in order to ensure it’s not hallucinating. I’ve had Copilot straight up hallucinate functions on me and I knew that they were bullshit instantly.
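For illustration (a made-up Python case, not the actual Copilot output): a hallucinated method blows up the moment you run it, which is why it's so easy to rule out.

```python
text = "hello"

# Hallucinated API: Python strings have no reverse() method, so this
# line would raise AttributeError the instant it runs.
# reversed_text = text.reverse()

# What actually exists:
reversed_text = text[::-1]
print(reversed_text)  # "olleh"
```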
I iterate with it a ton and feed it back errors it makes, or things like type mismatches. It fixes them instantly and understands the issue almost every single time.
That’s the trick. Iterate often and always give it new instructions if it does something stupid. Basically be as verbose as needed and give it tons of context, desired standards, pitfalls to avoid, whatever. It helps a ton.
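A minimal sketch of that loop, assuming a Python codebase checked with mypy: the checker's complaint about a type mismatch is exactly the kind of error text you can paste straight back to the model.

```python
# Hypothetical assistant output with a type mismatch:
def total_price(prices: list[float]) -> float:
    # mypy reports: Incompatible return value type (got "str", expected "float")
    return ", ".join(str(p) for p in prices)

# After feeding that error back, the obvious correction comes out:
def total_price_fixed(prices: list[float]) -> float:
    return sum(prices)

print(total_price_fixed([1.0, 2.5]))  # 3.5
```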