Proton is vibe coding some of its apps.
(lemmy.ml)
The anti-AI circlejerk, even here on Lemmy, is now about as bad as the pro-AI circlejerk in the general public. No room for nuance or rational thinking, just dunking on anyone who says anything remotely positive about AI, like when I said I like Copilot's autocomplete feature.
I'm a pretty big generative AI hater when it comes to art and writing. I don't think generative AI can make meaningful art because it cannot come up with new concepts. Art is something that AI should be freeing up time in our lives for us to do. But that's not how it's shaping up.
However, AI is very helpful for understanding codebases and doing things like autocompletion. This is because code is less expressive than human language and it's easier for AI to approximate what is necessary.
You're not alone. Nuance is just harder to convey; it takes more effort to post something nuanced. And so people do it less, myself included. But I think, truthfully, that many people are not stuck in either circlejerk. It's lovely to see people in this thread who are annoyed by both.
I'm personally scared of AI (not angry or hateful, actually scared by just how fast it's advancing) and that definitely clouds my judgement of it and makes nuance difficult.
It's like a deal with the devil. You see all these amazing benefits but you just know you're the one being taken advantage of, because, like the devil, AI corporations by definition only think about how you can be of use to them.
Natural language processing makes TTS way more usable for people with reading disabilities. But there are absolutely no good uses of AI.
What about cancer research? AI is bad when it's being used to find cures?
When people just say "AI" nowadays, they're usually referring to generative AI.
There are a ton of small, single purpose neural networks that work really well, but the "general purpose" AI paradigm has wiped those out in the public consciousness. Natural language processing and modern natural sounding text to speech are by definition AI as they use neural networks, but they're not the same as ChatGPT to the point that a lot of people don't even consider them AI.
Also AI is really good at computing protein shapes. Not in a "ChatGPT is good enough that it's not worth hiring actual writers to do it better" way, in a "this is both faster and more accurate than any other protein folding algorithm we had" way.
Yeah, people don't realize how huge this kind of thing is. We've been trying for YEARS to figure out how to correctly model protein structures of novel proteins.
Now, people have trained a network that can do it and, using the same methods that power image generation (diffusion models), they can also describe an arbitrary set of protein properties/shapes and the AI will generate the string of amino acids most likely to produce it.
The LLMs and diffusion models that generate images are neat little tech toys that demonstrate a concept. The real breakthroughs are not as flashy and immediately obvious.
For example, we're starting to see AI-driven robotics, where a network has been trained to operate a specific robot body in dynamic situations. Manually programming robotics is HARD and takes a lot of engineers and math. Training a neural network to operate a robot is, comparatively, a simple task which can be done without the need for experts (once pretrained foundational models exist).