I'm losing faith
(lemmy.ml)
Privacy has become a very important issue in modern society. With companies and governments constantly abusing their power, more and more people are waking up to the importance of digital privacy.
In this community everyone is welcome to post links and discuss topics related to privacy.
[Matrix/Element] (dead)
Many thanks to @gary_host_laptop for the logo design :)
For people to try and effectively respond to those issues they must be able to communicate privately with no fear of retribution.
It also requires private and secure communications to be a normal thing and not an indicator that the parties involved are criminals, terrorists or pedos.
AI, by the way, is going to be just a little fart compared to the other issues you described, as well as the lack of privacy.
Also a problem related to privacy is our growing dependence on private corporations just to be a 'normal' person ("why don't you have a Facebook/Google/Amazon/etc. account?")
AI could kill everyone, though it most likely won't IMO. 10% chance I think. That's still very bad though. Despite the fact that Ilya Sutskever, Geoff Hinton, MIRI, heck even Elon Musk have expressed varying degrees of concern about this, it seems the risk here is largely dismissed because it sounds too much like science fiction. If only science fiction writers had avoided the topic!
This is bullshit. AI will be hunting down survivors? Thus more lethal than nuclear war? ChatGPT-4 will be better at it?
Most of these concerns seem to be about AGI, which we are nowhere close to having and have no clear path to. Our "AI"s not only do not understand causality but lack the ability to perform arithmetic. Nor do they run anything that could kill humans. Except if you consider Tesla's FSD an AI system, but Musk assured us back in 2017 it would be safe...
Where did you get the idea that GPT-4 is capable of this? These are concerns for 10+ years from now, assuming AI makes the same strides it has in the past 10 years, which is not guaranteed at all.
I think there are probably 3-5 big leaps still required, on the order of the invention of transformer models, deep learning, etc., before we have superintelligence.
Btw, humans are also bad at arithmetic; that's why we have calculators. If you don't understand that LLMs use RAG, LangChain (or similar), and so on, you clearly don't understand the scope of the problem. A superintelligence doesn't need access to anything in particular except, say, email or chat to destroy the world.
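To make the "calculators for LLMs" point concrete: here is a minimal toy sketch (not real LangChain code; all names are made up for illustration) of the tool-routing pattern the comment refers to. The model itself is unreliable at arithmetic, so a wrapper detects math expressions and hands them to an exact evaluator instead of letting the model guess.

```python
import ast
import operator

# Whitelist of arithmetic operators the toy calculator tool supports.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str):
    """Safely evaluate a basic arithmetic expression via the AST
    (no eval(), so arbitrary code can't sneak in)."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(query: str) -> str:
    # Stand-in for a tool-using agent loop: if the query parses as
    # arithmetic, route it to the calculator tool; otherwise fall
    # through to the (hypothetical, stubbed-out) language model.
    expr = query.strip().rstrip("?")
    try:
        return str(calc(expr))
    except (ValueError, SyntaxError):
        return "LLM free-text answer (stubbed)"

print(answer("1234 * 5678"))  # exact: 7006652, no model guessing involved
```

Real agent frameworks do the routing with the model itself choosing among registered tools, but the division of labor is the same: fuzzy language handling in the model, exact computation in the tool.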