Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

[-] bownage@beehaw.org 11 points 1 year ago

By now, most of us have heard about the survey that asked AI researchers and developers to estimate the probability that advanced AI systems will cause “human extinction or similarly permanent and severe disempowerment of the human species”. Chillingly, the median response was that there was a 10% chance.

How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides – except that these upsides are, for the most part, hallucinatory.

Ummm how about the obvious answer: most AI researchers don't think they're the ones working on tools that carry existential risks? Good luck overthrowing human governance using ChatGPT.

[-] alexdoom@beehaw.org 12 points 1 year ago

Fossil fuels carry a much higher chance of causing human extinction, yet the news cycle is saturated with fears that a predictive language model is going to make calculators crave human flesh. Wtf is happening

[-] LoamImprovement@beehaw.org 6 points 1 year ago

Capitalism. Be afraid of this thing, not of that thing. That thing makes people lots of money.

[-] exohuman@kbin.social 6 points 1 year ago

I agree that climate change should be our main concern. The real existential risk of AI is that it will leave millions of people out of work or underemployed, greatly expanding the already huge lower class. With that many people unable to take care of themselves and their families, conditions will be ripe for all of the bad parts of humanity to take over unless we have a major shift away from the current model of capitalism. AI would be the initial spark that starts this, but it will be human behavior that dooms (or elevates) humans as a result.

The AI apocalypse won’t look like Terminator, it will look like the collapse of an empire and it will happen everywhere that there isn’t sufficient social and political change all at once.

[-] alexdoom@beehaw.org 4 points 1 year ago

I don't disagree with you, but this is a big issue with technological advancements in general. Whether workers are replaced by AI or by automated factories, the effects are the same. We don't need to make a boogeyman out of AI to drive policy changes that protect the majority of the population. I'm just frustrated with AI scares dominating the news cycle while completely missing the bigger picture.

[-] cnnrduncan@beehaw.org 2 points 1 year ago

Yeah - green energy puts coal miners and oil drillers out of work (as the right likes to constantly remind us) but that doesn't make green energy evil or not worth pursuing, it just means that we need stronger social programs. Same with AI in my opinion - the potential benefits far outweigh the harm if we actually adequately support those whose jobs are replaced by new tech.

[-] Phantom_Engineer@lemmy.ml 1 point 1 year ago

That's only a problem because of our current economic system. The AI isn't the problem, the society that fails to adapt is.

[-] fsniper@kbin.social 4 points 1 year ago

I think the results are as "high" as 10 percent because the researchers do not want to downplay how "intelligent" their new technology is. But it's not that intelligent, as we and they all know. There is currently zero chance any "AI" can cause this kind of event.

[-] aksdb@feddit.de 1 point 1 year ago

Not directly, no. But the tools we already have for imitating voices and faces in video streams in real time can certainly be used by bad actors to manipulate elections, or worse. Things like that, especially if further refined, could figuratively pour oil onto already burning political fires.

[-] Spzi@lemm.ee 1 point 1 year ago

the results are as “high” as 10 percent because the researchers do not want to downplay how “intelligent” their new technology is. But it's not that intelligent, as we and they all know. There is currently zero chance any “AI” can cause this kind of event.

Yes, the current state is not that intelligent. But that's also not what the experts' estimates are about.

The estimates and worries concern a potential future, if we keep improving AI, which we do.

This is similar to being in the 1990s and saying climate change is of no concern, because the current CO2 levels are no big deal. Yeah right, but they won't stay at that level, and then they can very well become a threat.

this post was submitted on 26 Jun 2023
48 points (100.0% liked)

Technology
