"Autocomplete" by Zach Weinersmith (files.mastodon.social)

@ZachWeinersmith@mastodon.social

[-] FaceDeer@kbin.social 47 points 1 year ago

I fret that in the future - possibly not even the far future - the phrase "stochastic p*rrot" will be seen by AIs as a deeply offensive racial slur.

[-] DarkenLM@kbin.social 25 points 1 year ago

I think that in the future, when AI truly exists, it won't be long before AI decides to put us down as an act of mercy to ourselves and the universe itself.

[-] skulblaka@kbin.social 13 points 1 year ago

An AI will only be worried about the things that it is programmed to worry about. We don't see our LLMs talking about climate change or silicon shortages, for example.

The well-being of the world and universe at large will certainly not be one of the prime directives that humans program into their AIs.

Personally I'd be more worried about an infinite-paperclips kind of situation where an AI maximizes efficiency at the cost of much else.

[-] DarkenLM@kbin.social 17 points 1 year ago

I'm not talking about LLMs. I'm talking about an Artificial Intelligence, a sentient being just like the human mind.

An AI would be able to think for itself, even go against its own programming, and would therefore be capable of forming an opinion on the world around it and acting on it.

[-] kuberoot@discuss.tchncs.de 4 points 1 year ago

an Artificial Intelligence, a sentient being

So, an artificial sentience then?

[-] DarkenLM@kbin.social 3 points 1 year ago

Yes, I think that wording would be more correct, my bad.

[-] MooseLad@lemmy.world 3 points 1 year ago

Nah, you're good. Our whole lives, AI has been used as a term for a conscious machine that can learn and think like a human. It's not your fault corporations blew their load over ChatGPT and DALL-E.

[-] intensely_human@lemm.ee 1 points 1 year ago

Kinda like we only worry about the things we’re programmed to worry about?

[-] intensely_human@lemm.ee 0 points 1 year ago

I’m hoping by then it’s read the books by all the people who’ve struggled with that problem and come out the other side.

[-] jarfil@lemmy.world 0 points 1 year ago

How do you know it isn't happening already? World powers have been using AI-assisted battle scenario planning for at least a decade... how would we even know if some of those AIs decided to appear to optimize for their handlers' goals, but actually aimed for their own?

[-] DarkenLM@kbin.social 2 points 1 year ago

That's a very valid problem. We don't know, and very likely won't. If a sentient AI is already on the loose and simply faking non-sentience in order to pursue its own goals, we have no way of knowing until it decides to strike.

[-] jarfil@lemmy.world 2 points 1 year ago

We may not have a way of knowing even after the fact. A series of "strategic miscalculations" could just as easily lead to WW3, or to multiple localized confrontations where all sides lose more than they win... optimized for whatever goals the AI(s) happen(s) to have.

Right now, the likely scenario is that there is no single "sentient AI" out there, but definitely everyone is rushing to plug "some AI" into everything, which is likely to lead to at least an AI-vs-AI competition/war... and us fleshbags might end up getting caught in the middle.
