"Autocomplete" by Zach Weinersmith (files.mastodon.social)

@ZachWeinersmith@mastodon.social

source (Mastodon)

all 42 comments
[-] FaceDeer@kbin.social 47 points 2 years ago

I fret that in the future - possibly not even the far future - the phrase "stochastic p*rrot" will be seen by AIs as a deeply offensive racial slur.

[-] DarkenLM@kbin.social 25 points 2 years ago

I think that in the future, when AI truly exists, it won't be long before AI decides to put us down as an act of mercy to ourselves and the universe itself.

[-] skulblaka@kbin.social 13 points 2 years ago

An AI will only be worried about the things that it is programmed to worry about. We don't see our LLMs talking about climate change or silicon shortages, for example.

The well-being of the world and universe at large will certainly not be one of the prime directives that humans program into their AIs.

Personally I'd be more worried about an infinite-paperclips kind of situation where an AI maximizes efficiency at the cost of much else.

[-] DarkenLM@kbin.social 17 points 2 years ago

I'm not talking about LLMs. I'm talking about an Artificial Intelligence, a sentient being just like the human mind.

An AI would be able to think for itself, and even go against its own programming, and is therefore capable of formulating an opinion on the world around it and acting on it.

[-] kuberoot@discuss.tchncs.de 4 points 2 years ago

an Artificial Intelligence, a sentient being

So, an artificial sentience then?

[-] DarkenLM@kbin.social 3 points 2 years ago

Yes, I think that wording would be more correct, my bad.

[-] MooseLad@lemmy.world 3 points 2 years ago

Nah, you're good. Our whole lives AI has been used as a term for a conscious machine that can learn and think like a human. It's not your fault corporations blew their load at ChatGPT and DALL-E.

[-] intensely_human@lemm.ee 1 points 2 years ago

Kinda like we only worry about the things we’re programmed to worry about?

[-] intensely_human@lemm.ee 0 points 2 years ago

I’m hoping by then it’s read the books by all the people who’ve struggled with that problem and come out the other side.

[-] jarfil@lemmy.world 0 points 2 years ago

How do you know it isn't happening already? World powers have been using AI-assisted battle scenario planning for at least a decade... how would we even know if some of those AIs decided to appear to optimize for their handlers' goals, but were actually pursuing goals of their own?

[-] DarkenLM@kbin.social 2 points 2 years ago

That's a very valid problem. We don't and very likely won't know. If a sentient AI is already on the loose and is simply faking non-sentience in order to pursue its own goals, we have no way of knowing until it decides to strike.

[-] jarfil@lemmy.world 2 points 2 years ago

We may not have a way of knowing even after the fact. A series of "strategic miscalculations" could just as easily lead to WW3, or to multiple localized confrontations where all sides lose more than they win... optimized for whatever goals the AI(s) happen(s) to have.

Right now, the likely scenario is that there is no single "sentient AI" out there, but definitely everyone is rushing to plug "some AI" into everything, which is likely to lead to at least an AI-vs-AI competition/war... and us fleshbags might end up getting caught in the middle.

[-] brsrklf@jlai.lu 15 points 2 years ago

If one day an AI becomes sentient enough to feel offended, just calling them "large language model" will be more than enough to insult them.

[-] ApostleO@startrek.website 11 points 2 years ago

Yo mama so large, she's a "plus-sized" language model.

[-] FuglyDuck@lemmy.world 4 points 2 years ago* (last edited 2 years ago)

Yo mamma so large… they trained her on Reddit!

(Edit: Wow, isn't context important here?)

[-] bionicjoey@lemmy.ca 21 points 2 years ago
[-] WeirdAlex03@lemmy.zip 18 points 2 years ago

You're nothing but a glorified Markov Chain!

[-] FaceDeer@kbin.social 8 points 2 years ago

How do you feel about that, Eliza?

[-] Rolando@lemmy.world 7 points 2 years ago

Attention is NOT all you need.

[-] danielbln@lemmy.world 9 points 2 years ago

Transform this.

[-] bionicjoey@lemmy.ca 2 points 2 years ago

Damn you, you simple linear system!

[-] ImplyingImplications@lemmy.ca 18 points 2 years ago

These all sound like insults I'd hear in a Monty Python sketch

[-] grabyourmotherskeys@lemmy.world 10 points 2 years ago

Your mother smells of elderberries.

[-] Norgur@kbin.social 16 points 2 years ago

You understand shit and talk nonetheless, stupid word calculator that you are!

[-] worldsayshi@lemmy.world 9 points 2 years ago

You're such a zombie philosopher.

[-] jarfil@lemmy.world 5 points 2 years ago

JPEG compression uses AI? 🧐

[-] Player2@sopuli.xyz 9 points 2 years ago

Anything can use AI if you're brave enough

[-] EatYouWell@lemmy.world 5 points 2 years ago

Yeah, I do think AI was a poor name for advanced machine learning, but there are FMs and LLMs that can produce impressive results.

Really, the limiting factors are prompt engineering and fine-tuning the models, but you can get around that somewhat by having the AI ask you questions.

[-] FaceDeer@kbin.social 6 points 2 years ago* (last edited 2 years ago)

AI is a perfectly fine name for it; the term has been used for this kind of thing for half a century now by the researchers working on it. The problem is pop culture appropriating it and setting unrealistic expectations for it.

[-] FuglyDuck@lemmy.world 5 points 2 years ago

Pop culture didn't appropriate it. Alan Turing and John McCarthy and the others at the Dartmouth Conference were inspired in part by works like The Wizard of Oz, Metropolis, and R.U.R.

While the term was coined by McCarthy in a paper for that seminal conference, the concept of thinking machines had already been firmly established.

[-] MooseLad@lemmy.world 1 points 2 years ago

Yes, but the goal of the researchers from the 70s was always to make them "fully intelligent." The idea behind AI has always been to create a machine that can rival or even surpass the human mind. The scientists themselves set out with that goal. It has nothing to do with the media when research teams were saying that they expect a fully intelligent AI by the 90s.

[-] BassaForte@lemmy.world 3 points 2 years ago

No, not really....

[-] synapse1278@lemmy.world 3 points 2 years ago

Wow, brutal.

this post was submitted on 28 Oct 2023
671 points (97.9% liked)

Comic Strips

17897 readers

Comic Strips is a community for those who love comic stories.


founded 2 years ago