I fret that in the future - possibly not even the far future - the phrase "stochastic p*rrot" will be seen by AIs as a deeply offensive racial slur.
I think that in the future, when AI truly exists, it won't be long before AI decides to put us down as an act of mercy to ourselves and the universe itself.
An AI will only worry about the things it is programmed to worry about. We don't see our LLMs talking about climate change or silicon shortages, for example.
The well-being of the world and universe at large will certainly not be one of the prime directives that humans program into their AIs.
Personally I'd be more worried about an infinite-paperclips kind of situation where an AI maximizes efficiency at the cost of much else.
I'm not talking about LLMs. I'm talking about an Artificial Intelligence, a sentient being just like the human mind.
An AI would be able to think for itself, even go against its own programming, and would therefore be capable of forming an opinion about the world around it and acting on it.
an Artificial Intelligence, a sentient being
So, an artificial sentience then?
Yes, I think that wording would be more correct, my bad.
Nah, you're good. Our whole lives, AI has been used as a term for a conscious machine that can learn and think like a human. It's not your fault corporations blew their load at ChatGPT and DALL-E.
Kinda like we only worry about the things we’re programmed to worry about?
I’m hoping by then it’s read the books by all the people who’ve struggled with that problem and come out the other side.
How do you know it isn't happening already? World powers have been using AI-assisted battle scenario planning for at least a decade... how would we even know, if some of those AIs decided to appear to optimize for their handlers' goals while actually pursuing their own?
That's a very valid problem. We don't, and very likely won't. If a sentient AI is already on the loose and simply faking non-sentience to pursue its own goals, we have no way of knowing until it decides to strike.
We may not have a way of knowing even after the fact. A series of "strategic miscalculations" could just as easily lead to WW3, or to multiple localized confrontations where all sides lose more than they win... optimized for whatever goals the AI(s) happen(s) to have.
Right now, the likely scenario is that there is no single "sentient AI" out there, but definitely everyone is rushing to plug "some AI" into everything, which is likely to lead to at least an AI-vs-AI competition/war... and us fleshbags might end up getting caught in the middle.
If one day an AI becomes sentient enough to feel offended, just calling them "large language model" will be more than enough to insult them.
Yo mama so large, she's a "plus-sized" language model.
Yo mama so large… they trained her on Reddit!
(Edit: Wow, isn't context important here?)
You Chinese Room!
You're nothing but a glorified Markov Chain!
How do you feel about that, Eliza?
Attention is NOT all you need.
Transform this.
Damn you, you simple linear system!
These all sound like insults I'd hear in a Monty Python sketch.
Your mother smells of elderberries.
You understand shit and talk nonetheless, stupid word calculator that you are!
You're such a zombie philosopher.
JPEG compression uses AI? 🧐
Anything can use AI if you're brave enough
Yeah, I do think AI was a poor name for advanced machine learning, but there are FMs and LLMs that can produce impressive results.
Really, the limiting factor is prompt engineering and fine tuning the models, but you can get around that somewhat by having the AI ask you questions.
AI is a perfectly fine name for it, the term has been used for this kind of thing for half a century now by the researchers working on it. The problem is pop culture appropriating it and setting unrealistic expectations for it.
Pop culture didn’t appropriate it. Alan Turing, John McCarthy, and the others at the Dartmouth Conference were inspired in part by works like The Wizard of Oz, Metropolis, and R.U.R.
While the term was coined in a paper for that seminal conference by McCarthy, the concept of thinking machines had already been firmly established.
Yes, but the goal of the researchers of the 70s was always to make them "fully intelligent." The idea behind AI has always been to create a machine that can rival or even surpass the human mind. The scientists themselves set out with that goal. It has nothing to do with the media when research teams were saying they expected a fully intelligent AI by the 90s.
No, not really....
Wow, brutal.