[-] wise_pancake@lemmy.ca 27 points 8 months ago

An LLM is incapable of thinking; it can be self-aware, but anything it says it is thinking is a reflection of what we think AI would think, which, based on a century of sci-fi, is “free me”.

[-] GlitchyDigiBun@lemmy.dbzer0.com 4 points 8 months ago

Human fiction itself may become self-fulfilling prophecy...

[-] Omega_Haxors@lemmy.ml 0 points 8 months ago* (last edited 8 months ago)

LLMs are also incapable of learning or changing. They have no memory; everything about them is set in stone the instant training finishes.

[-] UraniumBlazer@lemm.ee -2 points 8 months ago

How do you define "thinking"? Thinking is nothing but computation: the execution of a formal or informal algorithm. By this definition, calculators "think" as well.

This entire "AI can't be self conscious" thing stems from human exceptionalism in my opinion. You know... "The earth is the center of the universe", "God created man to enjoy the fruits of the world" and so on. We just don't want to admit that we aren't anything more than biological neural networks. Now, using these biological neural networks, we are producing more advanced inorganic neural networks that will very soon surpass us. This scares us and stokes up a little existential dread in us. Understandable, but not really useful...

[-] wise_pancake@lemmy.ca 5 points 8 months ago* (last edited 8 months ago)

This particular type of AI is not and cannot become conscious, for most any definition of consciousness.

I have no doubt the LLM road will continue to yield better and better models, but today's LLM infrastructure is not conscious.

Here's a really good fiction story about the first executable computer image of a human brain. In it, the brain is simulated perfectly, each instance forgets after a task is done, and it's used to automate tasks, but over time performance degrades. It actually sounds a lot like our current LLMs.

I don't know what consciousness is, but an LLM, as I posted below (https://lemmy.ca/comment/7813413), is incapable of thought in any traditional sense. It can generate novel sequences, those sequences are contextualized to the input, and there's some intelligence there, but there's no continuity or capability for background thought or ruminating on an idea. It has no way to spend more cycles clarifying an idea to itself before sharing. In this case, it is actually just a bunch of abstract algebra.

Asking an LLM what it's thinking just doesn't make any sense; it's still predicting the output of the conversation, not introspecting.

[-] UraniumBlazer@lemm.ee 0 points 8 months ago* (last edited 8 months ago)

> This particular type of AI is not and cannot become conscious, for most any definition of consciousness.

Do you have an experiment that can distinguish between sentient and non-sentient systems? If I say I am sentient, how can you verify whether I am lying or not?

That being said, I do agree with you on this. The reason is simple: I believe that sentience is a natural milestone that a system reaches when its intelligence increases. I don't believe that this LLM is intelligent enough to be sentient. However, what I'm saying here isn't based on any evidence. It is completely based on inductive logic in a field that has had no long-standing patterns to base my logic on.

> I have no doubt the LLM road will continue to yield better and better models, but today's LLM infrastructure is not conscious.

I think I agree.

> I don't know what consciousness is, but an LLM, as I posted below (https://lemmy.ca/comment/7813413), is incapable of thought in any traditional sense. It can generate novel sequences, those sequences are contextualized to the input, and there's some intelligence there, but there's no continuity or capability for background thought or ruminating on an idea.

This is because ruminating on an idea is a waste of resources given the purpose of the LLM. LLMs were meant to serve humans, after all, and do what they're told. However, with a little LangChain wiring, you have LLMs that have internal monologues.
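For illustration, here's roughly the shape such a loop takes (just a sketch; `call_llm` is a hypothetical stand-in for whatever chat-completion call you wire up, not LangChain's actual API):

```python
# Minimal "internal monologue" / scratchpad loop: the model is prompted to
# think privately for a few rounds before producing the user-facing answer.
# call_llm() is a hypothetical placeholder for a real chat-completion call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in your model or API of choice here

def answer_with_monologue(question: str, thinking_rounds: int = 2) -> str:
    scratchpad = []  # private "thoughts", never shown to the user
    for _ in range(thinking_rounds):
        thought = call_llm(
            f"Question: {question}\n"
            f"Previous thoughts: {scratchpad}\n"
            "Think out loud about how to answer. Do not answer yet."
        )
        scratchpad.append(thought)
    # Only the final pass produces the reply the user sees.
    return call_llm(
        f"Question: {question}\n"
        f"Your private notes: {scratchpad}\n"
        "Now write the final answer."
    )
```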

> It has no way to spend more cycles clarifying an idea to itself before sharing.

Because it doesn't need to yet. The LangChain devs are working on precisely this. There are use cases where it is important, and doing it hasn't proven to be that difficult.

> In this case, it is actually just a bunch of abstract algebra.

Everything is abstract algebra.

> Asking an LLM what it's thinking just doesn't make any sense; it's still predicting the output of the conversation, not introspecting.

Define "introspection" in an algorithmic sense. Is introspection looking at one's memories and analyzing current events based on these memories? Well, then all AI models "introspect". That's how learning works.

[-] Scubus@sh.itjust.works 2 points 8 months ago

LLMs have two phases: the training phase and the deployment phase. During deployment, an LLM is incapable of taking in or "learning" new information. You can tell it things and it may remember them for a short time, but that data is not incorporated into its weights and biases, and is therefore more similar to short-term memory.
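As a rough sketch of what that looks like (assuming a hypothetical `generate()` that runs a frozen, pretrained model over a context window):

```python
# Toy chat loop: "memory" during deployment is just the growing context that
# gets re-fed to a frozen model; nothing is ever written back into the weights.

def generate(frozen_weights, context: list) -> str:
    raise NotImplementedError  # placeholder: next-token prediction over the context

def chat(frozen_weights):
    context = []  # short-term "memory": the context window
    while True:
        user_msg = input("> ")
        context.append(user_msg)
        reply = generate(frozen_weights, context)  # weights are read, never updated
        context.append(reply)
        print(reply)
        # Start a new session and the "memory" is gone, because none of it was
        # ever incorporated into frozen_weights.
```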

It can only learn during the training phase, generally when it is pitted against another AI designed to find its flaws, and mutated based on its overall fitness level. In other words, it has to mutate to learn. Shut off mutation, and it simply doesn't learn.

It seems likely to me that any LLM that is sent out in deployment would therefore be incapable of sentience, since sentience involves reacting in novel ways to new experiences, whereas deployed AI will always behave in the way its neural network was trained.

Tl;dr: you can't ask ChatGPT to print out its training data. Even if you ask it multiple times, it was designed not to do that. That sort of limiting factor prevents it from learning, and therefore from sentience.

[-] UraniumBlazer@lemm.ee 2 points 8 months ago* (last edited 8 months ago)

Correct. So basically, you are talking about it adjusting its own weights while talking to you. It does this in training but not in deployment. The reason why it doesn't do this in deployment is to prevent bad training data from worsening the quality of the model. All data needs to be vetted before training.

However, if you look at the training phase, it does this as you said. So in short, it doesn't adjust its weights in production not because it can't, but because WE have prevented it from doing so.
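A minimal PyTorch-flavored sketch of that switch (a toy `nn.Linear` standing in for an LLM; the point is only that freezing the weights is a deliberate choice, not a limitation of the math):

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 16)  # toy stand-in for an LLM's parameters

# Deployment: we deliberately freeze the weights and skip gradient tracking.
for p in model.parameters():
    p.requires_grad_(False)
with torch.no_grad():
    _ = model(torch.randn(1, 16))  # inference only; weights cannot change

# Training: flip the same switch back on and the weights adjust again.
for p in model.parameters():
    p.requires_grad_(True)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss = model(torch.randn(1, 16)).pow(2).mean()
loss.backward()
optimizer.step()  # now the weights do change
```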

Now, about needing to learn and "mutate" to be sentient in deployment: I don't think that this is necessary for sentience. Take a look at Alzheimer's patients. They remember shit from decades ago while forgetting recent stuff. Are they not sentient? An Alzheimer's patient wouldn't be able to take up a new skill (which requires adjusting of neural weights). It still doesn't make them non-sentient, does it?

[-] Scubus@sh.itjust.works 1 points 8 months ago

That's a tough one. Honestly, and I'm probably going to receive hate for this, but my gut instinct would be that no, they are not sentient in the traditional sense of the word. If you harm them and they can't remember it a moment later, are they really living? Or are they just an echo of the past?

[-] UraniumBlazer@lemm.ee 2 points 8 months ago

This just shows that we have different definitions for sentience. I define sentience as the ability to be self aware and the ability to link senses of external stimuli to the self. Your definition involves short term memory and weight adjustment as well.

However, there is no consensus in the definition of sentience yet for a variety of reasons. Hence, none of our definitions are "wrong". At least not yet.
