Trusting your own judgement on 'AI' is a huge risk
(www.baldurbjarnason.com)
And how do you know LLMs can't tell that they are involved in a conversation?
Unless you think there is something non-computational in the human brain, you must accept that computers are, in theory, capable of thinking, given the right software and sufficiently powerful hardware.
Given that truth (which I think you can only avoid through religion or quantum quackery), you can't just say "it's only maths; it can't be thinking" because we know that maths can think.
Do LLMs "think"? The definition of "think" is woolly enough, and we understand LLMs little enough, that it's quite an assertion to say they definitely don't.
It has no memory, for one. What makes you think that it does know it's in a conversation?
It has very short-term memory in the form of its token context, especially with something like Meta's Coconut.
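To make concrete what "memory as token context" means, here's a minimal sketch, not any real API: the model call itself is stateless, and on every turn the whole conversation so far is re-encoded and truncated to a fixed window. `fake_generate`, `MAX_CONTEXT_TOKENS`, and the word-level "tokens" are all illustrative assumptions.

```python
# Illustrative sketch only: fake_generate stands in for a stateless
# LLM call, and the window size is an arbitrary assumption.
MAX_CONTEXT_TOKENS = 4096


def fake_generate(tokens: list[str]) -> str:
    """Stand-in for a stateless model call: tokens in, text out."""
    return f"(reply conditioned on {len(tokens)} context tokens)"


def chat_turn(history: list[str], user_message: str) -> list[str]:
    history = history + [f"User: {user_message}"]
    # Flatten the whole conversation into one token sequence...
    tokens = " ".join(history).split()
    # ...and keep only the most recent window. Older turns simply
    # fall out of context, which is why it's "short-term" memory.
    tokens = tokens[-MAX_CONTEXT_TOKENS:]
    history.append(f"Assistant: {fake_generate(tokens)}")
    return history


history: list[str] = []
history = chat_turn(history, "Do you know we're in a conversation?")
history = chat_turn(history, "What did I just ask you?")
print("\n".join(history))
```

The point of the sketch: whatever "awareness of the conversation" the model has exists only within that re-sent window of tokens, not in any persistent state between calls.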
I don't, really. Not yet. But I also don't think, as you seem to, that it is fundamentally impossible for LLMs to think. Nor do I think the definition of the word "think" is so narrow that it requires that level of self-awareness. Do you think a mouse is really aware it is a mouse? What about a spider?