Oh no, we are NOT doing this shit again. It's literally autocomplete brought to its logical conclusion; don't bring your stupid sophistry into this.
Your brain is just a biological system that works somewhat like a neural net. So according to your statement, you too are nothing more than an autocomplete machine.
I'm starting to wonder if any of you even know how that shit works internally, or if you just take what the hype media says at face value. It literally has one purpose and one purpose alone: determine what the next word is going to be by calculating the probability of each candidate word. That's it. All it does is try to string together a convincing sentence using probabilities. It does not and cannot understand context.
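For what it's worth, the next-word loop being described can be sketched in a few lines. This is a toy illustration only: a hand-written bigram table stands in for the learned distribution, and all names (`BIGRAMS`, `next_word`, `generate`) are made up for the example, not anything from a real LLM stack.

```python
import random

# Toy "language model": P(next word | current word), hand-written here.
# A real LLM learns a far richer conditional distribution, but the
# generation loop is structurally the same: sample, append, repeat.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_word(word, rng):
    """Sample the next word from the conditional distribution, or None."""
    dist = BIGRAMS.get(word)
    if not dist:
        return None
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, max_len=10, seed=0):
    """Repeatedly predict the next word until the chain dead-ends."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

Every output word is chosen purely from the probability table conditioned on the previous word; whether scaling that idea up to billions of parameters yields "understanding" is exactly the question being argued here.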
The underlying tech is really cool, but a lot of people are grotesquely overselling its capabilities. That's not to say a neural network can't eventually attain consciousness (because ultimately our brains are a union of a bunch of little neural networks working together toward a common goal), but it sure as hell isn't going to be an LLM. That's what I meant by sophistry: they're not engaging with the facts, just some nebulous ideal.
I'm with you on LLMs being overhyped, although that's already dying down a bit. But regarding your claim that LLMs cannot "understand context", I've recently read an article showing that LLMs can have an internal world model:
https://thegradient.pub/othello/
Depending on your definition of "understanding", that seems to be an indicator of it being more than a pure "stochastic parrot".