Neural networks are about as much a model of a brain as a stick man is a model of human anatomy.
I don't think anybody knows how we actually learn. I'm not a neuroscientist (I'm a computer scientist specialised in AI), but as far as I know the mechanism of learning in the brain isn't that well understood.
AI hype people will say the brain is "like a neural network", but I really doubt that. There is no explicit loss function in the brain, and certainly no mechanism for it to perform gradient descent.
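To be concrete about the mechanism I mean: in machine learning, "training" is an explicit loss function plus iterative gradient descent on it. Here's a minimal toy sketch (my own illustration, not any real training loop; the numbers and names are made up for the example):

```python
# Toy example: fit a single weight w so that w * x approximates y,
# by explicitly minimizing a loss with gradient descent.

def loss(w, x, y):
    # Squared error: an explicit, global objective being minimized.
    return (w * x - y) ** 2

def grad(w, x, y):
    # Analytic derivative of the loss with respect to w.
    return 2 * (w * x - y) * x

w = 0.0            # initial weight
lr = 0.01          # learning rate (step size)
x, y = 3.0, 6.0    # one training example; the "true" w would be 2.0

for step in range(100):
    w -= lr * grad(w, x, y)   # the gradient descent update

print(w)  # converges toward 2.0
```

That explicit objective and that update rule are exactly what I see no evidence of in biology.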
I'm not saying we can't ever build a machine that thinks; you can do remarkable things with math. I just don't think our brains have gradient descent baked in, and I don't think neural networks are much like brains at all.
The stochastic parrot framing is a useful vehicle for criticism, and I think there is some truth to it. But I also think LLMs display some seriously impressive emergent behaviour. Even so, they are still really far from AGI.