[-] terrific@lemmy.ml 1 points 2 weeks ago

I'm not saying that we can't ever build a machine that can think. You can do some remarkable things with math. I personally don't think our brains have baked in gradient descent, and I don't think neural networks are a lot like brains at all.

The stochastic parrot is a useful vehicle for criticism and I think there is some truth to it. But I also think LLMs display some super impressive emergent features, even though they're still really far from AGI.

[-] terrific@lemmy.ml 1 points 1 month ago

Neural networks are about as much a model of a brain as a stick man is a model of human anatomy.

I don't think anybody knows how we actually, really learn. I'm not a neuroscientist (I'm a computer scientist specialised in AI), but I don't think the mechanism of learning is that well understood.

AI hype-people will say that it's "like a neural network", but I really doubt that. There is no loss function in reality, and certainly no way for the brain to perform gradient descent.

[-] terrific@lemmy.ml 2 points 1 month ago

I know it's part of the AI jargon, but using the word "learning" to describe the slow adaptation of massive arrays of single-precision numbers to some loss function is a very generous interpretation of that word, IMO.
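To be concrete about what that "adaptation" is: a toy sketch of gradient descent in plain Python, with a single weight instead of billions, a squared-error loss, and made-up data (everything here is illustrative, not anyone's actual training setup):

```python
# Toy gradient descent: fit y = w * x by nudging one float against a loss.
# This loop is, at its core, all that "learning" means in the ML sense.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hypothetical data, true relation y = 2x
w = 0.0     # the "massive array of numbers" reduced to a single weight
lr = 0.05   # learning rate

for step in range(200):
    # mean squared error loss; gradient computed analytically:
    # d/dw of mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step against the gradient

print(round(w, 3))  # converges to ~2.0
```

Whether repeatedly subtracting a gradient deserves the word "learning" is exactly the semantic point being argued above.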

