Reasoning failures highlighted by Apple research on LLMs
(appleinsider.com)
I still fail to see how people expect LLMs to reason. It's like expecting a slice of pizza to reason. That's just not what it does.
Although Porsche managed to make a car with the engine in the most idiotic place win literally everything on Earth, so I guess I'm leaving a little possibility that the slice of pizza will outreason GPT-4.
LLMs keep getting better at imitating humans, so to those who don't know how the technology works, it'll seem as if they think for themselves.
The article draws a rather good analogy between people who think LLMs reason and people who believe in mentalists.
That's a great article.