Apple study exposes deep cracks in LLMs’ “reasoning” capabilities
(arstechnica.com)
People working with these technologies have known this for quite a while. It's nice of Apple's researchers to formalize it, but nobody is really surprised, least of all the companies funnelling traincars of money into the LLM furnace.
If they know about this, then they aren't thinking about the security implications.
Security implications?