Major shifts at OpenAI spark skepticism about impending AGI timelines
(arstechnica.com)
LLMs will not give us AGI. This is obvious to anyone who knows how they work.
Maybe they can. If you find a way to convert everything to text by hooking in other models, an LLM might be able to reason about anything you throw at it. Who even gets to define how AGI should be implemented?
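Something like this, roughly; a toy sketch where hypothetical caption_image/transcribe_audio helpers (placeholders, not real libraries) flatten other modalities into text before a single LLM prompt ever sees them:

    # Hypothetical sketch: route every modality through a model that emits text,
    # then let one LLM reason over the combined transcript.
    # caption_image() and transcribe_audio() are stand-ins, not real APIs.

    def caption_image(image_path: str) -> str:
        # Stand-in for an image-captioning / vision-language model.
        return f"[description of {image_path}]"

    def transcribe_audio(audio_path: str) -> str:
        # Stand-in for a speech-to-text model.
        return f"[transcript of {audio_path}]"

    def build_prompt(question: str, image_path: str, audio_path: str) -> str:
        # Everything is reduced to text before the LLM is involved.
        return (
            f"Image description: {caption_image(image_path)}\n"
            f"Audio transcript: {transcribe_audio(audio_path)}\n"
            f"Question: {question}\n"
            "Answer:"
        )

    print(build_prompt("What is happening in the scene?", "scene.jpg", "scene.wav"))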
The LLM is just trying to produce output text that resembles the patterns it saw in the training set. There's no "reasoning" involved.
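Concretely, generation is just repeated next-token prediction: the model scores which token is likely to come next and you sample from that distribution. A rough sketch using Hugging Face's GPT-2 (model and sampling choices here are illustrative only, not how any particular product works):

    # Sketch of autoregressive next-token prediction with GPT-2.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The future of AGI is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits[0, -1]      # scores for the next token
            probs = torch.softmax(logits, dim=-1)  # turn scores into probabilities
            next_id = torch.multinomial(probs, 1)  # sample one token
            ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

    print(tokenizer.decode(ids[0]))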