Major shifts at OpenAI spark skepticism about impending AGI timelines
(arstechnica.com)
The LLM is just trying to produce output text that resembles the patterns it saw in the training set. There's no "reasoning" involved.
You've been doing that too, from the day you were born.
Besides, aren't humans thinking in words too?
Why is it impossible to build a text-based AGI model? Maybe there can be reasoning in between word predictions. Maybe reasoning is just a fancy term for statistics? Maybe floating-point rounding errors are sufficient for making it more than a mere token prediction model.
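To make the "token prediction" framing concrete, here's a minimal illustrative sketch (not from the thread, and vastly simpler than an LLM): a toy bigram model that predicts the next word purely from co-occurrence counts in a tiny corpus. Whether stacking billions of such statistical associations amounts to "reasoning" is exactly the question being debated above.

```python
from collections import Counter, defaultdict

# Toy "training set" whose patterns the model will echo.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: for each word, how often each successor follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` -- pure statistics, no explicit reasoning."""
    successors = bigrams[word]
    return successors.most_common(1)[0][0] if successors else None

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice; "mat" and "fish" once each)
```

A real LLM replaces these raw counts with a learned neural network conditioned on a long context window, but the output step is the same in spirit: pick the next token from a probability distribution over successors.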
This poster asked some questions in good faith; I don't understand the downvotes. It's a legitimate contribution to the conversation, and downvoting it stifles other contributions.
Reddit mentality seeping through...