We have to stop ignoring AI’s hallucination problem
(www.theverge.com)
We do not really know how LLMs operate. As with our own minds, we understand some of the primitives, but we have no idea how certain phenomena emerge from them. Your assertion would be like claiming we understand consciousness because we know the structure of a neuron.