We have to stop ignoring AI’s hallucination problem
(www.theverge.com)
Exactly. The big problem with LLMs is that they're so good at mimicking understanding that people forget they don't actually understand anything beyond language itself.
The thing they excel at, and should be used for, is exactly what you say - a natural language interface between humans and software.
Like in your example, an LLM doesn't know what a cat is, but it knows which words describe a cat based on its training data - and for a search engine, that's all you need.
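To make that concrete, here's a minimal sketch of the idea using an off-the-shelf sentence-embedding model (the all-MiniLM-L6-v2 model via the sentence-transformers library is just an illustrative choice, and the example documents are made up). The model has no concept of a cat - it only maps text to vectors - yet ranking documents by similarity to a query in that vector space is all a basic semantic search step needs.

```python
# Minimal semantic-search sketch: the model doesn't "know" what a cat is,
# it only maps text into an embedding space, but similarity in that space
# is enough for retrieval. Model and documents are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "A small furry animal that purrs and chases mice.",
    "A guide to changing the oil in your car.",
    "How to bake sourdough bread at home.",
]

query = "cat"

# Encode the query and documents into the same embedding space.
doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity to the query.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
for doc, score in sorted(zip(documents, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```

The cat description should come out on top even though the word "cat" never appears in it - which is the point: matching language about cats is all the search step requires, no actual understanding of cats involved.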