AI Search Engines Are Confidently Wrong Too Often.
(www.seroundtable.com)
This does seem to be exactly the problem. It's solvable, but I haven't seen any engine that does it. They should be able to calculate a confidence value based on the number of corroborating sources, the quality ranking of those sources, and how much interpolation of data is being done versus straightforward regurgitation of facts.
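A back-of-envelope sketch of the kind of confidence score described above (the function name, signals, and formula are all illustrative assumptions, not how any real engine works):

```python
def answer_confidence(n_sources, source_quality, interpolation_ratio):
    """Combine three simple signals into a 0-1 confidence score.

    n_sources: number of independent sources that corroborate the answer
    source_quality: average quality ranking of those sources, 0-1
    interpolation_ratio: fraction of the answer that is inferred
        rather than stated directly in a source, 0-1
    All weights and functional forms here are guesses, not tuned values.
    """
    # Diminishing returns on corroboration: 1 source -> 0.5, 4 sources -> 0.8
    corroboration = n_sources / (n_sources + 1)
    # Penalize answers that are mostly inference over direct quotation
    directness = 1.0 - interpolation_ratio
    return corroboration * source_quality * directness

# Four decent sources with little inference scores high:
high = answer_confidence(n_sources=4, source_quality=0.9, interpolation_ratio=0.1)
# One mediocre source plus heavy interpolation scores low:
low = answer_confidence(n_sources=1, source_quality=0.5, interpolation_ratio=0.7)
assert high > 0.6 > 0.1 > low
```

A score like this could gate the response: above some threshold, answer normally; below it, surface the sources and hedge.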
I've been saying this for a while. They need to train it to be able to say "I don't know." They need to add questions to the dataset that don't contain enough information to solve, so the model can learn to distinguish stating facts from hallucinating.
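The augmentation idea above could look something like this (a minimal sketch; the example data and the `make_unanswerable` helper are hypothetical, not from any real training pipeline):

```python
# Pair each answerable question with an unanswerable variant whose
# target response is an explicit refusal, so the model sees examples
# where "I don't know" is the correct answer.
answerable = [
    {"question": "What year did Apollo 11 land on the Moon?",
     "context": "Apollo 11 landed on the Moon in 1969.",
     "answer": "1969"},
]

def make_unanswerable(example):
    # Strip the supporting context so the question cannot be answered
    # from the evidence, and label the target as an admission of uncertainty.
    return {"question": example["question"],
            "context": "",  # no evidence available
            "answer": "I don't know"}

training_set = answerable + [make_unanswerable(ex) for ex in answerable]
# Half the set now rewards refusing when the evidence is missing.
```

This mirrors what datasets like SQuAD 2.0 did for extractive QA by adding unanswerable questions; whether it transfers to open-ended generation is the open question.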
I haven’t seen any evidence that this is solvable. You can feed in more training data, but that doesn’t mean generative AI technology is capable of using that in the way you describe.