AI models routinely lie when honesty conflicts with their goals
(www.theregister.com)
To lie requires intent to deceive. LLMs do not have intents; they are statistical language algorithms.
It’s interesting that they call it a lie when it can’t even think, yet when a person is caught lying, the media talks about “untruths” or “inconsistencies”.
Well, LLMs can't drag corporate media through long, expensive, public legal battles over slander/libel and defamation.
Yet.
If capitalist media could profit from humanizing humans, it would.
Does it matter to the humans interacting with the LLM whether incorrect information is the result of a bug or an intentional lie? (Keep in mind that the majority of these people are non-technical and don't understand that All Software Has Bugs.)
I'm not convinced some people aren't just statistical language algorithms. And I don't just mean online; I mean that seems to be how some people's brains work.
How else are they going to achieve their goals? /s