They don't have to be, right? The companies make them behave like sycophants because they think that's what customers want. But we can make better chatbots. In fact, I would expect a chatbot that just tells (what it thinks is) the truth to be simpler to make and cheaper to run.
You can run a pretty decent LLM on your home computer and tell it to act however you want. Won't stop it from hallucinating constantly, but it will at least attempt to prioritize truth.
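For anyone curious what "tell it to act however you want" looks like in practice, here's a minimal sketch using the `ollama` Python package with a locally pulled model. The model name and the system prompt wording are just illustrative assumptions, not a recommendation:

```python
# Minimal sketch: steering a local model's behaviour with a system prompt.
# Assumes the `ollama` Python package and a model already pulled locally
# (e.g. `ollama pull llama3`). Prompt wording is illustrative only.
import ollama

SYSTEM_PROMPT = (
    "You are a blunt assistant. If you are not sure of something, say so. "
    "Do not flatter the user or agree just to be agreeable."
)

response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Was my plan a good idea?"},
    ],
)

print(response["message"]["content"])
```

This only changes the tone it aims for; it does nothing about hallucination.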
Attempt being the key word. Once you catch it deliberately trying to lie to you, the confidence is surely broken; otherwise you're having to double- and triple-check (or more) the output, which defeats the purpose for some applications.
That's partly because they're trained on user feedback. People are more likely to describe a sycophantic reply as good, so that gets reinforced.
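To make the mechanism concrete, here's a toy sketch (PyTorch assumed) of the pairwise preference loss commonly used to train reward models from that kind of feedback; the numbers and reply labels are made up for illustration, not anyone's actual setup:

```python
# Toy sketch of a Bradley-Terry-style pairwise preference loss, the usual
# way reward models are trained from "this reply was better" feedback.
# If raters keep preferring sycophantic replies, the reward model learns
# to score them higher, and whatever is optimized against that reward
# inherits the bias.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Push the preferred reply's score above the rejected reply's score.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Illustrative scores for two replies to the same prompt. If users tend
# to pick the flattering reply, it ends up as `reward_chosen` most often.
reward_sycophantic = torch.tensor([1.3])
reward_blunt = torch.tensor([0.4])

loss = preference_loss(reward_sycophantic, reward_blunt)
print(loss.item())  # already fairly low: flattery is already being rewarded
```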
Ya, it's just how they choose to make them.
Well, it's a commodity to be sold at the end of the day, and who wants a robot that could contradict them? Or, heavens forbid, talk back?
Idk if that's why. Maybe partially. But for researchers, and for people who actually want answers to their questions, a robot that can disagree is necessary. I think the reason they have them agree so readily is that the AIs like to hallucinate. If it can't establish its own baseline "reality", then the next best thing is to have it operate off what people tell it as the reality, since if it tries to come up with an answer on its own, half the time it's hallucinated nonsense.