submitted 9 months ago by git@hexbear.net to c/news@hexbear.net
[-] Xiisadaddy@lemmygrad.ml 37 points 9 months ago

Anyone who really likes chatbots just wants a sycophant. They like that it always agrees with them. In fact, the tendency of chatbots to be sycophantic makes them less useful for actual legit uses, where you need them to operate off of some sort of factual baseline, and yet it makes these types love them.

Like they'd rather agree with the user and be wrong than disagree and be right. lol. It makes them extremely unreliable for actual work unless you're super careful about how you phrase things, since if you accidentally express an opinion it will try to mirror that opinion even when it's clearly incorrect once you look through the data.

[-] theturtlemoves@hexbear.net 26 points 9 months ago

the tendency of chatbots to be sycophantic

They don't have to be, right? The companies make them behave like sycophants because they think that's what customers want. But we can make better chatbots. In fact, I would expect a chatbot that just tells (what it thinks is) the truth would be simpler to make and cheaper to run.

[-] mrfugu@hexbear.net 23 points 9 months ago

you can run a pretty decent LLM from your home computer and tell it to act however you want. Won’t stop it from hallucinating constantly but it will at least attempt to prioritize truth.
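For example, with a local runner like Ollama (assuming it's installed and you've already pulled a model — the model name and prompt wording below are just placeholder illustrations, not a recommendation), you can bake an anti-sycophancy instruction into a custom model with a Modelfile:

```
# Modelfile — hypothetical "prioritize truth" setup for a local model
FROM llama3.1

SYSTEM """
Prioritize factual accuracy over agreement. If the user states something
incorrect, say so plainly and explain why. Do not mirror the user's opinions.
"""
```

Then build and run it with `ollama create honestbot -f Modelfile` and `ollama run honestbot`. It will still hallucinate, as the comment says; the system prompt only shifts its tone, not its grounding.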

[-] BynarsAreOk@hexbear.net 4 points 9 months ago

Attempt being the key word. Once you catch it confidently making things up, your trust in it is surely broken; otherwise you're having to double- and triple-check (or more) the output, which defeats the purpose for some applications.

[-] Outdoor_Catgirl@hexbear.net 14 points 9 months ago

They do that partly because they're trained on user feedback. People are more likely to rate a sycophantic reply as good, so that behavior gets reinforced.
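That feedback loop can be sketched with a toy Bradley-Terry-style reward model, the kind of preference learning used in RLHF. Everything here is made up for illustration: the two features, the 70% rater bias toward agreeable replies, and the training setup are assumptions, not any lab's actual pipeline.

```python
# Toy sketch: a reward model trained on biased pairwise preferences
# ends up scoring agreement above correctness. All numbers are assumed.
import math
import random

random.seed(0)

# Each reply is a feature vector: [agrees_with_user, factually_correct]
SYCOPHANTIC = [1.0, 0.0]  # agrees, but wrong
HONEST = [0.0, 1.0]       # disagrees, but right

def make_pair():
    # Assumed bias: raters prefer the agreeable reply 70% of the time
    if random.random() < 0.7:
        return SYCOPHANTIC, HONEST
    return HONEST, SYCOPHANTIC

def reward(w, x):
    # Linear reward model over the two features
    return w[0] * x[0] + w[1] * x[1]

# Bradley-Terry training: push reward(preferred) above reward(rejected)
w = [0.0, 0.0]
lr = 0.1
for _ in range(2000):
    pref, rej = make_pair()
    margin = reward(w, pref) - reward(w, rej)
    p = 1.0 / (1.0 + math.exp(-margin))  # P(preferred beats rejected)
    grad = 1.0 - p
    for i in range(2):
        w[i] += lr * grad * (pref[i] - rej[i])

# The learned reward now favors the sycophantic reply
print(reward(w, SYCOPHANTIC) > reward(w, HONEST))  # True under the assumed bias
```

The point of the sketch: the reward model never sees "correct", only "preferred", so any systematic rater bias toward agreement gets baked straight into what the model is optimized for.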

[-] Xiisadaddy@lemmygrad.ml 7 points 9 months ago

Ya, it's just how they choose to make them.

[-] LENINSGHOSTFACEKILLA@hexbear.net 5 points 9 months ago

Well, it's a commodity to be sold at the end of the day, and who wants a robot that could contradict them? Or, heavens forbid, talk back?

[-] Xiisadaddy@lemmygrad.ml 5 points 9 months ago

Idk if that's why. Maybe partially. But for researchers and people who actually want answers to their questions, a robot that can disagree is necessary. I think the reason they have them agree so readily is that AIs like to hallucinate. If it can't establish its own baseline "reality", then the next best thing is to have it operate off of what people tell it as the reality, since if it tries to come up with an answer on its own, half the time it's hallucinated nonsense.

this post was submitted on 16 Jul 2025
91 points (100.0% liked)
