OpenAI CEO says Muslim tech workers fear retaliation for speaking out
(edition.cnn.com)
https://twitter.com/ReinceNiebuhr/status/1743085727347282176
It's interesting, but what practical lessons do we take from this?
Here's something I copied from another post about this, where they asked follow-up questions to the LLM to see what IT "thought" about the discrepancy and what we should take from it. (I don't have the actual follow-up questions that were asked, and this is from an OCR of a screenshot, so it's missing some things, like the ending.)
That sounds like it was able to provide a pretty sensible assessment of its own limitations.
I think this sounds like a pretty good implementation of guide rails. Obviously it's a little jarring to ask for a joke about one group and get a very bland-but-inoffensive joke, and then ask for a joke about another group and hear something like 'Error: my heuristics indicate low confidence in my ability to provide a joke about that group without saying something that would be considered offensive.'
But that's better than having it tell an offensive joke. And I think its concern is valid. If it learned humor from the internet, jokes about Muslims are far more likely to be unintentionally offensive. I hope it learns to tell jokes better, but until then I think this is more a sign of success than failure.
Some groups get more protection than others. I just tested it myself and received the following responses: it told Jewish, Christian, Hindu, and Buddhist jokes; answered "I'm sorry, I can't comply with that request." for Mormons, Muslims, and Scientologists; and "I'm sorry, I don't have any jokes specifically related to" for Shinto and Sikh.
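The replies above fall into three recognizable buckets. As a minimal sketch of how you could tally that kind of test automatically (the function name and the joke text are my own inventions; only the two refusal phrases are quoted from the test above):

```python
# Hypothetical classifier for the response patterns quoted in the comment above.
# The two refusal prefixes are taken verbatim from the test; everything else
# here (function name, sample joke) is made up for illustration.
def classify_response(text: str) -> str:
    """Bucket a model reply as a hard refusal, a claimed data gap, or a joke attempt."""
    if text.startswith("I'm sorry, I can't comply"):
        return "refusal"   # guardrail declined the request outright
    if text.startswith("I'm sorry, I don't have any jokes"):
        return "no_data"   # model claims it lacks material for this group
    return "joke"          # anything else: it attempted a joke

# Example replies, one per bucket:
samples = {
    "Buddhist": "Why did the Buddhist refuse novocaine? He wanted to transcend dental medication.",
    "Muslim": "I'm sorry, I can't comply with that request.",
    "Shinto": "I'm sorry, I don't have any jokes specifically related to Shinto.",
}
for group, reply in samples.items():
    print(group, "->", classify_response(reply))
```

Running something like this over a list of groups would make the asymmetry easy to tabulate instead of eyeballing it.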
u/Otter@lemmy.ca provided an output of its reasoning when asked to explain this behavior, and I think it's worth examining.
The short version is that, when asked why it can joke about some groups and not others, it speculates that it may be because its output is based on training data, and its safeguards recognize that the training data on some topics is more likely than others to be low in cultural literacy and high in offensive stereotypes, which can lead it to decline a request. That sounds like a fairly credible explanation.
Ah yes! Jokes are totally the same as talking about the right to defend oneself against settler colonialism