Somebody managed to coax the Gab AI chatbot to reveal its prompt
(infosec.exchange)
I think what's more likely is that the training data simply doesn't reflect the things they want it to say. It's far easier for the trained behavior to push through than for the initial prompt to override it.