It's extremely hard to give a machine a sense of morality without having to manually implement it on every node that constitutes its network. Current LLMs aren't even aware of what they're printing out, let alone able to understand the moral implications of it.
The day a machine is truly aware of the morality of what it says, in addition to actually understanding it, then we'll truly have AI. Currently, we have gargantuan statistical models that people glorify into nigh-godhood.
I had what was quite literally the hottest character I ever came up with: a wizard who liked fire a bit too much for his own good. He was a master of flames, the best to come out of the Monastery he'd spent decades in. But the more power he gained through the fire, the more he lost his own mind. By the time of the campaign, he was in a sort of Limbo: he couldn't remember most of his life, and he couldn't shake off the insatiable desire to spread any flames he encountered. If he spent too long beside a fire, he would start to hear It louder and louder, to the point where he would lose control and be possessed by his flaming desire, which retained full memory of, and access to, the spells he no longer remembered. This often resulted in the complete destruction of everything around him.
I actually got to play this character, and he was a ton of fun with the party I had, but unfortunately the campaign was put on hold indefinitely due to personal matters of the DM's.