Personally, my money's on them being thoroughly lost in the AI safety sauce. The idea of AI going rogue has been a staple of pop culture for quite a long time (TV Tropes lists a lot of examples), and the relentless anthropomorphisation of LLMs makes it pretty easy to frame whatever fuck-ups they make as a sentient AI pulling malicious shit.
And given man's long and storied history of manipulating and misleading his fellow man, I can see plenty of opportunities for fuck-ups baked directly into your average LLM's training data.
To paraphrase one of South Park's more infamous jokes (because I'm not giving this shit any dignity):
Looks like we're getting a lotta sneer material here! The topic is "People Who Annoy Scoot".