I don't think we need to go as far as evopsych here... it may just be an artifact of modeling the environment at all - you learn to model other people as part of the environment, and you reuse models across people (some people are mean, some people are nice, etc.).
Then weather happens, and you've got yourself a god of bad weather and a god of good weather, or perhaps a god of all weather who's bipolar.
As far as language goes, it also works the other way: we overused these terms in application to computers, to the point that "thinking," in relation to computers, no longer means actual thinking.
Hmm, maybe that's premature - ChatGPT has history on by default now, so maybe that's where it got the idea that it was a classic puzzle?
With history off, it still sounds like the problem is in the training dataset, but the output is much more bizarre:
https://markdownpastebin.com/?id=68b58bd1c4154789a493df964b3618f1
Could also be randomness.
Selected snippet:
I have to say, with history off it sounds like an even more ambitious moron. I think their history feature may be sort of freezing the bot's behavior in time: the bot sees a lot of its own past outputs, and in the past it was a lot less into shitting LaTeX all over the place when doing a puzzle.