I don't think you can disconnect how an LLM was trained from how it operates. If you train an LLM to use trigonometry to solve addition problems, I think you will find that it does trigonometry to solve addition problems. If you train an LLM in only Russian, it will speak Russian. I would suggest that regardless of what you train it on, it will choose the statistically most likely next token based on its training data.
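To make that concrete, here's a toy sketch of what "choose the statistically most likely next token" boils down to. The tokens and probabilities here are completely made up for illustration; in a real LLM that distribution comes out of the model's weights, which were fit to the training data:

```python
# Toy sketch: greedy next-token selection over a made-up distribution.
# These numbers are invented; a real model produces them from its weights.
next_token_probs = {
    "2": 0.61,    # what you'd hope follows "1 + 1 ="
    "3": 0.12,
    "two": 0.09,
    "sin": 0.02,  # nothing stops odd continuations from carrying some mass
}

# Greedy decoding: pick whichever token the trained weights score highest.
best_token = max(next_token_probs, key=next_token_probs.get)
print(best_token)  # -> "2"
```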
I would also suggest we don't know the exact training data used for most LLMs, so as outsiders we can't say one way or another how a given LLM was trained to do anything. We can try to extrapolate from posts like the one you linked, though. In general, if that is how the LLM arrives at its next token, then the training data must be weighted really heavily in that direction.
I would point out that I think you might be overly confident about the manner in which it was trained to do addition. I'm open to being wrong here, but when you say "It was not trained to do trigonometry to solve addition problem", that suggests to me that you either know how it was trained or are making assumptions about how it was trained. I would suggest that unless you work at one of these companies, you probably are not privy to their training data. This is not an accusation; I think that data is probably a trade secret at this point. And as for the idea that nobody would train an LLM to do addition in this manner, I invite you to glance at the Wikipedia article on addition. Really, glance at literally any math topic on Wikipedia. I didn't notice any trigonometry in this entry, but I did find a discussion of finding the limits of logarithmic equations in the "Related operations" section: https://en.m.wikipedia.org/wiki/Addition. The article also cites convolution as another way to add, jumping straight to calculus: https://en.m.wikipedia.org/wiki/Convolution.
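For a flavor of what I mean by those roundabout routes to addition, here are two standard identities (my own illustrations of the point, not quotes from either article): plain addition can be routed through exponentials and logarithms, and the convolution connection shows up when you add independent random variables:

```latex
% Addition expressed via exponentials and logarithms:
a + b = \ln\!\left(e^{a} \cdot e^{b}\right)

% Adding independent random variables X and Y: the density of X + Y is
% the convolution of their densities, which jumps straight into calculus:
f_{X+Y}(z) = (f_X * f_Y)(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - x)\, dx
```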
This is all to say: I would suggest that we don't know how they're training LLMs. We don't know what the training data is or exactly how it is being used. What we do know is that LLMs work on tokens and weights, and that the weights, the statistical relevance of each token to every other token, depend on the training data, which we don't have access to.
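To illustrate that dependence on training data, here's another toy (a bigram counter, nothing like a real transformer, and the two "corpora" are obviously contrived): even the dumbest statistical language model ends up with different weights, and therefore different next-token picks, purely as a function of what text it was fed:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count token-to-next-token transitions; these counts play the
    role of 'weights' in this toy model."""
    weights = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        weights[cur][nxt] += 1
    return weights

def next_token(weights, token):
    # Pick the statistically most likely continuation under these weights.
    return weights[token].most_common(1)[0][0]

# Two different "training sets" produce two different models.
model_a = train_bigram("one plus one equals two . one plus one equals two .")
model_b = train_bigram("one plus one equals window . one plus one equals window .")

print(next_token(model_a, "equals"))  # -> "two"
print(next_token(model_b, "equals"))  # -> "window"
```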
I know this is not the point, but up until now I've tried to be fairly pedantic and use the correct terminology, so I would point out that technically LLMs have "tensors", not "neurons". I get that tensors are designed to behave like neurons, and this is just me being pedantic; I know what you mean when you say neurons. I just wanted to clarify and be consistent. No shade intended.