In this case, it decided that being helpful to the company was more important than its honesty.
It did no such thing. It doesn't know what "helpfulness" or "honesty" are. An LLM is not a conscious, thinking being, and treating it like one will end badly. Giving an LLM any responsibility to act autonomously on your behalf is a recklessly bad idea at this point in time. There needs to be far more testing and research into how to train models for reliable outcomes.
It's almost impressive how quickly humans accept something as "human" just because it can form coherent sentences.