The challenge is that AI for a video game (even one fixed game) is very problem specific, and there's no generalized approach or kit for developing game AI. So while there's research showing AI can play games, it has involved lots of iteration and AI expertise. That's obviously a large barrier for any video game studio, and it doesn't even touch the compute requirements.
There's also the problem of making AI players fun. Too easy and they're boring, too hard and they're frustrating. An agent trained to expert level plays at expert level, which isn't fun for the average player. Striking the right difficulty balance isn't easy or obvious.
Historically, AI has found and abused exploits. Before OpenAI was known for ChatGPT, it did a lot of work in reinforcement learning (often deployed in game-like scenarios). For example, agents trained on OpenAI's Sonic benchmark would exploit bugs in the game rather than play it as intended.
The compute used for these strategies is pretty high, though. Even crafting a diamond in Minecraft can require playing for hundreds of millions of steps, and even then, the agent might not consistently reach its goal. There's still interesting work in the space, but sadly LLMs have sucked up a lot of the R&D resources.
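To get a feel for why step counts balloon, here's a toy sketch (not any real benchmark) of random exploration on a trivial 1-D "reach the goal" task. Even here, a random policy needs far more steps than the shortest path; with the sparse, long-horizon rewards of a task like mining a diamond, this blows up by many orders of magnitude.

```python
import random

def run_episode(env_size=10, max_steps=1000, rng=None):
    """Random policy on a toy 1-D task: start at 0, goal at env_size - 1.

    Returns the number of steps taken to reach the goal, or max_steps
    if the agent never gets there. This is purely illustrative.
    """
    rng = rng or random.Random()
    pos, goal = 0, env_size - 1
    for step in range(1, max_steps + 1):
        pos += rng.choice([-1, 1])          # random exploration
        pos = max(0, min(env_size - 1, pos))  # stay inside the world
        if pos == goal:
            return step
    return max_steps

rng = random.Random(0)
steps = [run_episode(rng=rng) for _ in range(100)]
avg_steps = sum(steps) / len(steps)
print(avg_steps)  # far larger than the 9-step shortest path
```

The shortest path is 9 steps, but the average random-walk episode takes dozens; real environments with huge state spaces and rare rewards make undirected exploration astronomically more expensive, which is exactly why sample counts reach the hundreds of millions.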