I'm not an expert, but it sounds like you want an embedding+vector database. This essentially extracts the part of an LLM that "understands" (loaded term, note the quotes) the text you put in, and then does a lookup directly on that "understanding", so it's very good at finding alternate phrasings or slightly differing questions.
There's no actual text generation involved, and no need to retrain anything when adding new questions.
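To make that concrete, here's a minimal sketch of the lookup step. The vectors would normally come out of an embedding model; I've hand-made tiny stand-in vectors here just so the matching logic itself is visible (the question strings and numbers are entirely made up):

```python
import math

def cosine(a, b):
    # Cosine similarity: how close two embedding vectors point.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "Vector database": stored question -> its embedding.
# In a real setup these vectors are produced by the embedding model,
# not written by hand.
index = {
    "How do I reset my password?":    [0.9, 0.1, 0.0],
    "Where can I download invoices?": [0.1, 0.8, 0.2],
    "How do I delete my account?":    [0.7, 0.0, 0.5],
}

def lookup(query_vec, k=1):
    # Rank stored questions by similarity to the query's embedding.
    ranked = sorted(index, key=lambda q: cosine(query_vec, index[q]),
                    reverse=True)
    return ranked[:k]

# A rephrased question ("I forgot my login, help?") would embed near
# the password-reset vector, so it matches without any text generation.
print(lookup([0.85, 0.05, 0.1]))
```

Adding a new question is just embedding it and inserting the vector into the index — no retraining — which is the whole appeal. Real systems use approximate nearest-neighbor indexes instead of this brute-force scan, but the idea is the same.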
OpenSearch has an implementation (which I learned about just now while writing this comment and thus cannot vouch for); you could start there.
Yeah, even though I have a bit of background, I can't really make heads or tails of that OpenSearch doc at a glance; it's dense stuff.
In my experience, knowing the keywords to stick in a search engine is often half the battle; there are plenty of resources out there on "vector databases". "Semantic search", from the lede of the OpenSearch doc, might be another good search term to have around.
Feel free to ask me any other questions and I can try to answer to the best of my abilities, though again, not an expert and honestly I've never actually used these myself beyond toy examples.