I don't know how to say this in a less direct way. If this is your take then you should probably look into getting a bit more informed about what LLMs can do. Specifically, what they can do if you combine them with some code to fill the gaps.
Things LLMs can do quite well:
- Generate useful search queries.
- Dig through provided text to determine what it contains.
- Summarize text.
These are all the building blocks for searching the internet. If you are talking about local documents and such, retrieval-augmented generation (RAG) can be pretty damn useful.
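To make the RAG idea concrete, here is a rough sketch of the retrieval half: score your documents against the query, grab the best matches, and stuff them into the prompt. This toy version uses stdlib word-count cosine similarity instead of a real embedding model, and the documents and query are made up for illustration, but the shape is the same as what the tools below do.

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words vector; real RAG uses an embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    # Return the k documents most similar to the query.
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

docs = [
    "The parser reads the JSON config file at startup.",
    "Cats are small domesticated carnivorous mammals.",
    "Startup configuration is loaded from config.json by the parser.",
]
top = retrieve("how is the JSON config loaded", docs)
# The retrieved chunks become the context an LLM is asked to answer from.
prompt = "Answer using only this context:\n" + "\n".join(top) + \
         "\n\nQ: How is the JSON config loaded?"
```

The "code to fill the gaps" is exactly this glue: retrieval picks the relevant text, and the LLM only has to dig through and summarize what it was handed.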
This is already a thing: there are a myriad of LLM chat interfaces where you can either connect to models you are running locally or connect to the APIs of providers. "Open WebUI", "librechat" and "Big-AGI" are web interfaces. On desktop you have things like jan.ai and a lot more.