Don't get me wrong though... throwing an LLM at it would be a lot easier and faster. Just a mind-boggling use of resources for a task that could probably be done more efficiently :D
Setting this up with Apache Solr and a suitable search frontend runs a high risk of becoming an abandoned side project itself^^
Yeah, an LLM seems like the go-to solution, and the best one.
And talking about resources, we could use barely-smart models that can still generate coherent sentences, e.g. 0.5B-3B models offloaded to CPU-only inference.
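For reference, a minimal sketch of what CPU-only inference with a small model could look like via llama.cpp's CLI (the model filename and prompt here are just placeholders, not anything from the thread):

```shell
# Hypothetical llama.cpp invocation with a small quantized model.
# -ngl 0 keeps all layers on the CPU (no GPU offload), -t sets the thread count.
llama-cli -m qwen2.5-0.5b-instruct-q4_k_m.gguf \
  -ngl 0 -t 8 \
  -p "Summarize the following project README in one sentence:"
```

A 0.5B model quantized to 4-bit fits in well under 1 GB of RAM, so this runs fine on commodity hardware without a GPU.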