[-] bushvin@lemmy.world -1 points 5 months ago

Oh cool, implementing mediocre algorithms. What could possibly go wrong?

[-] warmaster@lemmy.world 7 points 5 months ago* (last edited 5 months ago)

> Local LLMs have been supported via the Ollama integration since Home Assistant 2024.4. Ollama and the major open source LLM models are not tuned for tool calling, so this has to be built from scratch and was not done in time for this release. We’re collaborating with NVIDIA to get this working – they showed a prototype last week.

Are all Ollama-supported algos mediocre? Which ones would be better?
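
For context on the quoted release note: at the time of this thread, Ollama's API had no native tool-calling support, which is why Home Assistant had to build that layer from scratch; later Ollama releases added an OpenAI-style `tools` parameter to the chat endpoint. Below is a minimal sketch of what that looks like, assuming a current Ollama build and the `ollama` Python package; the model name and the `get_light_state` tool schema are made up for illustration and are not part of Home Assistant's integration.

```python
# Minimal sketch of tool calling against a local Ollama server.
# Assumes a recent Ollama release (tool calling landed after this
# thread) and the `ollama` Python package. The `get_light_state`
# tool and the model name are hypothetical.
import ollama

# One hypothetical smart-home tool, described with an OpenAI-style
# function schema, the format Ollama adopted for its `tools` parameter.
tools = [{
    "type": "function",
    "function": {
        "name": "get_light_state",
        "description": "Return the on/off state of a named light.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "description": "Light entity name"},
            },
            "required": ["name"],
        },
    },
}]

response = ollama.chat(
    model="llama3.1",  # assumed: any local model trained for tool use
    messages=[{"role": "user", "content": "Is the kitchen light on?"}],
    tools=tools,
)

# When the model decides to call a tool, the reply carries structured
# tool calls rather than plain text; the caller is expected to run the
# tool and feed the result back in a follow-up message.
for call in response["message"].get("tool_calls") or []:
    print(call["function"]["name"], call["function"]["arguments"])
```

The point of the function-schema format is that the model returns a structured call the caller can dispatch directly, instead of free text that has to be parsed; that structured layer is the part the release note says was not yet ready.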
