Unfortunately it doesn’t work. I wish there were a way to integrate Ollama with Home Assistant and have it control the house intelligently, but that just doesn’t seem to exist at the moment.
I have a spare RTX 3080 that I’ve been running for a few weeks now, trying to integrate Home Assistant and Ollama (I even tried LocalAI), but no joy.
It works flawlessly and controls my devices with OpenAI, but I wanted to make this more privacy-focused and remove the need to reach for an online service.
The most requested feature for all AI integrations is to control the smart home.
Given how difficult it currently is to make different LLMs behave the way you want (believe me, I have tried to come up with a way to force all LLMs to output properly formatted service calls), this release lets HA control the devices as if it were using the default built-in intent agent, while still allowing you to chat with an LLM. I feel this is a good middle ground for now; feedback and contributions are always welcome.
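To illustrate why "properly formatted service calls" is the hard part: the agent has to turn free-form LLM text into a structured call. Here is a minimal, hypothetical Python sketch (function name and expected JSON shape are my own assumptions, not the integration's actual code) of defensively extracting a service call from model output:

```python
import json
import re


def extract_service_call(llm_output: str):
    """Pull the first JSON object out of free-form LLM text and check
    that it looks like a service call. Returns a dict, or None if the
    model produced nothing usable."""
    match = re.search(r"\{.*\}", llm_output, re.DOTALL)
    if not match:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    # Require at least a service name and a target entity.
    if "service" not in call or "entity_id" not in call:
        return None
    return call


# Models often wrap the payload in chatty prose, so parsing must tolerate it.
reply = 'Sure! {"service": "light.turn_on", "entity_id": "light.kitchen"}'
print(extract_service_call(reply))
```

Smaller local models frequently break even this loose contract (missing keys, invalid JSON, extra commentary), which is exactly why falling back to the built-in intent agent for control is a reasonable compromise.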
I know it won’t help, but it’s basically the same for me. I tried a couple of different models (Llama like in the live stream, home3b (fixte), and Mixtral 8x7B), but the output was just garbage all over.
I’m not even sure whether the prompt is working correctly, since I didn’t find a way to check the rendered output of the template.
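Since these prompt templates are Jinja, one rough way to sanity-check them outside HA is to render them directly with the `jinja2` library. This is only an approximation (HA adds its own template extensions and exposes richer state objects; the entity list here is made up for illustration):

```python
from jinja2 import Template  # pip install jinja2

# A cut-down prompt template in the style these integrations use.
prompt_source = (
    "You control a smart home. Devices:\n"
    "{% for e in entities %}- {{ e.name }} ({{ e.state }})\n{% endfor %}"
)

# Hypothetical device states, standing in for HA's real state machine.
entities = [
    {"name": "Kitchen Light", "state": "off"},
    {"name": "Thermostat", "state": "21.5"},
]

print(Template(prompt_source).render(entities=entities))
```

If the rendered text comes out empty or mangled here, the model never had a chance in the first place.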
I installed Ollama in Docker, but found that my 2nd-gen i3 is too weak to run AI locally, even though I have an NVIDIA P1000 and 30 GB of RAM. It can run some chatbots, but slowly. Other models are basically impossible to run on the machine I use as a server for HA.
That’s my 2 cents.
I was able to get a different integration to leverage LocalAI and a non-Meta / Llama model successfully but the performance was dreadful.
The performance issue was with LocalAI and that particular model… I just cannot find anything else that works. I believe the issue is the prompting, but I have not been willing to modify it.
I used this guide and have an LLM set up. The Text Generation Webui works in concert with the Llama Conversation integration in HA. I tried to use the prompt from the FutureProofHomes localai.io install with modest success. Text Generation Webui gives you the opportunity to load different models to experiment with. I have only been at this for a day or so. If anyone else has installed this, I would love to see what model you installed and what your prompt looks like. The default prompt in Llama Conversation is useless.
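For anyone comparing notes, here is a rough, hypothetical starting-point prompt (not the integration's default, not taken from any guide, and not tuned against any particular model; the `{{ devices }}` variable is a placeholder for whatever the integration injects):

```
You are a smart home assistant for Home Assistant.
To control a device, respond with a single JSON service call, for example:
{"service": "light.turn_on", "entity_id": "light.kitchen"}
Available devices:
{{ devices }}
Respond only with the JSON call, with no extra text.
```

In my limited experience, smaller models need the output format spelled out this explicitly, and even then results vary a lot between models.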