Control Home Assistant with AI

My mini setup to control devices and entities at my home.

I have a PC on my bench with a 3080 GPU running Ubuntu, with Home Assistant, Ollama, and a bunch of other stuff on Docker. Some kind of home-server-ish machine :slight_smile:

In the demo I used Extended OpenAI Conversations, but the same setup works with Ollama on my local machine using the Home LLM and Ollama integrations in Home Assistant.

For the cloud model I’m using GPT-4o. With local Ollama I’m using the models fixt/home-3b and llama3.

This amazing guy has a complete video about setting up Home Assistant with Home LLM using Ollama and fixt/home-3b.

And this amazing guy has some crazy videos about using Extended OpenAI Conversations and AI with Home Assistant.

Some thoughts:

llama3 and fixt/home-3b work fine, but since I use a switch to turn on my front and main lights, they sometimes fail to parse the call correctly: they try to turn on a “light” entity, but it’s actually a switch named “bridge front light”.
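One possible workaround (my own suggestion, not something from the posts above) is Home Assistant's "Light switch" platform, which wraps a switch entity in a `light` entity, so an LLM asked to turn on a light will find an actual light to act on. A minimal sketch, assuming the switch's entity ID is `switch.bridge_front_light`:

```yaml
# configuration.yaml — expose a switch as a light entity
# (the entity ID below is hypothetical; substitute your real switch)
light:
  - platform: switch
    name: Bridge Front Light
    entity_id: switch.bridge_front_light
```

After a restart, `light.bridge_front_light` should show up alongside the original switch, and you can expose only the light to your voice assistant.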

None of them manage to move my servo, though. Probably something in my setup is wrong.

GPT-4o gets it most of the time, but not every time. That said, Extended OpenAI Conversations seems slightly better at taking and understanding actions in general, and a bit more stable.

I’m going to try to compare them more deeply if I get the time.


Cool! Also take a look at the Functionary LLMs. You can run them on a llama-cpp-python server with an OpenAI-compatible API (so you can use Extended OpenAI Conversations), or on a vLLM server with an OpenAI-compatible API.

I’ve read about this model but I couldn’t wrap my head around it. I’m fairly new to actions/functions and the like.

But definitely going to try this out.

I have written an installation guide for building your own Docker image of llama-cpp-python that works with the Functionary LLM (there were some issues before, but they might be fixed by now). If you are running Ubuntu 22.04, you can also quickly try this:

```
docker run -p 8000:8000 \
  -e USE_MLOCK=0 \
  -e HF_MODEL_REPO_ID=meetkai/functionary-small-v2.4-GGUF \
  -e MODEL=functionary-small-v2.4.Q4_0.gguf \
  -e HF_PRETRAINED_MODEL_NAME_OR_PATH=meetkai/functionary-small-v2.4-GGUF \
  -e N_GPU_LAYERS=33 \
  -e CHAT_FORMAT=functionary-v2 \
  -e N_CTX=4092 \
  -e N_BATCH=192 \
  -e N_THREADS=6 \
  bramnh/llama-cpp-python:latest
```