Using Llama 3 to control Home Assistant

Hi all, I know that officially (especially after watching the live feed on YouTube last night) local AI with function calling is not working yet. But I’m wondering: how is this guy getting it to work?

I now have an almost identical setup to his, with a container in Proxmox running Ollama and Open WebUI. The container uses an RTX 4070 (12 GB VRAM), which is blazingly fast for the Ollama integration in HA, and the GPU also serves another container running Whisper / Piper.
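
As a quick sanity check that the container is actually serving requests, I’ve been hitting Ollama’s REST API directly. A minimal sketch in Python (the IP is just a stand-in for my container’s address, and it assumes you’ve already pulled a llama3 model):

```python
import requests

# Assumes Ollama's default port (11434) and a model already pulled
# with `ollama pull llama3`. Adjust the host and model to your setup.
OLLAMA_URL = "http://192.168.1.50:11434"  # hypothetical container IP

resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={
        "model": "llama3",
        "prompt": "Reply with the single word: pong",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
```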

I get great, very fast responses when asking about the state of sensors or general knowledge questions in HA via Assist, but I can’t get it to control any devices, even though I’ve followed his tutorials to the letter.

So I was just wondering if anyone else has Ollama (or another local AI) controlling their HA? Would love to talk to you!

Are you using the custom integration mentioned in the blog post (Home-LLM), or the official Ollama integration? The official integration cannot control your devices; Home-LLM can. It is installed via HACS.

Yes, I’ve tried that but cannot get it to control devices. Do you have a working setup? If so, I’d be really interested to hear what your setup is, and your prompt in HA. Also, how many entities do you have exposed to Assist?

OK here’s my setup:

Ollama is running inside an LXC container in Proxmox, with my GPU passed through (GTX 1060 6GB).
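
For what it’s worth, the way I sanity-check the passthrough from inside the container is just shelling out to nvidia-smi (wrapped in Python here since that’s what I script with). If it errors, the passthrough or the container’s NVIDIA driver setup is off:

```python
import subprocess

# Quick check from inside the LXC container that the passed-through
# GPU is visible. An error here points at the passthrough config or
# the container's NVIDIA driver setup.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```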

I’m currently using the LLM from here, which is fine-tuned to work better with Home-LLM. So, to be clear, I am not using Llama 3. I briefly tested Mistral Instruct, but my GPU is clearly not performant enough for that one (and I’m using the GPU for other things besides LLMs).

Depending on what model you choose, there is a lot of bespoke configuration you need to do to get it working. I suggest reading through this very closely if you haven’t already. That blog and video seem to be outdated too: now when you configure a model, you need to choose ‘Assist’ in the Options for the LLM to be able to control your devices. It might help if you posted screenshots of your model config in Home Assistant.
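
One thing that helped me debug was talking to the model outside of HA entirely, with a system prompt that mimics the kind of device list the integration injects. Here’s a rough sketch against Ollama’s chat endpoint; the device list and wording are made-up examples, not the exact template Home-LLM uses:

```python
import requests

OLLAMA_URL = "http://localhost:11434"

# A made-up stand-in for the context the integration injects; the
# real Home-LLM prompt template is more involved than this.
system_prompt = (
    "You control a smart home. Devices:\n"
    "light.kitchen 'Kitchen Light' = off\n"
    "switch.bedroom_fan 'Bedroom Fan' = on\n"
    "Respond with the service call needed to fulfil the request."
)

resp = requests.post(
    f"{OLLAMA_URL}/api/chat",
    json={
        "model": "llama3",  # or the fine-tuned Home-LLM model you pulled
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Turn on the kitchen light"},
        ],
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```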

As for the number of exposed devices, I currently have 33, and I think 32 may actually be the max supported? I can get mine to control things reasonably well, but sending it single words, or sometimes any sentence that does not involve controlling devices, often causes it to give the ‘unexpected error’ message.
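
On counting: I don’t know of an API that reports the Assist exposure list directly, but for a rough ceiling check you can count total entities via HA’s states endpoint (exposed entities are a subset of these). This sketch assumes a long-lived access token:

```python
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"  # create one under Profile -> Security

resp = requests.get(
    f"{HA_URL}/api/states",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(f"{len(resp.json())} total entities")
```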

Great, thanks very much for the reply. I had only followed the video I posted above, not the guide you linked to, so I haven’t tried the Mistral model yet, but I’ll go through it with a fine-tooth comb and see if I can get it working.

And is there still a limitation of 32 devices? I must admit I’m hoping to query / control a lot more than that (currently I have about 100 entities exposed to Assist).

Anyway, looking forward to getting this working. Thanks again.

Hi @cnose, I was just wondering if you’d be willing to share the prompt you use for your setup? I can definitely see the power of a decent prompt, but it would be helpful to have an example of something that works reasonably well.

Oh, and just as an update: I followed the guide you linked, and I’ve got it working! Or at least, it mostly turns the correct device on / off, though I’ve noticed it sometimes controls the wrong device. It’s the furthest I’ve got so far, so thanks again for the pointer.

I’m not sure if this helps at all, but I’m working on an integration that might be useful to you, since it’s working right now.

I should preface this by saying it’s for a completely unrelated project, but the logic you need might be in here.
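
To give a flavour of the logic I mean: at its core you’re parsing structured output from the model into a Home Assistant service call over the REST API. A stripped-down sketch; the JSON shape is my own convention for illustration, not necessarily what my integration (or Home-LLM) emits verbatim:

```python
import json
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

def execute_llm_action(llm_output: str) -> None:
    """Parse a JSON action from the model and call the matching HA service.

    Expects something like (illustrative convention only):
      {"service": "light.turn_on", "entity_id": "light.kitchen"}
    """
    action = json.loads(llm_output)
    domain, service = action["service"].split(".", 1)
    resp = requests.post(
        f"{HA_URL}/api/services/{domain}/{service}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"entity_id": action["entity_id"]},
        timeout=30,
    )
    resp.raise_for_status()

execute_llm_action('{"service": "light.turn_on", "entity_id": "light.kitchen"}')
```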

I’d probably stick with Llama 2 for production until Llama 3 is proven stable. It’s great to play with, but production needs to be bulletproof.
I plan on playing with Llama 3 in my sandbox environment, though.
