Custom Integration: Ollama Conversation (Local AI Agent)

Thanks for this addon. I’m using it to enrich my washing-machine Mastodon account.

I use old hardware, so I needed to change the hardcoded timeout of 60 seconds in /config/custom_components/ollama_conversation/const.py.
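
For anyone else on slow hardware, the change is a one-liner. A sketch, assuming the constant in const.py is named TIMEOUT (check your copy of the file, the name may differ between versions):

# /config/custom_components/ollama_conversation/const.py
# Assumption: the hardcoded timeout is a module-level constant named TIMEOUT.
TIMEOUT = 300  # raised from the default 60 seconds for slow hardware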

P.S. Is it possible to alter the system prompt with a service?

This model claims to work with a different integration that gives Ollama control: “Easiest way to Control your Smart Home with AI: Ollama + Home Assistant”.

Unfortunately, it doesn’t work. I wish there were a way to integrate Ollama with Home Assistant and make it control the house in an intelligent fashion, but it just doesn’t seem to exist at the moment :frowning:

I have a spare RTX 3080 that I’ve been running for a few weeks now trying to integrate both Home Assistant and Ollama (and even tried with LocalAI) but no joy.

It works flawlessly and controls my devices with OpenAI, but I wanted to make this more privacy-focused and avoid the need to reach for an online service.

The litellm project provides an OpenAI-compatible proxy for Ollama. This way, Ollama can be used with OpenAI API calls.
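
A minimal sketch of using it from Python, assuming litellm is installed (pip install litellm) and Ollama is serving an already-pulled llama2 model on its default port:

# Sketch: calling a local Ollama model through litellm's OpenAI-style API.
# Assumptions: pip install litellm; Ollama on localhost:11434; llama2 pulled.
from litellm import completion

response = completion(
    model="ollama/llama2",              # litellm's provider/model syntax
    api_base="http://localhost:11434",  # default Ollama endpoint
    messages=[{"role": "user", "content": "What is water made of?"}],
)
print(response.choices[0].message.content)

litellm also ships a proxy server (litellm --model ollama/llama2) that exposes a standard OpenAI endpoint, which is what an OpenAI-based HA integration would point at.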


That’s interesting! Thank you for sharing! Need to figure out how to configure this with HA and then relay stuff to Ollama and back :slight_smile:

A recent release (0.1.25) of Ollama added an OpenAI-compatible interface, at least for some subset of the API.
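
That would make the proxy unnecessary for basic calls. A quick sketch against Ollama’s built-in /v1 endpoint, assuming the official openai Python package and a local llama2 model:

# Sketch: Ollama >= 0.1.25 exposes an OpenAI-compatible /v1 endpoint.
# Assumptions: pip install openai; Ollama on localhost:11434; llama2 pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # required by the client but ignored by Ollama
)
reply = client.chat.completions.create(
    model="llama2",
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(reply.choices[0].message.content)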

I made yet another integration.

v1.0.3-beta.1

The most requested feature for all AI integrations is to control the smart home.
Believe me, given how difficult it currently is to make different LLMs behave the way you want, I have tried to come up with a way to force all LLMs to output properly formatted service calls…

This release lets HA control devices as if you were using the default built-in intent agent, while still allowing you to chat with an LLM. I feel this is a good middle ground for now; feedback and contributions are always welcome :wink:
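
In rough pseudocode, the flow is something like this (a sketch of the idea, not the integration’s actual code; builtin_intent_agent and ollama_client are hypothetical stand-ins):

# Sketch of the "middle ground": try HA's built-in intent matching first,
# and only fall back to the LLM for free-form chat.
# builtin_intent_agent and ollama_client are hypothetical stand-ins.
async def handle_utterance(text: str) -> str:
    intent = await builtin_intent_agent.recognize(text)
    if intent is not None:
        # The default agent understood a device command: execute it.
        return await builtin_intent_agent.execute(intent)
    # Otherwise hand the text to the LLM for a conversational reply.
    return await ollama_client.chat(text)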


So far I keep the Ollama AI separate from Home Assistant, because there is no real solution for using Ollama to control devices at home.

Those who want to go this route can install a nice web UI, and if they have a good GPU they can load a Home Assistant advisor AI too.

You need a somewhat more expensive GPU for this: something with 24 GB of VRAM, like a 4090.

The blog post for the 2024.4 release mentions an Ollama integration, but there are no docs yet: 2024.4 Beta: Organize all the things! - Home Assistant

That’s really good news, as Ollama is really the best and easiest way to run local AI, and its API works perfectly if you test it from the command line:

curl http://192.168.0.100:11434/api/generate -d '{ "model": "llama2:13b", "prompt": "What is water made of?" }'
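
Note that /api/generate streams newline-delimited JSON by default. The same call from Python with streaming disabled, as a sketch using the requests library:

# Sketch: the same /api/generate call from Python.
# /api/generate streams newline-delimited JSON unless "stream" is false.
import requests

resp = requests.post(
    "http://192.168.0.100:11434/api/generate",
    json={
        "model": "llama2:13b",
        "prompt": "What is water made of?",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])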

We need a reliable integration in HA.

I’m using the Llama2:13b model and the responses are great via chat…but horrific for Home Assistant!

Any ideas what could be wrong?
Is there a better model to use?


I decided to see what happens when I send it a command:

Do I need to modify the prompt template?

For context, Home Assistant’s conversation agent works fine:

I know it won’t help, but it’s basically the same for me. I tried a couple of different models, even Llama like in the live stream, Home-3B (fixed), and Mixtral 8x7B, but it was just garbage all over.
I’m not even sure the prompt is working correctly, since I didn’t find a way to check the output of the template.
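
One way to check is Developer Tools → Template in HA, which renders Jinja templates live. For a rough offline preview, you can render the prompt with plain Jinja2, though HA adds its own helpers (states() and friends) that you would have to stub out; a sketch with a made-up prompt:

# Sketch: rough offline preview of a prompt template with plain Jinja2.
# HA extends Jinja with helpers like states(); stub whatever your prompt uses.
from jinja2 import Template

PROMPT = "You are a smart home assistant.\nDevices: {{ devices | join(', ') }}"

print(Template(PROMPT).render(devices=["light.kitchen", "switch.fan"]))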

I installed Ollama in Docker, but found out that my 2nd-gen i3 is too weak to run AI locally, even though I have an NVIDIA P1000 and 30 GB of RAM. It can run some chatbots, but slowly. Some other models are basically impossible to run on the computer I use as a server for HA.
That’s my 2 cents.

I was able to get a different integration to leverage LocalAI and a non-Meta/Llama model successfully, but the performance was dreadful.

The performance issue was with LocalAI and that particular model… I just cannot find anything else that works. I believe the issue is the prompting, but I have not been willing to modify it.

Is it possible to send the output of the Ollama conversation agent to a media player?

I have local STT and TTS set up with Whisper and Piper, and can create an automation to send TTS to a media player.

Can the same be done with Ollama?

I’m guessing it’s not possible at this stage, as it seems that Ollama cannot ‘control’ anything.
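
For the relay part at least, something like this sketch against HA’s REST API looks feasible; the host, token, and entity ids here are hypothetical:

# Sketch: pipe a conversation agent's reply to a media player via HA's
# REST API. Host, token, and entity ids below are hypothetical.
import requests

HA = "http://homeassistant.local:8123"
HEADERS = {"Authorization": "Bearer YOUR_LONG_LIVED_TOKEN"}

# 1. Ask the conversation agent (Ollama, if it is the configured agent).
chat = requests.post(
    f"{HA}/api/conversation/process",
    headers=HEADERS,
    json={"text": "Tell me a one-line fact about water."},
).json()
answer = chat["response"]["speech"]["plain"]["speech"]

# 2. Speak the answer on a media player through Piper.
requests.post(
    f"{HA}/api/services/tts/speak",
    headers=HEADERS,
    json={
        "entity_id": "tts.piper",                          # hypothetical
        "media_player_entity_id": "media_player.kitchen",  # hypothetical
        "message": answer,
    },
)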


I used this guide and have an LLM set up. The Text Generation Webui works in concert with the Llama Conversation integration in HA. I tried to use the prompt from the FutureProofHomes localai.io install with modest success. Text Generation Webui gives you the opportunity to load different models to experiment with. I have only been at this for a day or so. If anyone else has installed this, I would love to see what model you installed and what your prompt looks like. The default prompt in Llama Conversation is useless.


Ollama says that it is controlling devices, but the states do not actually change… which is WEIRD.

During my experimentation, I noticed some false reporting. Hallucinations I guess.

Any plans to include the new Mistral?

https://www.reddit.com/r/LocalLLaMA/comments/1cy61iw/mistral7b_v03_has_been_released/