Hi, I’m looking for some help or pointers with the voice assistant performing actions. I set this up nearly a year ago and I’m sure it was working, turning lights on and off etc., but I’ve just come back to it and it’s no longer performing the actions. If I take STT and TTS out of the picture for now and just use the conversation text input, whenever I ask it to perform an action like “turn on lamp” etc. I get the following error:
Sorry, there was a problem talking to the backend: JSONDecodeError(‘Expecting value: line 1 column 1 (char 0)’)
Sometimes I get the action back, but as text, like below.
I’m using a local LLM, llama 3.1 7b, running on Ollama on a separate host. I’ve tried a couple of different hosts and LLMs and they all give me the same issue. I have tried both the Ollama Home Assistant integration and the Local LLMs integration with the same results. I’m probably doing something stupid, but I’m scratching my head as to why I can’t get the basics working when I’m sure they were before. Could anyone shed any light?
If I ask it anything else it works, like “tell me a story about robots” etc. I have selected Assist under “Control Home Assistant” and I have exposed 9 lights to it.
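In case it helps anyone reproduce this, here is a minimal way to check the Ollama endpoint directly, independent of Home Assistant, since the error is a JSONDecodeError (just a sketch: it assumes Ollama’s default port 11434 and a pulled llama3.1 model tag, and the hostname is a placeholder for your own host):

```python
# Sanity check against the Ollama HTTP API, bypassing Home Assistant entirely.
# If this prints valid JSON, the Ollama side is healthy and the problem is more
# likely in the Assist pipeline / prompt configuration.
import requests

OLLAMA_URL = "http://ollama-host.local:11434/api/generate"  # placeholder hostname

payload = {
    "model": "llama3.1",                      # adjust to the tag you actually pulled
    "prompt": "Reply with the single word OK.",
    "stream": False,                          # one complete JSON object, not a stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
print("HTTP status:", resp.status_code)

try:
    data = resp.json()
    print("Model reply:", data.get("response", "").strip())
except ValueError:
    # Same symptom as the Assist error: the body isn't JSON at all.
    print("Body was not valid JSON:")
    print(resp.text[:500])
```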
Then I’m almost 99% sure you have a context problem and your LLM doesn’t know what it’s doing.
Your prompt, yes, the instructions. The ‘You are a helpful assistant…’ part.
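Purely as an illustration (this is not the integration’s default text, adapt it to your own setup), the instructions field is the sort of thing that needs to tell the model to actually use its tools rather than just chat:

```
You are a voice assistant for Home Assistant.
Answer questions about the home and control the devices that have been exposed to you.
When asked to turn something on or off, call the appropriate tool instead of describing the action in text.
Keep responses short and to the point.
```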
Read the first post in Friday’s Party.
Then let’s talk.
Yes, I know, I’m 190 posts deep. LLM context engineering is a huge subject area. We have to unwind grandma’s box o’ junk (read the post and you’ll understand).