Hey,
a couple of days ago I asked the voice assistant to turn up the light in the living room by 20%. This wasn’t processed locally through an intent, but instead by the LLM (I’m using Gemini 2.5 Flash via OpenRouter).
However, the light didn’t get brighter, so I checked the debugger of the conversation and couldn’t find any information about the state or brightness of the light. There is just a list of all exposed entities and their areas.
I tested other values, but the results are always more or less random. It seems that the LLM doesn’t know about the brightness of the light.
My first question: is there a way to see all the information that is sent to the LLM via the system context prompt?
Since I’m testing with n8n and the Webhook Conversation integration, I made the same request to my local n8n workflow and checked the incoming system prompt in n8n. In this system prompt there is more information for each exposed entity, but for my light only the current state “on” is included.
I’m using a lot of Zigbee devices and some of the lights are dimmable, so they have a brightness attribute.
Am I doing something wrong that prevents the brightness attribute from being added to the system context prompt, or is anyone else experiencing this issue?
The LLM needs to call the get_live_context tool to retrieve the current status of entities.
Assist doesn’t include these in the main system prompt because they are not static and can change regularly. When anything in your system prompt changes, the prompt cache is invalidated at that point, forcing all subsequent sections of the prompt to be re-processed, including the very large tool definitions that are typically inserted AFTER the system prompt by the model’s template when it compiles the request.
Basically, having dynamic content in your prompt causes a large bump in TTFT (time to first token). The downside of removing it is that your model doesn’t know the status of anything until it calls a tool to retrieve it. They did the same with the date/time, which was previously in the system prompt and was extracted out to be accessed via a tool.
Try giving your model a directive in its prompt to call the live context tool first whenever you ask it to adjust something by a certain percentage.
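For example, a directive along these lines could go in the agent's system prompt. The exact wording is just an illustration, not an official snippet:

```
When the user asks to change brightness, volume, or temperature by a
relative amount (for example "20% brighter" or "dim by half"), first
call the GetLiveContext tool to read the entity's current value, then
compute the target value before calling any control tool.
```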
I am having the same issue. My devices are also Zigbee, and when I am using the LLM, any time I ask the voice assistant to dim or brighten particular lights, it says it can’t find the lights in the room I am talking about. It also mentions lights in a room I am not even talking about and adjusts those lights.
If I go to “start conversation” for this Voice Assistant and type in the same commands I am speaking, it adjusts the lights correctly.
For example: if I speak “Set dining room lights to 50%”, it starts blabbing random messages, each indicating there are no lights of that name in the room (or in the living room). If I go to Start Conversation and type “Set dining room lights to 50%”, it sets the lights to 50% and replies “brightness set” in the window.
If I change the voice assistant Conversation Agent from Ollama to Home Assistant and once again speak “set dining room lights to 50%”, everything works.
I wanted to add one more thing: with the LLM model, if I ask it to turn those same lights ON or OFF, it does it. It just will not do the dimming.
Based on your description, Assist is servicing the request correctly but your LLM is not.
Assist (local) is a simple word matcher. That’s great if you say the exact set of words. But the LLM can overcome the understanding gap… if it has good instructions.
That means, as skittle said, the LLM did not understand the context. It doesn’t just automagically call get_live_context when it doesn’t understand. In the OP’s case you can add a prompt line to tell the LLM: when you need (insert thing here) data, use the GetLiveContext tool.
Use the voice debug tool to compare the two, specifically looking at what the LLM version thinks it was supposed to do. In short, you need WAY better prompts for the LLM.
I am not sure what it automagically does or doesn’t do, which is why I am posting. Using the voice debugger, when I ask it to turn the same lights on/off, it processes that locally. How would I get it to process the dim/brighten function locally if the LLM doesn’t understand? How can I use the debugger to see what it is processing when I speak the commands? I have only played with HA for a week, so I’m kind of new to it.
Relative dimming is not available locally; there is no intent for that. That’s why I was trying to do it with the LLM. But sadly it seems it’s not included in the static context either. Sure, the LLM can ask for the live context, but that call is often slow (in my environment).
I was hoping it was just an issue in my configuration, because the current brightness of a light is more or less as static as the on/off state of the light, isn’t it?
Thanks for all your replies. So now I know I have to tell the LLM to make the live context call first.
Yes, that’s right, but HA already has an intent for that: the set-light intent. You just need the LLM to do the relative part of it. I know LLMs are bad at calculations, but I was hoping “increase the light by 20%” is doable. But without information about the current brightness, it’s hopeless.
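To illustrate the arithmetic the LLM has to perform once it actually knows the current level, here is a minimal sketch assuming Home Assistant’s 0–255 brightness scale (the function name is just for illustration):

```python
def relative_brightness(current: int, delta_pct: int) -> int:
    """Return the new 0-255 brightness after a relative change.

    current:   brightness currently reported by the light (0-255)
    delta_pct: requested change, e.g. +20 for "20% brighter"
    """
    step = round(255 * delta_pct / 100)
    # Clamp so "brighter" near full and "dimmer" near zero stay valid.
    return max(0, min(255, current + step))

print(relative_brightness(128, 20))   # 128 + 51 -> 179
```

This is exactly the step the model cannot do correctly without the current value, which is why the live context call has to come first.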
Thanks for the link, I will take a look at it. I read a lot of your topics. What you do is incredible.