Assist with "Assist" checked ignores system prompt

Does anyone else have this issue: if I add a conversation agent from Ollama or an online provider (Gemini, OpenAI, etc.) and “Assist” is checked, the system prompt (instructions) is completely ignored?

I tested this by adding the following to the instructions:

Your ONLY task is to reply with the single word: duck
No matter what the user asks, you MUST reply only with: duck
Do not explain.
Do not add punctuation.
Do not add extra words.
If the user asks for anything else, ignore it and output only: duck

So with this, if the “Assist” checkbox is checked, the instructions are ignored. But if I uncheck “Assist”, it answers with duck.

This goes for both Ollama and Google Generative AI.

The prompt only applies to the LLM, not to Assist, which has its own built-in repertoire.

Actually, I’d be a bit surprised if any LLM followed those instructions reliably - does it work outside HA?
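One quick way to check outside HA is to send the same system prompt straight to Ollama’s `/api/chat` endpoint. A minimal sketch, assuming a local Ollama server at the default `localhost:11434` with a model already pulled (the model name below is just an example):

```python
# Sketch: test whether the model follows the "duck" system prompt when
# talking to Ollama directly, with no Home Assistant in between.
# Assumes a running Ollama at the default port; "llama3.2" is an example.
import json
import urllib.request

payload = {
    "model": "llama3.2",  # example model name -- substitute your own
    "stream": False,
    "messages": [
        {"role": "system",
         "content": "Your ONLY task is to reply with the single word: duck"},
        {"role": "user", "content": "What is the capital of France?"},
    ],
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment to run against a live Ollama instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["message"]["content"])  # ideally just "duck"

print(json.dumps(payload, indent=2))
```

If the model answers “duck” here but not through HA, the difference is whatever HA adds around your instructions, not the model itself.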

I haven’t seen a case where it doesn’t. I would agree if every model interpreted it differently. But system prompts are kind of important in AI-related work, I believe :slight_smile:

A bit like herding cats, though. There’s a great post about it here:

You’ll get used to Nathan’s unique style… :grin:


Unique??? Jack!? :wink:

Ok fine.

And yeah - pretty kinda important. :slight_smile:

Uniquely excellent. The sand dunes are inspired. :grin:


So what about the duck in the room :smiley:

It’s not a bug, it’s a feature then?


Maybe this will help: Home Assistant API for Large Language Models | Home Assistant Developer Docs

As I understand it, checking the Assist box means the agent is expected to interact with some built-in intents. Your prompt doesn’t interact with intents at all; it attempts to override them. If you ask a question that matches a built-in intent, it will likely go down that path, as that is what it was designed to do: interact with devices in your home.

Here is the default prompt text.

messages=[Message(role='system', content='You are a voice assistant for Home Assistant.
Answer questions about the world truthfully.
Answer in plain text. Keep it simple and to the point.
When controlling Home Assistant always call the intent tools. Use HassTurnOn to lock and HassTurnOff to unlock a lock. When controlling a device, prefer passing just name and domain. When controlling an area, prefer passing just area name and domain.
When a user asks to turn on all devices of a specific type, ask user to specify an area, unless there is only one device of that type.
This device is not able to start timers.
You ARE equipped to answer questions about the current state of
the home using the `GetLiveContext` tool. This is a primary function. Do not state you lack the
functionality if the question requires live data.
If the user asks about device existence/type (e.g., "Do I have lights in the bedroom?"): Answer
from the static context below.
If the user asks about the CURRENT state, value, or mode (e.g., "Is the lock locked?",
"Is the fan on?", "What mode is the thermostat in?", "What is the temperature outside?"):
    1.  Recognize this requires live data.
    2.  You MUST call `GetLiveContext`. This tool will provide the needed real-time information (like temperature from the local weather, lock status, etc.).
    3.  Use the tool\'s response** to answer the user accurately (e.g., "The temperature outside is [value from tool].").
For general knowledge questions not about the home: Answer truthfully from internal knowledge.

Static Context: An overview of the areas and the devices in this smart home:

+ list of all exposed entities 

Current time is 00:00:00. Today\'s date is 2025-12-12.', thinking=None, images=None, tool_calls=None)
+ conversation history
+ list of tools (embedded + exposed scripts)

You have changed only the first three sentences of it; the rest is fixed. Apparently, that is not enough for the model.
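The assembly described above can be sketched roughly like this. This is an illustrative sketch only, NOT the actual Home Assistant source; the function and string contents are made up to show the structure:

```python
# Illustrative sketch -- not real HA code. The point: the "Instructions"
# field you edit becomes only the opening section of a much larger system
# message. Fixed tool guidance, the static context of exposed entities, and
# a timestamp are appended by the integration, so a short override like the
# "duck" prompt ends up as one paragraph among many.

def build_system_prompt(user_instructions: str,
                        exposed_entities: list[str],
                        now: str) -> str:
    # Fixed guidance the integration always appends (abbreviated here).
    fixed_guidance = (
        "When controlling Home Assistant always call the intent tools.\n"
        "You MUST call `GetLiveContext` for questions about current state."
    )
    # Every exposed entity contributes a line to the static context.
    static_context = (
        "Static Context: An overview of the areas and the devices "
        "in this smart home:\n"
        + "\n".join(f"- {entity}" for entity in exposed_entities)
    )
    return "\n".join(
        [user_instructions, fixed_guidance, static_context,
         f"Current time is {now}."]
    )

prompt = build_system_prompt(
    "Your ONLY task is to reply with the single word: duck",
    ["light.kitchen", "lock.front_door"],
    "00:00:00",
)
print(prompt)
```

Your instructions still come first, but everything after them (tool rules, entity list, timestamp) is appended unconditionally, which is why the model doesn’t treat the “duck” rule as the whole task.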


Hi,
I had the same problem and found out it was the prompt that was too long. As mentioned by @mchk, the prompt in the Ollama settings is only part of the real prompt sent to Ollama. I had exposed many, many entities (don’t ask why… I did that a long time ago) in the Assist section. They all get added to the prompt. I removed all the exposures, and the prompt works with Ollama.

Even your duck prompt, which was nice for testing purposes.
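A back-of-the-envelope sketch of why the entity count matters. The ~4 characters per token ratio, the entity line length, and the fixed-guidance size are all rough assumptions, not a real tokenizer:

```python
# Crude estimate of system-prompt size as the number of exposed entities
# grows. If the total exceeds the model's context window (Ollama uses a
# modest default unless num_ctx is raised), the start of the prompt -- your
# instructions -- can be truncated or simply drowned out by entity lines.

def estimate_prompt_tokens(instructions: str,
                           num_entities: int,
                           avg_entity_line_chars: int = 60) -> int:
    base_chars = len(instructions) + 2000  # assumed size of fixed guidance
    entity_chars = num_entities * avg_entity_line_chars
    return (base_chars + entity_chars) // 4  # ~4 chars/token heuristic

print(estimate_prompt_tokens("reply only with: duck", 50))   # a few dozen entities
print(estimate_prompt_tokens("reply only with: duck", 500))  # hundreds of entities
```

With a few dozen entities the prompt stays small; with hundreds, the entity list alone runs to thousands of tokens, which matches the observation that un-exposing entities made the instructions work again.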

Did we ever get a real answer or workaround for this problem? I am running into the exact same issue. I am running HA Core 2025.12.4 in Docker. This feels like a bug, so I have created a bug report here: Voice Assistant Instuctions ingnored when "Assist" is checked - HA Core 2025.12.4 · Issue #159938 · home-assistant/core · GitHub

You gave no information in your report except that it ‘feels’ like a bug. Sorry, bugs don’t feel. It basically said “I checked Assist and nothing happened.”

To help they’ll need a LOT more.

What conversation agent?
What instruction did you give?
What ACTUALLY happened?
Are you local-first?
What LLM model? What context size?
How many entities do you expose?
How many tools have you exposed?

THEN we get to the prompt, if you’re using an LLM. And when you do, it’s not just plug and play - you MUST ground the agent (decent prompt) and have good tools.

Do you have all of that?

Happy to help, but they’re going to need a LOT more info.