Help: Local AI Voice Assistant - Intent error

Hi,

I am running Ollama (deepseek-r1 14b) on my workstation and trying to integrate it with Home Assistant, which is hosted on a mini PC.

I have managed to create the voice assistant pipeline, but I get the error “Unexpected error during intent recognition” every time I try to use the deepseek model.

Am I missing a setting somewhere? My goal is to use it to control my smart home and for simple local queries (tell me a recipe for X, who was X, what the weather will be like on Monday in X, etc.). I also have a Voice PE, but first I want to make it work from the browser/app.

Hope you can help.

Thank you!



Could you post your intent as preformatted text (</> in the cogwheel menu), rather than as a screenshot? That way people can copy it to try it out.

Do you get this error when “prefer local control” is turned off?

Do you have an Assist pipeline instance set up for local only (no LLM conversation agent)? If yes, do you get this same error?
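
Also worth ruling out the network path, since Ollama binds to 127.0.0.1 by default and the workstation usually needs OLLAMA_HOST=0.0.0.0 set before another machine can reach it. A minimal sketch using the ollama Python client (`pip install ollama`, 0.4+ assumed; the IP is a placeholder for your workstation), run from the mini PC:

```python
# Minimal connectivity check from the HA host to the workstation's Ollama.
# The host below is a placeholder -- replace with your workstation's IP/port.
from ollama import Client

client = Client(host="http://192.168.1.50:11434")  # hypothetical workstation IP

# If this raises a connection error, Home Assistant can't reach Ollama either;
# on success it prints every model the server has pulled.
for m in client.list().models:
    print(m.model)
```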

Getting the same thing. Not sure what’s going on.

Logger: homeassistant.components.assist_pipeline.pipeline
Source: components/assist_pipeline/pipeline.py:1165
integration: Assist pipeline (documentation, issues)
First occurred: March 8, 2025 at 11:50:58 PM (9 occurrences)
Last logged: 12:10:36 AM

Unexpected error during intent recognition
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/pipeline.py", line 1165, in recognize_intent
conversation_result = await conversation.async_converse(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
…<8 lines>…
)
^
File "/usr/src/homeassistant/homeassistant/components/conversation/agent_manager.py", line 117, in async_converse
result = await method(conversation_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/conversation/entity.py", line 47, in internal_async_process
return await self.async_process(user_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/ollama/conversation.py", line 219, in async_process
return await self._async_handle_message(user_input, chat_log)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/ollama/conversation.py", line 276, in _async_handle_message
[
…<4 lines>…
]
File "/usr/src/homeassistant/homeassistant/components/conversation/chat_log.py", line 277, in async_add_delta_content_stream
async for delta in stream:
^^^^^^^^^^^^^^^^^^^^^^^^^^
…<45 lines>…
self.delta_listener(self, delta)  # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/ollama/conversation.py", line 147, in _transform_stream
async for response in result:
…<18 lines>…
yield chunk
File "/usr/local/lib/python3.13/site-packages/ollama/_client.py", line 672, in inner
raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: registry.ollama.ai/library/deepseek-r1:7b does not support tools (status code: 400)

The model you have chosen does not support tool use, so it can’t work with Assist. That’s what that error says…

Try llama3.2/3.3, a qwen 7b, or a Phi-4 instead. All of them have been optimized for tool use.
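
If you want to verify a model outside Home Assistant first, a quick probe against the same endpoint reproduces the check that fails in the log above. A minimal sketch, assuming the ollama Python client; the noop tool schema is made up purely for the test:

```python
# Probe one model for tool support; assumes the model is already pulled.
from ollama import Client, ResponseError

# Throwaway tool definition -- the server only needs to see *a* tool.
PROBE_TOOL = {
    "type": "function",
    "function": {
        "name": "noop",
        "description": "Placeholder tool used only to test tool support.",
        "parameters": {"type": "object", "properties": {}},
    },
}

client = Client(host="http://localhost:11434")  # adjust to your Ollama host

try:
    client.chat(
        model="deepseek-r1:7b",
        messages=[{"role": "user", "content": "hi"}],
        tools=[PROBE_TOOL],
    )
    print("model accepts tools")
except ResponseError as err:
    # deepseek-r1 returns the same 400 seen in the traceback above
    print(f"{err.status_code}: {err.error}")
```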


Thank you for your reply and solution.

Llama 3.2 works OK with it, so you are correct: the deepseek model is not currently optimised for HA.

Are you aware of a list of supported LLMs for Home Assistant?

No list. You have to check whether a model is capable of ‘tool use’; that’s listed on Hugging Face. Also, I’ve found that models under 7B parameters just don’t cut it for HA.
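
Another way to build your own list is to probe everything you have pulled locally with a throwaway tool and keep whatever the server accepts. A sketch under the same assumptions as above (ollama Python client 0.4+); note each probe loads the model, so it takes a while:

```python
from ollama import Client, ResponseError

# Throwaway tool schema, only used to see whether the server rejects tools.
PROBE_TOOL = {
    "type": "function",
    "function": {
        "name": "noop",
        "description": "probe",
        "parameters": {"type": "object", "properties": {}},
    },
}

client = Client(host="http://localhost:11434")

# Each chat() call loads the model into memory, so expect this to be slow.
for m in client.list().models:
    try:
        client.chat(
            model=m.model,
            messages=[{"role": "user", "content": "hi"}],
            tools=[PROBE_TOOL],
        )
        print(f"{m.model}: tools OK")
    except ResponseError as err:
        print(f"{m.model}: no tools ({err.status_code})")
```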