Hey Guys,
I'm running into a bunch of problems with Ollama.
I'm running it in a Docker container with an RTX 4000 Ada, so I can play with some models.
I can add it to the Ollama HA integration, but the integration seems to be completely broken:
- When I choose a downloaded model - any is "fine" - and ask it with Assist enabled, I will get either
ollama._types.ResponseError: registry.ollama.ai/library/llama3.2:3b does not support thinking (status code: 400) (in this case I did NOT switch on thinking in the dialog at all!) or
ollama._types.ResponseError: registry.ollama.ai/library/llama3.2:3b does not support tools (status code: 400).
But - this model does support tools! And so does GPT-OSS:20b. Still, Ollama gives this error for either "tools" or "thinking". I tried Phi4mini, DeepseekR1, etc. - basically a lot of models that Ollama lists as supporting tools (and some as supporting thinking).
To me it seems that the model capabilities are not being reported correctly.
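One way to check what Ollama itself advertises is the HTTP API: recent Ollama versions return a "capabilities" list from POST /api/show (e.g. via curl http://localhost:11434/api/show -d '{"model": "llama3.2:3b"}'). This is a minimal sketch of how to read that field; it runs against a sample response here so the shape is visible, and the field name assumes a reasonably current Ollama version (older ones omit it entirely):

```python
import json

# Sample /api/show response, trimmed to the relevant field.
# Against a live server you would fetch this with:
#   curl http://localhost:11434/api/show -d '{"model": "llama3.2:3b"}'
sample_response = json.loads("""
{
  "capabilities": ["completion", "tools"]
}
""")

def supports(show_response: dict, capability: str) -> bool:
    # Older Ollama versions don't include "capabilities" at all,
    # so fall back to an empty list instead of raising KeyError.
    return capability in show_response.get("capabilities", [])

print(supports(sample_response, "tools"))     # tool calling advertised?
print(supports(sample_response, "thinking"))  # thinking advertised?
```

If the server really does list "tools" for a model and the integration still raises "does not support tools", that would point at the integration (or a stale/older Ollama build) rather than the model.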
- When I add another agent within Ollama with another model and click on chat - let's say I now choose GPT-OSS:20b - the error is (literally)
ollama._types.ResponseError: registry.ollama.ai/library/llama3.2:3b does not support thinking (status code: 400).
Only after a restart of HA does it report the error with the correct model name. (Still the same error as in the first point!)
This also happens when just changing the model in an existing agent.
So what else can I do to get at least far enough to try any model?
I also tried HomeLLM (LocalLLMs), but it does not work either - there is almost always an error with the intent recognition.
