Local LLM won’t run commands, just reads <functioncall…

Ok, I’m lost. I really need some help.

The configuration:

HAOS running on a bare-metal NUC (porcoolpine): i3, 16 GB DDR4.
On it, just HAOS and a couple of mission-critical add-ons: Mosquitto, Matter Server, and Plex (media hosted on a separate NAS, a Synology DS920+).

I have a separate, local server running Ollama and Open WebUI. Beefy hardware: i9 14th gen, 64 GB DDR5, 2x RTX 3090 24 GB OC (currently just one; the second is in for service). Running bare-metal Ubuntu, no Proxmox.

I have a bunch of models pulled, and I currently have the assistant partly running with the model 'finalend/llama-3.1-storm:8b-q8_0'.

The way it’s currently integrated is:

Local LLM Conversation integration (from HACS).
Attaching screenshots of the config:

I have configured a voice assistant to use this conversation agent, but when I ask it to do anything with the devices in the home, it just reads the function call out loud to me (symbols included) and nothing actually happens. Here is a demo conversation:

  • How can I assist?

  • Turn on the lights in the kitchen

Reply: <functioncall {"name": "HassTurnOn", "arguments": {"area": "Kitchen"}}>

….

Otherwise it replies normally - "How much is 2+2?" - "The answer to that is 4", etc.

I tried fiddling with the settings inside the Local LLM Conversation integration, switching between the full JSON tool format and the other three options, but no combination seems to work. I'm stumped and I don't know what to do, so if anybody has any info or help it would be more than welcome.

Thanks to everybody, looking forward to hearing from you!

I think you also need a model that can do function calling. Not sure whether that is the case with the model you are using now.
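
One quick way to check, outside of Home Assistant entirely, is to call Ollama's /api/chat endpoint directly with a tool definition and see whether the reply comes back as a structured tool_calls entry or just as plain text. This is only a rough, untested sketch: the host assumes Ollama's default port, the model name is taken from your post, and the HassTurnOn tool definition is just an example I made up to mirror the output you're seeing.

```python
# Sketch: ask Ollama directly whether this model emits structured tool calls.
# Adjust OLLAMA_URL and MODEL to your setup.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"   # default Ollama port
MODEL = "finalend/llama-3.1-storm:8b-q8_0"       # model from the post above

# Minimal example tool definition in the OpenAI-style format Ollama expects.
tools = [{
    "type": "function",
    "function": {
        "name": "HassTurnOn",
        "description": "Turn on a device or all devices in an area",
        "parameters": {
            "type": "object",
            "properties": {
                "area": {"type": "string", "description": "Area name, e.g. Kitchen"},
            },
            "required": ["area"],
        },
    },
}]

resp = requests.post(OLLAMA_URL, json={
    "model": MODEL,
    "messages": [{"role": "user", "content": "Turn on the lights in the kitchen"}],
    "tools": tools,
    "stream": False,
})
msg = resp.json()["message"]

# A model with native tool calling should populate message.tool_calls;
# a model without it will just put text (like <functioncall ...>) in content.
print("tool_calls:", msg.get("tool_calls"))
print("content:", msg.get("content"))
```

If tool_calls comes back populated, the model can do native tool calling and the problem is more likely in the integration settings or prompt format; if you only ever see the <functioncall ...> text in content, the model is just imitating the format and I'd try a model that was actually trained for tool use.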