LLM just returns "Started" when running a script

I’ve exposed a script to Voice Assist. It understands requests to run the script, and it runs successfully. But the LLM just responds with “Started”. I’ve tried three different ways to communicate the result: response_variable, a stop message, and set_conversation_response.

A simple example just to demonstrate:

script:
  flip_a_coin:
    alias: "Flip a Coin"
    description: "Simulates a coin toss and returns the result."
    mode: single
    fields: {}
    variables:
      coin_result: "{{ ['Heads', 'Tails'] | random }}"
      script_output: "{{ { 'coin_result' : coin_result } }}"
    sequence:
      - service: conversation.set_conversation_response
        data:
          response: "The coin landed on {{ coin_result }}!"
      - stop: "It's {{ coin_result }}!"
        response_variable: "script_output"
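
One way to check whether the script's stop response is produced at all (independent of the LLM) is to call it from a wrapper script with response_variable and surface the result. An untested sketch, assuming the flip_a_coin script above; the test_coin_flip name and the notification are just for illustration:

script:
  test_coin_flip:
    alias: "Test Coin Flip Response"
    sequence:
      # Call the script as a service and capture its stop response
      - service: script.flip_a_coin
        response_variable: result
      # Surface the returned dict so it can be inspected in the UI
      - service: persistent_notification.create
        data:
          message: "flip_a_coin returned: {{ result }}"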

In the thread "Run script with LLM and say back the return value" (Configuration / Voice Assistant, Home Assistant Community), the final post (from March) says that this now works in the then-latest HA release. Am I doing something wrong, or is this not yet supposed to work?

In case it matters: local LLM hosted by Ollama running Gemma3:1b.

Thanks!

It very much matters. I didn’t think Gemma3 was a tool user.

Ollama.com agrees.

Try something like llama3.2 to make sure tools work.

If it does, then your model choice is the issue (I'm almost positive; I don't even have Gemma3 in my farm because of this. No tools == useless to me).


Thanks for the suggestion. I get exactly the same response (just ‘Started’) with https://ollama.com/library/llama3.2:3b, which claims tool use. In the HA Ollama integration I have “HA Assist” checked next to “Control Home Assistant”.

Can anyone get a better outcome with the script I show above?

Raw debug output:

stage: done
run:
  pipeline: 01k3z0z51bwz0sjc853jk24s0w
  language: en
  conversation_id: 01K3Z0ZJRB88RGHJZM6JBVP13Y
  runner_data:
    stt_binary_handler_id: null
    timeout: 300
events:
  - type: run-start
    data:
      pipeline: 01k3z0z51bwz0sjc853jk24s0w
      language: en
      conversation_id: 01K3Z0ZJRB88RGHJZM6JBVP13Y
      runner_data:
        stt_binary_handler_id: null
        timeout: 300
    timestamp: "2025-08-31T02:58:24.651309+00:00"
  - type: intent-start
    data:
      engine: conversation.ollama_conversation
      language: en-US
      intent_input: flip a coin
      conversation_id: 01K3Z0ZJRB88RGHJZM6JBVP13Y
      device_id: null
      prefer_local_intents: true
    timestamp: "2025-08-31T02:58:24.651332+00:00"
  - type: intent-end
    data:
      processed_locally: true
      intent_output:
        response:
          speech:
            plain:
              speech: Started
              extra_data: null
          card: {}
          language: en-US
          response_type: action_done
          data:
            targets: []
            success:
              - name: Flip a Coin
                type: entity
                id: script.flip_a_coin
            failed: []
        conversation_id: 01K3Z0ZJRB88RGHJZM6JBVP13Y
        continue_conversation: false
    timestamp: "2025-08-31T02:58:24.655615+00:00"
  - type: run-end
    data: null
    timestamp: "2025-08-31T02:58:24.655666+00:00"
intent:
  engine: conversation.ollama_conversation
  language: en-US
  intent_input: flip a coin
  conversation_id: 01K3Z0ZJRB88RGHJZM6JBVP13Y
  device_id: null
  prefer_local_intents: true
  done: true
  processed_locally: true
  intent_output:
    response:
      speech:
        plain:
          speech: Started
          extra_data: null
      card: {}
      language: en-US
      response_type: action_done
      data:
        targets: []
        success:
          - name: Flip a Coin
            type: entity
            id: script.flip_a_coin
        failed: []
    conversation_id: 01K3Z0ZJRB88RGHJZM6JBVP13Y
    continue_conversation: false

I did a little research and, first of all, I cannot find an action called conversation.set_conversation_response, so I think this is where the error is coming from.

There is a somewhat standalone action, set_conversation_response, but from what I can tell it is only meant to be used in a sentence-trigger-based automation.
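
For reference, here is the kind of sentence-trigger automation where set_conversation_response does work, as I understand it. An untested sketch; the alias and the trigger command are made up:

automation:
  - alias: "Coin flip sentence trigger"
    trigger:
      # Matched locally by Assist, before any LLM is involved
      - platform: conversation
        command: "flip a coin"
    action:
      # Sets the spoken/text reply for the triggering conversation
      - set_conversation_response: "The coin landed on {{ ['Heads', 'Tails'] | random }}!"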

Here is a Feature Request to make set_conversation_response supported in a script (which seems to confirm it isn't currently supported in a standalone script).

I think the general problem is that set_conversation_response has to run within the context of a conversation agent: it has to know which conversation to hand the response back to. I suspect that a script launched by a voice assistant nevertheless has no idea which conversation agent (if any) it is running under.
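
If that's right, one workaround might be to wrap the script in a sentence-trigger automation that captures the script's response and hands it back itself. Note this bypasses the LLM entirely (the sentence is matched locally). An untested sketch, assuming the flip_a_coin script above; the alias and command are mine:

automation:
  - alias: "Flip a coin by voice"
    trigger:
      - platform: conversation
        command: "flip a coin"
    action:
      # Run the script and capture its stop response_variable
      - service: script.flip_a_coin
        response_variable: coin
      # Hand the result back to the conversation that triggered us
      - set_conversation_response: "It's {{ coin.coin_result }}!"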