Exposing HA Scripts to Assist API: Questions on Script Results Access by LLMs

In the recent 2024.4 update, it was announced that HA scripts can now be exposed to the Assist API and consequently to LLMs.

From my understanding, exposed scripts are presented to the LLM as “tools”, i.e. a way to guide the LLM toward the best action to perform for a given intent. More details are in the official blog post.

Do LLMs have access to a script’s results (set via response_variable) after invoking it?
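For reference, I mean a script like this (a minimal sketch; the script name get_weather_forecast and the weather.home entity are just placeholders):

script:
  get_weather_forecast:
    description: "Returns the daily weather forecast for the home"
    sequence:
      - service: weather.get_forecasts
        target:
          entity_id: weather.home
        data:
          type: daily
        response_variable: forecast
      # A script only returns a response when it ends with a stop action
      # that passes a variable back via response_variable
      - stop: ""
        response_variable: forecast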

I have the same question. I have been trying to get the Ollama agent to give a weather forecast based on a custom script response, but it never seems to give a correct answer.

The script itself is called correctly with the correct parameters and also returns the correct response. But the conversation agent either responds with made-up forecasts or with a message that it cannot answer the question.

So I found a workaround: not by having the LLM call the script directly, but through an intent script that is exposed to the LLM.
The intent script returns the script’s response in its speech text.

intent_script:
    YourIntentScript:
        description: "Intent Script Description"
        action:
          # Call the script and capture its response
          - service: script.yourscript
            data:
              parameter: "{{ parameter }}"
            response_variable: result
          # Pass the response on; it becomes action_response in the speech template
          - stop: ""
            response_variable: result
        speech:
          text: |
            {% if action_response %}
              {{ action_response }}
            {% else %}
              Could not get a response
            {% endif %}
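For this to work, script.yourscript itself has to return its result the same way, with a final stop action. A minimal sketch (the parameter field and the payload are placeholders):

script:
  yourscript:
    fields:
      parameter:
        description: "Value passed through from the intent script"
    sequence:
      # Build the payload; a dict works as well as plain text
      - variables:
          result:
            answer: "Result computed from {{ parameter }}"
      # Hand the payload back to the caller
      - stop: ""
        response_variable: result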

It seems the speech text does not necessarily have to be a string. In my setup, the LLM picked up a dict returned from the script just fine.


FYI, I noticed that using {{ action_response | tojson }} helps to get better answers from the LLM.
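Applied to the intent script above, the speech block would look something like this (sketch):

speech:
  text: |
    {% if action_response %}
      {# Serializing as JSON gives the LLM a predictable structure to read #}
      {{ action_response | tojson }}
    {% else %}
      Could not get a response
    {% endif %}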

Thanks! I’ll give that a try! 🙌