Run script with LLM and say back the return value

Is it possible to have Assist with an LLM run a script, wait for the script to finish, and say back the returned value?

I have a simple script to flip a coin:

sequence:
  # Build the result as a mapping; a response variable must be a dict
  - variables:
      res: |
        {{ { 'value': ['Heads', 'Tails'] | random } }}
  # Stop the script and hand res back to the caller
  - stop: End
    response_variable: res
alias: Flip a Coin
description: >-
  Flips / tosses a coin. Helps to make a decision by randomly returning Heads or
  Tails.
icon: mdi:circle-multiple-outline

This script works if I call it from another script, but when run by Assist it just says “Started” and nothing else.
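
For reference, this is roughly how the working cross-script call looks (a minimal sketch; I'm assuming the generated entity id is script.flip_a_coin, and the persistent notification is just one way to surface the result):

sequence:
  # Calling the script entity directly (not script.turn_on) blocks until it
  # finishes, and response_variable captures the script's return value
  - action: script.flip_a_coin
    response_variable: coin
  - action: persistent_notification.create
    data:
      message: "The coin landed on {{ coin.value }}"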

I assume this is just a test, correct?
Otherwise, why would you not just use the random filter and send the result via a notify.mobile_app action?
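
Something like this would do it (a minimal sketch; notify.mobile_app_phone is a placeholder for whatever notify service your device registers):

alias: Coin Toss Notification
sequence:
  # Pick Heads or Tails with the random filter and push it to a phone
  - action: notify.mobile_app_phone
    data:
      title: Coin toss
      message: "{{ ['Heads', 'Tails'] | random }}"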

That said, I wouldn’t trust anything like that to GenAI (just yet) because the results often seem to be neither random nor true - see my fun experiment/experience here:
Fun Fact for the Day! - A Cautionary Tale for the use of Gen AI - Configuration - Home Assistant Community

It is a test script for now, but it is not something that runs as an automation (like in your scenario). This script is exposed to Assist/OpenAI, and if I ask Assist to toss a coin, the LLM runs the script but doesn’t wait until the end to return the result; it just says “Started”.

I assume it is a technical issue here, not one where the AI just makes stuff up.


There’s nuance here.

No, it cannot wait an arbitrary amount of time for a return and then do something.

Yes, it can run and, based on the information given to it, run a tool (your script), then report the response.

For now, anyway. Think:

Wake event > system prompt > user prompt (STT data) > LLM >

Possible tool call (round trip) >

Tool data > LLM > (TTS) > you

But be very clear: it’s not waiting for anything. It’s one cut through the data. The script executes as a result of the LLM tool call, which passes the result back to the LLM, but the LLM doesn’t “wait”.

So yes, LLMs can use tools and cause things to react. No, they don’t wait.

This will likely change as reasoning LLMs are introduced. But that’s post-2025.3.x and not worth talking about until it’s real.

Read this: Friday's Party: Creating a Private, Agentic AI using Voice Assistant tools

Thank you @NathanCu for the explanation!


It works now in the new version of Home Assistant. The LLM waits until the script ends and responds with the returned value.
:tada:
