Is it possible to have Assist, with an LLM, run a script, wait for the script to finish, and say back the returned value?
I have a simple script to flip a coin:
```yaml
sequence:
  - variables:
      res: |
        {{ { 'value': ['Heads', 'Tails'] | random } }}
  - stop: End
    response_variable: res
alias: Flip a Coin
description: >-
  Flips / tosses a coin. Helps to make a decision by randomly returning a
  Heads or Tails.
icon: mdi:circle-multiple-outline
```
This script works if I call it from a different script, but when run by Assist it just returns “Started” and nothing else.
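For reference, here is a minimal sketch of the working case described above: another script calling this one and capturing its response. The entity ID `script.flip_a_coin` and the notification step are assumptions for illustration; `response_variable` on a script action is how Home Assistant exposes the data returned by the `stop` step.

```yaml
# Hedged sketch, assuming the script above is saved as script.flip_a_coin.
sequence:
  - action: script.flip_a_coin
    response_variable: coin_result   # receives the dict returned by the stop step
  - action: notify.persistent_notification
    data:
      message: "The coin landed on {{ coin_result.value }}"
```

Called this way, the calling script does wait for the flip and can use the result, which is exactly what Assist does not appear to do.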
It is a test script for now, but it is not something that runs as an automation (like in your scenario). This script is exposed to Assist/OpenAI, and if I ask Assist to toss a coin, the LLM runs the script, but it doesn’t wait until the end and return the result; it just says “Started”.
I assume it is a technical issue here, not one where the AI just makes stuff up.
No. It cannot wait an arbitrary amount of time for a return and then do something.
Yes, it can run a tool (your script) based on the information given to it, and then report the response.
For now, anyway. Think:

Wake event > system prompt > user prompt (STT data) > LLM >
possible tool call (round trip) >
tool data > LLM > (TTS) > you
But be very clear: it’s not waiting for anything. It’s one cut through the data. The script executes as a result of the LLM tool call, and its output passes back to the LLM, but the LLM doesn’t “wait”.
So yes, LLMs can use tools and cause things to react. No, they don’t wait.
This will likely change as reasoning LLMs are introduced, but that’s post-2025.3.x and not worth talking about until it’s real.