Passing variables or response data to Conversation Process LLM

I’ve been struggling for a couple of weeks to get something working that I feel should be pretty simple. I am trying to create tools for LLM-based assist agents using scripts. I understand how to create and expose scripts, and I understand how to use fields to make scripts interactive for the assist agent. What I can’t figure out is how to give the LLM assist agent the ability to use scripts to fetch and read external information.

For example, I am using the LLM Vision integration to analyze weather radar animations and radar simulation model guidance. Ideally I want to take a response from this integration and pass it into a conversation currently in progress with a ChatGPT-based assist agent.

I want to be able to ask my assistant things like:

What does the current weather radar look like?
Will it rain later today?

My latest attempt was to pass the LLM Vision response to the set_conversation_response action, which I hoped would expose the response to the currently invoked assist conversation agent, but this does not seem to be working.

Here is the exposed script YAML, along with the latest trace. I can see the LLM agent is calling the script when prompted; it’s just not getting any feedback from the LLM Vision response. Thanks in advance to anyone who has ideas on how to achieve this!

alias: Analyze NAM-HIRES Weather Radar Simulation Model Guidance
sequence:
  - action: llmvision.image_analyzer
    metadata: {}
    data:
      provider: OpenAI
      model: gpt-4o
      include_filename: false
      detail: high
      max_tokens: 300
      temperature: 0.6
      # LLM Vision accepts multiple images as newline-separated paths
      image_file: |-
        /config/www/weather_radar/nam/2024090706-nam-003.gif
        /config/www/weather_radar/nam/2024090706-nam-006.gif
        /config/www/weather_radar/nam/2024090706-nam-009.gif
        /config/www/weather_radar/nam/2024090706-nam-012.gif
        /config/www/weather_radar/nam/2024090706-nam-015.gif
        /config/www/weather_radar/nam/2024090706-nam-018.gif
        /config/www/weather_radar/nam/2024090706-nam-021.gif
      message: >-
        The sequence of images attached is the current NAM-HIRES precipitation
        simulation model guidance for the Northeast US.

        You should see EDT time in the top right corner; that is our time zone.
        The current date and time is {{ now().strftime('%B %-d, %Y, %-I:%M %p') }}.


        Each consecutive image represents a specific hour and the projected
        precipitation across the country. We are located in central New Jersey,
        so focus your response on this region.


        Can you tell me if I should expect any precipitation in the near future?
        Do not describe each image; treat the images as frames of an animation
        or video sequence forecasting the next several hours. Concisely
        summarize the conditions I should be expecting.
    response_variable: nam_hires
    alias: >-
      Analyze radar simulation sequence of images using LLM Vision Image
      Analyzer and the latest OpenAI ChatGPT multi-modal model.
  - set_conversation_response: "{{ nam_hires.response_text }}"
    alias: Precipitation forecast
description: >-
  Analyze latest NAM model guidance animation and report back projected
  precipitation to be expected over the next several hours. This is good for
  answering questions about near-term weather and precipitation expectations.

I had the same problem. It seems that, as of now, script return values are not available to the LLM.
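
For context, a script hands a value back to its caller by ending with a stop action that sets response_variable. A rough sketch of how the script above would return its result, instead of (or in addition to) setting a conversation response:

sequence:
  - action: llmvision.image_analyzer
    # ...same call as in the script above...
    response_variable: nam_hires
  # Return the analysis to whatever invoked this script
  - stop: "Return radar analysis"
    response_variable: nam_hires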

A workaround that works for me is to add an intent script, which calls the script and passes the script’s return value into the speech data.
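
A minimal sketch of such an intent script, assuming the radar script above is saved as script.analyze_nam_hires (the intent name and entity id here are placeholders, so adjust to your setup). The trailing stop action forwards the script’s return value so the speech template can read it as action_response:

intent_script:
  AnalyzeWeatherRadar:
    description: >-
      Summarizes expected precipitation from the latest NAM-HIRES radar
      simulation model guidance.
    action:
      # Call the analysis script and capture its return value
      - action: script.analyze_nam_hires
        response_variable: result
      # Forward the result so speech can access it as action_response
      - stop: ""
        response_variable: result
    speech:
      text: "{{ action_response.response_text }}"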

You can find it here: Exposing HA Scripts to Assist API: Questions on Script Results Access by LLMs - #3 by super-qua

Maybe the ‘remember’ feature added in LLM Vision v1.3 helps. It exposes responses as calendar events.
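
If that fits your setup, it should just be a matter of adding the flag to the call. A rough sketch, assuming LLM Vision v1.3+ with its event calendar configured (check the LLM Vision docs for the exact parameter name):

  - action: llmvision.image_analyzer
    data:
      # assumed v1.3 option: store the response as an event on LLM Vision’s calendar
      remember: true
      # ...rest of the call as in the script above...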