LLM Vision error defining response variable

Hi everyone,

I am trying to send a message to my phone using the response variable, following a guide I found online. When I run the automation, I get an error saying my response variable is undefined:
“Error rendering message: UndefinedError: ‘response’ is undefined”

Did I miss something when setting up the automation?

Please paste the entire YAML for the automation, correctly formatted. Note that when you manually run an automation, only the actions section is run. If your service call to populate the response is not in the actions section, that bit won’t be run and response will be undefined.
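As a rough sketch of what that means (the camera entity and notify service below are placeholders, not taken from your setup), both the call that sets the response variable and the step that reads it need to live inside the action: section:

```yaml
# Sketch only: camera entity and notify service are placeholders.
alias: LLM notify example
trigger: []
condition: []
action:
  - action: llmvision.stream_analyzer
    data:
      message: Describe what is visible.
      image_entity:
        - camera.example_camera        # placeholder entity
    response_variable: response        # populated here, inside action:
  - action: notify.mobile_app_example  # placeholder notify service
    data:
      message: "{{ response.response_text }}"
mode: single
```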

This is my current yaml code:

alias: LLM test
description: ""
trigger: []
condition: []
action:
  - sequence:
      - action: llmvision.stream_analyzer
        metadata: {}
        data:
          duration: 5
          max_frames: 3
          include_filename: false
          target_width: 1280
          detail: low
          max_tokens: 100
          temperature: 0.2
          expose_images: false
          provider: 01JE9K600K742HDJAGQYTAFCHC
          message: >-
            Describe in Dutch what is visible. Is the dog in the crate?
            Describe the visible people.
          image_entity:
            - camera.kodycam_lsc_indoor_ptz_dual_band
        response_variable: response
      - device_id: 1358451a239a4660345b07296846300b
        domain: mobile_app
        type: notify
        message: "{{ response.response_txt }}"
mode: single

Running the entire section does not give the undefined error, but the notification is still empty.

The documentation suggests that your message should refer to:

{{ response.response_text }}
----------------------^

With no trigger, this should be defined as a script rather than an automation.
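Rewritten as a script, the same sequence could look roughly like this (a sketch; the provider ID, camera entity, and notify service are placeholders):

```yaml
# Sketch only: a script has no trigger, just a sequence.
script:
  llm_test:
    alias: LLM test
    sequence:
      - action: llmvision.stream_analyzer
        data:
          provider: YOUR_PROVIDER_ID        # placeholder
          message: Describe what is visible.
          image_entity:
            - camera.example_camera         # placeholder
        response_variable: response
      - action: notify.mobile_app_example   # placeholder
        data:
          message: "{{ response.response_text }}"
```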

Thank you for helping. It was indeed a simple typo and it works as expected now!

Post code as properly-formatted text not screenshots. See rule 11 of this:

action: llmvision.image_analyzer
metadata: {}
data:
  remember: false
  include_filename: false
  target_width: 1280
  max_tokens: 100
  temperature: 0.2
  provider: ***********************
  model: gemini-1.5-flash
  message: >-
    Describe the image in one sentence. If you see people, describe their
    appearance.
  image_file: /media/local/tuerKG.jpg
  generate_title: false
  expose_images: false
  expose_images_persist: false
response_variable: response

Can anybody help me correct the code so that it works?
Br.

action: notify.gary_steffi_notification
data:
  message: "{{ response.response_text }}"
  title: Türglocke
  data:
    image: /media/local/local/tuerKG.jpg
    actions:
      - action: URI
        title: Livestream
        uri: /lovelace-glocke/0

You are missing a " at the end of the message.

message: "{{ response.response_text }}"

Thank you.
Br.

Show all the code — we can’t see where you are populating the response variable.
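For response to be defined when the notification fires, the llmvision.image_analyzer call and the notify call have to run in the same sequence. A sketch of how the two snippets could be combined (keeping the poster's values where shown; the provider ID was masked in the original post):

```yaml
sequence:
  - action: llmvision.image_analyzer
    data:
      provider: ***********************     # masked in the original post
      model: gemini-1.5-flash
      message: Describe the image in one sentence.
      image_file: /media/local/tuerKG.jpg
    response_variable: response             # set here...
  - action: notify.gary_steffi_notification
    data:
      message: "{{ response.response_text }}"  # ...read here, same sequence
      title: Türglocke
```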

I followed this tutorial: https://www.reddit.com/r/homeassistant/comments/1h6ihsh/how_to_setup_llm_vision_for_analyzing_security/

Please try to combine your posts together…

We’re going to need to see an automation trace. You say “the LLM Vision part is working and should give feedback ----> response_variable: response”, but from the error message it clearly isn’t.

Find a recent trace that generated the error (Automations screen, three dots on its row, Traces), then on the Trace screen click the three dots and select Download Trace. Then paste all of that in here, formatted as code not a screenshot.