LLM Vision: Let Home Assistant see!

Sounds like a good idea! Please create a feature request on GitHub.


So I'm just trying the new version with the OpenAI API (gpt-4o-mini), but it keeps breaking with an error at the "Analyze event" step. It's definitely sending data to OpenAI, since it used 400k tokens, but the trace in HA shows the following error:

Analyze event

Executed: 7 July 2025 at 17:57:39
Error: 'NoneType' object has no attribute 'attributes'
Result:
params:
  domain: llmvision
  service: stream_analyzer
  service_data:
    image_entity:
      - ''
    duration: 10
    provider: ----REDACTED - NOT SURE IF IT MATTERS!----
    model: gpt-4o-mini
    message: >-
      Summarize the events based on a series of images captured at short
      intervals. Focus only on moving subjects such as people, vehicles, and
      other active elements. Ignore static objects and scenery. Provide a clear
      and concise account of movements and interactions. Do not mention or imply
      the existence of images—present the information as if directly observing
      the events. If no movement is detected, respond with: 'No activity
      observed.'
    use_memory: false
    remember: true
    expose_images: true
    generate_title: true
    include_filename: true
    max_frames: 6
    target_width: 1280
    max_tokens: 209
  target: {}
running_script: false
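
For what it's worth, image_entity in that trace only contains an empty string, which could explain an entity lookup returning None and the subsequent .attributes failure; that is a guess, not confirmed. A minimal sketch of the same call with the entity actually filled in (camera.front_door and YOUR_PROVIDER_ID are placeholders, not values from this setup):

service: llmvision.stream_analyzer
data:
  image_entity:
    - camera.front_door        # placeholder, use a real camera entity
  duration: 10
  provider: YOUR_PROVIDER_ID   # the config entry ID redacted above
  model: gpt-4o-mini
  message: Summarize the events based on a series of images captured at short intervals.
  max_frames: 6
  target_width: 1280
  max_tokens: 209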

This is amazing work!!

Just added a feature request for a use case that would broaden the impact of your effort:

Hi. I would love to see the option to add a delay before the LLM does its analysis. This is because I have Eufy cameras and have to use the event pictures instead, and it takes a couple of seconds for those to be ready. So right now I often get info about the previous event, not the current one.
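
Until something like that exists, a rough sketch of a workaround, assuming a standard automation with a motion trigger (the entity names and the 5-second value are only examples, not from this setup): put a plain delay step before the LLM Vision action so the event picture has time to update.

trigger:
  - platform: state
    entity_id: binary_sensor.eufy_doorbell_motion   # example motion sensor
    to: "on"
action:
  - delay: "00:00:05"                 # give the event picture time to update
  - service: llmvision.stream_analyzer
    data:
      image_entity:
        - camera.eufy_doorbell        # example camera entity
      duration: 10
      provider: YOUR_PROVIDER_ID      # LLM Vision provider config entry ID
      model: gpt-4o-mini
      message: Summarize the events based on a series of images captured at short intervals.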