Sounds like a good idea! Please create a feature request on GitHub.
So I’m just trying the new version with the OpenAI API (gpt-4o-mini), but it keeps breaking with an error at the “Analyze event” step. It’s definitely sending data to OpenAI, as it used 400k tokens, but the traces in HA show the following error:
Analyze event
Executed: 7 July 2025 at 17:57:39
Error: 'NoneType' object has no attribute 'attributes'
Result:

```yaml
params:
  domain: llmvision
  service: stream_analyzer
  service_data:
    image_entity:
      - ''
    duration: 10
    provider: ----REDACTED - NOT SURE IF IT MATTERS!----
    model: gpt-4o-mini
    message: >-
      Summarize the events based on a series of images captured at short
      intervals. Focus only on moving subjects such as people, vehicles, and
      other active elements. Ignore static objects and scenery. Provide a clear
      and concise account of movements and interactions. Do not mention or imply
      the existence of images—present the information as if directly observing
      the events. If no movement is detected, respond with: 'No activity
      observed.'
    use_memory: false
    remember: true
    expose_images: true
    generate_title: true
    include_filename: true
    max_frames: 6
    target_width: 1280
    max_tokens: 209
  target: {}
running_script: false
```
This is amazing work!!
Just added a feature request for a use case broadening the impact of your effort:
Hi. I would love to see the possibility to add a delay before the LLM does its analysis. This is because I have Eufy cameras and have to use the event pictures instead, and it takes a couple of seconds for them to be ready. So right now I often get info about the last event, not the present one.
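In the meantime, a plain automation can work around this with a `delay` before the analyzer call. A rough sketch, assuming the `llmvision.image_analyzer` service; the entity names and provider ID below are hypothetical placeholders:

```yaml
# Sketch only: wait a few seconds for the Eufy event picture to be ready
# before sending it for analysis. Entity names are made up.
automation:
  - alias: Analyze Eufy event picture (delayed)
    trigger:
      - platform: state
        entity_id: binary_sensor.eufy_front_door_motion
        to: "on"
    action:
      # Give the camera time to publish the event picture
      - delay: "00:00:05"
      - service: llmvision.image_analyzer
        data:
          image_entity:
            - image.eufy_front_door_last_event
          provider: YOUR_PROVIDER_ID
          model: gpt-4o-mini
          message: Describe any activity in this image.
          max_tokens: 100
```

Tune the delay to however long your cameras actually take; too short and you are back to analyzing the previous event.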
In the release notes you suggest deleting the Timeline provider.
How can we set up the Timeline Card if we have deleted the Timeline provider?
I have a question about the “Additional Actions” section of the blueprint: how can I pass the camera name and AI response text to any of the actions? I tried referencing {{ response.response_text }} but it doesn’t seem to be available.
Hi all
I’m trying to use openrouter.ai as a “custom OpenAI” provider, but it’s not working for me.
I’m using https://openrouter.ai/api/v1/chat/completions as the endpoint and openai/gpt-4o or openai/gpt-4.1 as the model, but with no success.
Has anyone succeeded in using it?
Thanks
Hi all,
I’m wondering whether a regular motion detector can be registered as a motion sensor, or if it always has to be the motion sensor of the camera.
It takes any binary sensor. I use UniFi’s person detection in addition to the record event.
It’s been a long time since I updated, and I noticed in LLM Vision that the images are no longer renamed to the camera name. This is causing me problems sending the screenshots to my ESP32-S3 device. Do you have a solution?
Thanks for the update!
One question: how can I add/configure this preview card? I can only add the LLM Vision card to my dashboard:
Using Frigate with HA with Frigate set to record 24/7 (Frigate camera sensors are always in state “recording”).
Using the LLM Vision 1.5.0 blueprint; because the trigger state is always “recording”, the blueprint runs constantly.
Is it normal for LLM Vision to run constantly for a Frigate system that records 24/7, or is there a better way to configure it? (Maybe the automation should run on a “person occupancy” event instead, so it doesn’t run all the time.)
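One option along those lines is to trigger on Frigate’s per-camera occupancy sensor rather than the always-on recording state. A sketch; the entity name is hypothetical, so check what your Frigate integration actually exposes:

```yaml
# Sketch: trigger on person occupancy instead of the 24/7 "recording" state.
# The entity name below is a placeholder.
trigger:
  - platform: state
    entity_id: binary_sensor.front_yard_person_occupancy
    to: "on"
```

That way the blueprint only fires when Frigate believes a person is actually in frame, instead of on every moment of continuous recording.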
Maybe there’s a better solution but thought i’d ask here for any ideas?
Thanks.
Hi, just wondering what other users’ experiences have been with cloud providers? I used to use Gemini (paid, still have a bunch of credits), and IME I get 98.5% “429 quota exceeded” error messages and 1.5% actual generated responses.
Just wondering if other users experience the same thing or has it gotten better for everyone? Should I just suck it up and use OpenAI, even though I do not agree with their business model of stealing pre-paid credits? Or should I just buy some GPUs that can handle something local (not ideal, as the GPUs would literally only be for LLMs).
Are users getting good use out of their LLM backed CCTV deployments? Like, has the generated response ever been useful and practical for you? Or just cool to have? Genuinely curious!
Man I just came here to see if anyone was having this issue and you got here a few minutes before me! Also curious about this…
I was just using the Gemini 1.5 free tier up until about a week ago, when I noticed my LLMV automations stopped working. Turns out 1.5 was deprecated and no longer working. OK. So I switched it in the integration, updated my automations to 2.0, and everything worked fine again. But I noticed yesterday and today it stopped working later in the day. Checked the logs, and it’s an error about exceeding the quota. Not sure what the deal is; I didn’t add anything else, the token limit stayed the same, but it’s apparently using up a lot more for some reason. I’m hitting the max 200 requests per day with 2.0 when it never happened once on 1.5.
As for your last question though… it’s both fun and practical. I have a camera outside my kid’s bedroom, so I get a notification if she gets out of bed. Another for the front door lets me know who’s there, deliveries, etc. I was originally looking at some more expensive services, but this does exactly what I need it to do.
Just FYI the 429 quota exceeded is a red herring. Google will return that error for literally anything, leaving the end user no way to debug the issue. As shown here, you’re thinking you’ve maxed your 200 free calls, when that probably is not the case.
Google has the free tier, then the paid-credits tier, and then some other tier for businesses (can’t recall the name). But basically, Google doesn’t give AF about the free or prepaid-credits tiers; we’re a drop in the bucket compared to thousands of corporations with unlimited money. They get the resources and the rest trickles down, and to top it off, they aren’t even decent enough to give us an error message with substance.
Edit: Thanks for confirming what I figured was true though, not worth my time.
Yeah, just not sure what’s going on and why I’m hitting that 200 daily cap when I never did before. I didn’t change the token max, and I’m now going over.
Did you read the reply? More than likely, you are not hitting your maximum free calls.
What I would do, is set up a counter that increments every time you call gemini API (and have an automation that resets it every night). So you can have some more info to debug what’s going on.
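For example (the event-trigger pattern below assumes LLM Vision is invoked as a service, and all names are placeholders):

```yaml
# Counter helper to track LLM Vision / Gemini calls per day
counter:
  llmvision_calls_today:
    name: LLM Vision calls today
    step: 1

automation:
  # Increment on every llmvision service call,
  # using the core call_service event on the event bus
  - alias: Count LLM Vision calls
    trigger:
      - platform: event
        event_type: call_service
        event_data:
          domain: llmvision
    action:
      - service: counter.increment
        target:
          entity_id: counter.llmvision_calls_today

  # Reset the counter every night at midnight
  - alias: Reset LLM Vision call counter
    trigger:
      - platform: time
        at: "00:00:00"
    action:
      - service: counter.reset
        target:
          entity_id: counter.llmvision_calls_today
```

If the counter stays well under 200 on a day you still get 429s, that is strong evidence the quota error is not actually about your daily cap.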
On another note, are you saying for those 200 calls, you received generated responses and only started seeing 429 quota exceeded when your actual quota was exceeded? Because that changes things drastically for me.
I don’t have an example of those counters because I deleted them after trying to debug Google’s BS. I do remember that I had daily and weekly counters and added helpers to store debug data to try and piece it together.
I never once hit 200 calls in a 24-hour period, and I’m a paid client. So the quota error wasn’t true (for me, at that time: Nov 2024-ish), and other paid users were seeing the same thing.
Actually, how many cameras do you have? And when does your quota usually max out during the day? If it’s early in the day and you don’t have many cameras, I would say you may need to fine-tune your motion detection and the areas where you want to be notified about objects.
On a free tier, you’re going to want to minimize your API calls (only send requests when you’re sure it’s an object, in an area, that you want a generated response about). Have you dialled in your zones and what you want to track?

