LLM Vision Blueprint 1.4.0: Important Updates & Changes
Blueprint requires LLM Vision 1.4.0.
The LLM Vision Blueprint has received a major update in version 1.4.0 with significant changes and new features. Please review the updates carefully before upgrading.
Breaking Changes
Frigate Mode Removed: The blueprint no longer supports Frigate mode. However, you can still use Frigate by utilizing camera entities and binary sensors from the Frigate integration.
Deprecated Options:
**input_mode** has been removed.
**detail** has been removed. Use **target_width** to control the resolution of images.
New Features
Run Condition: Set custom conditions to control when the blueprint runs (e.g., only when no one is home or when the alarm is enabled).
Notification Delivery Modes:
Dynamic: Sends an initial live camera preview notification, then silently updates with the summary. (Live Preview not compatible with Android)
Consolidated: Sends a single notification when the summary is ready.
Use Memory Toggle: Enables access to stored information for improved context and automation.
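To illustrate the new Run Condition feature, a condition like "only when no one is home or the alarm is armed" could be expressed in standard Home Assistant condition YAML along these lines. This is only a sketch: the entity IDs (`zone.home`, `alarm_control_panel.home_alarm`) are placeholders, not names the blueprint provides.

```yaml
# Hypothetical run condition: analyze events only when nobody is home
# or the alarm is armed away. Entity IDs are placeholders.
condition: or
conditions:
  - condition: numeric_state
    entity_id: zone.home
    below: 1            # zone.home's state is the number of people at home
  - condition: state
    entity_id: alarm_control_panel.home_alarm
    state: "armed_away"
```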
How to Update
Re-import the updated blueprint.
Review and adjust your settings according to the new structure.
Read the full release notes here. Happy automating!
@valentinfrlch These are welcome changes. However, I get the following log errors, and no notification is generated.
```
Error while executing automation automation.ai_event_summary_llm_vision_v1_3_8: Error rendering data template: UndefinedError: 'dict object' has no attribute 'key_frame'
7:33:12 AM – (ERROR) Automation
AI Event Summary (LLM Vision v1.3.8): Choose at step 6: choice 1: (Snapshot) Update notification on notify devices: Error executing script. Error for call_service at pos 1: Error rendering data template: UndefinedError: 'dict object' has no attribute 'key_frame'
7:33:12 AM – (ERROR) Automation - message first occurred at 7:33:12 AM and shows up 3 times
Template variable warning: 'dict object' has no attribute 'title' when rendering '{{response.title}}'
7:33:12 AM – (WARNING) helpers/template.py - message first occurred at 7:33:12 AM and shows up 2 times
```
@valentinfrlch Yes, I have done that. I have also deleted the original automation and created a new one with the updated blueprint. I get the same errors.
```
Error while executing automation automation.ai_event_summary_v1_4_0: Error rendering data template: UndefinedError: 'dict object' has no attribute 'key_frame'
8:14:27 AM – (ERROR) Automation
AI Event Summary (v1.4.0): Choose at step 6: choice 1: (Snapshot) Update notification on notify devices: Error executing script. Error for call_service at pos 1: Error rendering data template: UndefinedError: 'dict object' has no attribute 'key_frame'
8:14:27 AM – (ERROR) Automation - message first occurred at 8:14:27 AM and shows up 3 times
Template variable warning: 'dict object' has no attribute 'title' when rendering '{{response.title}}'
8:14:27 AM – (WARNING) helpers/template.py - message first occurred at 8:14:27 AM and shows up 2 times
```
I also had a question regarding the timing and responsiveness of the notification. As an example, if a car drives up my driveway, most of the time the notification I get says "No vehicles or people detected," and the snapshot sent is usually an image from after the event has already taken place. I'm wondering whether it's the number of snapshots or the recording period that needs to be changed? I currently have it set to 3 frames over 3 seconds.
You can now use the image and camera entities exposed by the Frigate integration directly. If you use Frigate, you'll also have some binary_sensors for motion and object detection. You can use those as 'Motion Sensors'. The automation will then be triggered when one of those sensors changes to on. Keep in mind that if you have multiple cameras, the order of the cameras and motion sensors must match.
You can use any binary sensor (helpers too, in case you want to add some custom logic via templates). Alternatively, the camera state can be used, so the automation is triggered when the camera state changes, e.g. to recording.
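As a sketch of the "custom logic via templates" idea, a template binary sensor can combine a camera's motion sensor with other state and then be selected as one of the blueprint's motion sensors. The entity IDs below (`binary_sensor.driveway_motion`, `zone.home`) are placeholders for illustration.

```yaml
# Hypothetical template binary sensor: turns on only when the driveway
# camera detects motion while nobody is home. Entity IDs are placeholders.
template:
  - binary_sensor:
      - name: "Driveway Motion While Away"
        device_class: motion
        state: >
          {{ is_state('binary_sensor.driveway_motion', 'on')
             and is_state('zone.home', '0') }}
```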
I’ve not seen anyone else have this issue, but I get an error during the Analyze Event step of the blueprint stating that no image input was provided. I’m not using Frigate but rather what used to be termed “camera mode”. I’m running ollama/llava-phi3:latest in a docker container and can connect to and chat with the model. I’ve a camera entity that is being provided by my Synology integration, and I’m using a motion sensor to trigger. The automation triggers as expected, sends a snapshot notification to my iPhone including the camera snapshot, but then fails as I said on the Analyze Event step.
Here's my error:
```
Triggered by the state of binary_sensor.garage_motion_sensor_smoke at April 14, 2025 at 9:20:57 PM
Test If multiple conditions match
Choose: No action executed
Choose: No action executed
Choose: Option 1 executed
Analyze event
Stopped because an error was encountered at April 14, 2025 at 9:21:03 PM (runtime: 6.03 seconds)
No image input provided
```
As a test I've also tried the automation using the ONVIF camera entity and had the same result. It appears that the camera stream is not getting passed to the LLM. There are no images in the www\llmvision\ folder either.
Are there more logs I can look at to troubleshoot the problem?
Does anyone know how to create entities to use to trigger when Frigate detects something but only in a specific zone?
I have 3 cameras, and they all have a zone called "alert". I only want the notification when the object is detected in that zone.
Right now I have binary_sensor.alert_all_occupancy, but that triggers for all 3 cameras. The individual cameras have sensors like binary_sensor.frontdoor_all_occupancy, but those trigger regardless of the zone.
I've tried a few things suggested by ChatGPT and DeepSeek, and I managed to do it via an input_boolean, but the blueprint only supports binary sensors.
Thank you!
Edit: I think I got it:
```yaml
mqtt:
  binary_sensor:
    - name: "Frontdoor Alert Zone Triggered"
      state_topic: "frigate/events"
      value_template: >
        {% if value_json.after is defined
              and value_json.after.camera == "frontdoor"
              and 'alert' in value_json.after.entered_zones %}
        true
        {% else %}
        false
        {% endif %}
      payload_on: "true"
      payload_off: "false"
      device_class: motion
      off_delay: 30
      unique_id: "frontdoor_alert_zone_triggered_motion"
      object_id: "frontdoor_alert_zone_triggered"
```
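For comparison, a similar result may be possible without a raw MQTT sensor by combining the occupancy sensors the Frigate integration already exposes. This is only a sketch using the entity IDs mentioned earlier in the thread; note it could mis-trigger if another camera's "alert" zone happens to be occupied at the same moment.

```yaml
# Hypothetical template sensor: on when the 'alert' zone is occupied AND
# the front door camera sees an object at the same time.
# Entity IDs are the ones listed above; adjust to your setup.
template:
  - binary_sensor:
      - name: "Frontdoor Alert Zone Occupancy"
        device_class: motion
        state: >
          {{ is_state('binary_sensor.alert_all_occupancy', 'on')
             and is_state('binary_sensor.frontdoor_all_occupancy', 'on') }}
```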
Hey all, this blueprint sounds so cool. I'm quite a newbie, so I'm struggling a bit. I have spent 3 days with ChatGPT trying to get my Frigate to connect to Mosquitto MQTT, with no luck.
I can publish to a topic and receive the messages in MQTT, but nowhere in the Frigate startup log does it even try to connect to MQTT. Also, no devices or entities show up under the MQTT integration. I'm sure it is something small, but I cannot get around this issue.
I want to use Frigate because my camera's state does not change on motion, so I need to trigger it the Frigate way. Also, I assume that once Frigate works I will see options in the blueprint to select between camera mode and Frigate mode? I cannot see those currently.