Camera, Frigate: Intelligent AI-powered notifications

LLM Vision Blueprint 1.4.0: Important Updates & Changes

Blueprint requires LLM Vision 1.4.0.

The LLM Vision Blueprint has received a major update in version 1.4.0 with significant changes and new features. Please review the updates carefully before upgrading.

:rotating_light: Breaking Changes

  • Frigate Mode Removed: The blueprint no longer supports Frigate mode. However, you can still use Frigate by utilizing camera entities and binary sensors from the Frigate integration.
  • Deprecated Options:
    • input_mode has been removed
    • detail has been removed. Use target_width to control the resolution of images.

:sparkles: New Features

  • Run Condition: Set custom conditions to control when the blueprint runs (e.g., only when no one is home or when the alarm is enabled).
  • Notification Delivery Modes:
    • Dynamic: Sends an initial live camera preview notification, then silently updates with the summary. (Live Preview not compatible with Android)
    • Consolidated: Sends a single notification when the summary is ready.
  • Use Memory Toggle: Enables access to stored information for improved context and automation.
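As a rough sketch of how a run condition might look in YAML (the input key `run_condition` and the exact structure are assumptions — check the blueprint UI for the real field names), a condition that only runs the automation when nobody is home could be:

```yaml
# Hypothetical excerpt — input key names are assumptions; configure via the UI if unsure
use_blueprint:
  path: valentinfrlch/event_summary.yaml
  input:
    run_condition:
      - condition: numeric_state
        entity_id: zone.home
        below: 1   # zone.home reports how many tracked people are currently home
```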

:pushpin: How to Update

  • Re-import the updated blueprint.
  • Review and adjust your settings according to the new structure.

Read the full release notes here. Happy automating!

@valentinfrlch These are welcome changes. However, I get the following log errors and no notification is generated.
Error while executing automation automation.ai_event_summary_llm_vision_v1_3_8: Error rendering data template: UndefinedError: 'dict object' has no attribute 'key_frame'
7:33:12 AM – (ERROR) Automation
AI Event Summary (LLM Vision v1.3.8): Choose at step 6: choice 1: (Snapshot) Update notification on notify devices: Error executing script. Error for call_service at pos 1: Error rendering data template: UndefinedError: 'dict object' has no attribute 'key_frame'
7:33:12 AM – (ERROR) Automation - message first occurred at 7:33:12 AM and shows up 3 times
Template variable warning: 'dict object' has no attribute 'title' when rendering '{{response.title}}'
7:33:12 AM – (WARNING) helpers/template.py - message first occurred at 7:33:12 AM and shows up 2 times

You need to update the blueprint to v1.4.0 as well.

@valentinfrlch Yes, I have done that. I have also deleted the original automation and created a new one with the updated blueprint. I get the same errors.
Error while executing automation automation.ai_event_summary_v1_4_0: Error rendering data template: UndefinedError: 'dict object' has no attribute 'key_frame'
8:14:27 AM – (ERROR) Automation
AI Event Summary (v1.4.0): Choose at step 6: choice 1: (Snapshot) Update notification on notify devices: Error executing script. Error for call_service at pos 1: Error rendering data template: UndefinedError: 'dict object' has no attribute 'key_frame'
8:14:27 AM – (ERROR) Automation - message first occurred at 8:14:27 AM and shows up 3 times
Template variable warning: 'dict object' has no attribute 'title' when rendering '{{response.title}}'
8:14:27 AM – (WARNING) helpers/template.py - message first occurred at 8:14:27 AM and shows up 2 times

Did you upgrade the integration to 1.4.0 as well? The blueprint relies on the updated integration.

I only find v1.3.9. I've searched HACS and it only shows 1.3.9, so I guess the update is still pending.

EDIT: Update finally appeared!

Love this Blueprint! Just wondering what the best way to go about integrating Double Take into these notifications might be?

Also, I was wondering if a "Clip" option could be added to preview mode, to watch the full clip as opposed to only a snapshot or live view?

No, I ended up switching to Frigate and learning to like it.

I also had a question regarding the timing and responsiveness of the notification. As an example, if a car drives up my driveway, most of the time the notification I get is “No vehicles or people detected” and the snapshot sent is usually an image after the event has already taken place. I’m wondering if it’s the number of snapshots sent or the recording period which needs to be changed? I currently have it at 3 frames over 3 seconds.

This might be a stupid question, but how do you select Frigate mode or camera mode?

That is from an old version and no longer exists.

You can now use the image and camera entities exposed by the frigate integration directly. If you use frigate, you’ll also have some binary_sensors for motion and object detection. You can use those as ‘Motion Sensors’. The automation will then be triggered when one of those sensors changes to on. Keep in mind that the order of the cameras and motion sensors must be the same if you have multiple.
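In YAML form, a minimal sketch of that pairing might look like this (the entity names are examples — substitute the ones your Frigate integration actually exposes):

```yaml
# Order matters: the first camera is paired with the first motion sensor, and so on
use_blueprint:
  path: valentinfrlch/event_summary.yaml
  input:
    camera_entities:
      - camera.frontdoor        # pairs with the first motion sensor below
      - camera.driveway         # pairs with the second
    motion_sensors:
      - binary_sensor.frontdoor_person_occupancy
      - binary_sensor.driveway_person_occupancy
```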

I did think that.
Thanks!!

Is there an alternative to these sensors? They don't actually work that well for me; I get a lot more false positives using them, as per the comment here: [Config Support]: Frigate to Home Assistant Zone Occupancy Issues · blakeblackshear/frigate · Discussion #16697 · GitHub

You can use any binary sensor (helpers too, in case you want to add some custom logic via templates). Alternatively, the camera state can be used, so the automation is triggered when the camera state changes e.g. to recording.
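For the helper route, a minimal template binary sensor that turns on when the camera starts recording might look like this in configuration.yaml (the camera entity name is an example):

```yaml
# configuration.yaml — a template binary sensor wrapping the camera state
template:
  - binary_sensor:
      - name: "Driveway Camera Recording"
        device_class: motion
        state: "{{ is_state('camera.driveway', 'recording') }}"
```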

I’ve not seen anyone else have this issue, but I get an error during the Analyze Event step of the blueprint stating that no image input was provided. I’m not using Frigate but rather what used to be termed “camera mode”. I’m running ollama/llava-phi3:latest in a docker container and can connect to and chat with the model. I’ve a camera entity that is being provided by my Synology integration, and I’m using a motion sensor to trigger. The automation triggers as expected, sends a snapshot notification to my iPhone including the camera snapshot, but then fails as I said on the Analyze Event step.

Here’s my error:

Triggered by the state of binary_sensor.garage_motion_sensor_smoke at April 14, 2025 at 9:20:57 PM
Test If multiple conditions match
Choose: No action executed
Choose: No action executed
Choose: Option 1 executed
Analyze event
Stopped because an error was encountered at April 14, 2025 at 9:21:03 PM (runtime: 6.03 seconds)
No image input provided

My config:

id: '1744066323766'
alias: Garage Motion
description: ''
use_blueprint:
  path: valentinfrlch/event_summary.yaml
  input:
    notify_device:
      - 5880c43e418e52e36dd53dbf43e3c34d
    camera_entities:
      - camera.garage_camera_camera
    motion_sensors:
      - binary_sensor.garage_motion_sensor_smoke
    provider: 01JR9713QTK6W5BMM2X24YXV46
    model: llava-phi3
    preview_mode: Snapshot
    remember: true
    trigger_state: recording

As a test I’ve also tried the automation using the Onvif camera entity and had the same result. It appears that the camera stream is not getting passed to the LLM. There are no images in the www\llmvision\ folder either.

Are there more logs I can look at to troubleshoot the problem?
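One option, assuming the integration logs under the usual custom_components namespace (the exact logger name is an assumption), is to enable debug logging in configuration.yaml and then check Settings → System → Logs:

```yaml
# configuration.yaml — raise log verbosity for this integration only
logger:
  default: warning
  logs:
    custom_components.llmvision: debug
```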

Does anyone know how to create entities to use to trigger when Frigate detects something but only in a specific zone?

I have 3 cameras and they all have a zone called Alert. I only want the notification when the object is detected in the zone “alert”.

Right now I have binary_sensor.alert_all_occupancy but that triggers for all 3 cameras. The individual cameras have binary_sensor.frontdoor_all_occupancy but that triggers regardless of the zone.

I’ve tried a few things suggested by chatgpt and deepseek and I managed to do it via input_boolean but the blueprint only supports binary sensors.

Thank you!

Edit: I think I got it

mqtt:
  binary_sensor:
    - name: "Frontdoor Alert Zone Triggered"
      state_topic: "frigate/events"
      value_template: >
        {% if value_json.after is defined and value_json.after.camera == "frontdoor" and 'alert' in value_json.after.entered_zones %}
          true
        {% else %}
          false
        {% endif %}
      payload_on: "true"
      payload_off: "false"
      device_class: motion
      off_delay: 30
      unique_id: "frontdoor_alert_zone_triggered_motion"
      object_id: "frontdoor_alert_zone_triggered"
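One way to sanity-check a template like this before relying on the sensor is to paste a variant of it into Developer Tools → Template with a made-up event payload (the payload below is a minimal example, not a real Frigate message):

```jinja
{% set value_json = {"after": {"camera": "frontdoor", "entered_zones": ["alert"]}} %}
{% if value_json.after is defined and value_json.after.camera == "frontdoor" and 'alert' in value_json.after.entered_zones %}
  true
{% else %}
  false
{% endif %}
```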

My messages are cut off after about 80 characters. Is there a way to change the length of the message?
Using model: gemini-2.0-flash-lite

Hey all, this blueprint sounds so cool. I'm quite a newbie so I'm struggling a bit. I have spent three days with ChatGPT trying to get my Frigate to connect to Mosquitto MQTT, with no luck.

I can post to a topic and receive the messages on MQTT, but nowhere in the Frigate startup log does it even try to connect to MQTT. Also, no devices or entities show up under the MQTT integration. I'm sure it is something small, but I cannot get around this issue.

I want to use Frigate, as my camera's state does not change on motion, so I need to trigger it the Frigate way. Also, once Frigate works, will I see options in the blueprint to select between camera mode and Frigate mode? I cannot see that currently.

Any assistance will be appreciated

You need the Home Assistant Frigate Integration: GitHub - blakeblackshear/frigate-hass-integration: Frigate integration for Home Assistant. This should expose your cameras as camera entities in Home Assistant as well as the sensors for the zones you’ve set up in Frigate.

@tkoeberl You can increase max_tokens.
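A rough sketch of where that setting could live in YAML (assuming the input is literally named max_tokens, as mentioned above — adjust via the blueprint UI if the field is labeled differently):

```yaml
use_blueprint:
  path: valentinfrlch/event_summary.yaml
  input:
    max_tokens: 200   # raise this if summaries are truncated
```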

I do have the integration and I can see my camera on the Frigate Web ui.

When I start Frigate it doesn't connect to MQTT.

MQTT is working as I can listen to a custom topic, publish to it and receive it.

I just can’t seem to get Frigate to connect to MQTT
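For comparison, the MQTT section of a Frigate config.yml typically looks like the sketch below (host and credentials are placeholders). A common pitfall is using localhost when Frigate runs in its own container — use the broker's LAN IP instead:

```yaml
# frigate config.yml — placeholders, replace with your broker details
mqtt:
  enabled: true
  host: 192.168.1.10     # broker IP; 'localhost' won't resolve from a separate container
  port: 1883
  user: mqtt_user
  password: mqtt_password
```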