[Blueprint] Frigate Vision - AI-Powered Notifications with LLM Recognition, Cooldowns & Multi-Cam Logic (v0.9) 🚨

Firstly, credit where credit is due! A lot of this automation was inspired and even copied from both @SgtBatten’s Frigate notification Blueprint and @valentinfrlch’s LLMVision Blueprint. If anyone has an issue with content or code, please reach out.

Frigate Vision

After sharing a screenshot of one of my Frigate automations the other day, a few of you asked if I had a blueprint. At the time I didn’t… so I sat down, taught myself how to build one and here it is!

Introducing Frigate Vision: a blueprint designed to bring intelligent notifications and AI object recognition to your Home Assistant setup, powered by Frigate and LLMVision.


:page_facing_up: Get the Blueprint:

Import the blueprint into your Home Assistant instance via the My Home Assistant import link (the import dialog opens with the blueprint pre-filled).


:bulb: What Frigate Vision Does

  • :rotating_light: Listens for new Frigate detection events from any camera you choose using MQTT
  • :brain: Integrates with LLMVision to enrich notifications with event summaries
  • :clock3: Enforces per-camera cooldowns so you’re not spammed when a squirrel does laps in your yard
  • :iphone: Pushes mobile notifications with custom text, camera names, and optional sublabels (e.g., who or what was recognized)
  • :jigsaw: Uses input helpers so you can easily reuse this blueprint across cameras without editing YAML
  • :control_knobs: Debug mode lets you preview all variables and logic without sending notifications
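To give a feel for the flow described above, here is a minimal, hand-written sketch (not the blueprint itself) of reacting to a new Frigate detection event over MQTT and pushing a mobile notification. The camera name, notify service, and topic payload fields are examples based on Frigate's standard MQTT event format:

```yaml
# Minimal sketch: react to new Frigate events over MQTT and push a mobile
# notification. Entity and camera names here are illustrative examples.
trigger:
  - platform: mqtt
    topic: frigate/events
condition:
  # Frigate publishes "new", "update", and "end" events; act on "new" only,
  # and only for the camera this automation is configured for.
  - condition: template
    value_template: >
      {{ trigger.payload_json['type'] == 'new'
         and trigger.payload_json['after']['camera'] == 'front_door' }}
action:
  - service: notify.mobile_app_my_phone
    data:
      title: "Front Door"
      message: "{{ trigger.payload_json['after']['label'] }} detected"
```

The blueprint layers cooldowns, sublabels, and LLMVision summaries on top of this basic trigger/condition/action shape.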

šŸ› ļø Why I Built It:

I’ve used @SgtBatten’s reworked Frigate notifications for years and finally decided to break it down and recreate it to my liking. It started out as a fun project, but once I discovered LLMVision and the ability to generate dynamic event summaries from clips, I was hooked.

I’ve since spent time crafting what I felt was the ultimate smart notification setup for me. The blueprint includes 3 built-in notification actions and is currently optimized for Android (with iOS support planned). This is still a beta version, but it’s what I’d call mostly complete—and I’m already planning to add more customization options like SgtBatten’s original in future releases.


:gear: Requirements:

  • Frigate installed with MQTT events enabled
  • LLMVision installed and configured
  • Home Assistant mobile app (for push notifications)
  • An input_boolean helper for multi-camera queuing
  • A dashboard to use as a landing page (used for LLMVision event summary widget)
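For reference, the input_boolean helper from the requirements can be created in the UI (Settings → Devices & Services → Helpers) or declared in YAML. A minimal sketch, with an assumed helper name:

```yaml
# configuration.yaml fragment — the helper name is an assumption; create it
# via the Helpers UI instead if you prefer. The blueprint uses a helper like
# this to queue notifications across multiple cameras.
input_boolean:
  frigate_vision_queue:
    name: Frigate Vision multi-camera queue
    icon: mdi:camera-burst
```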

:construction: Current Version: v0.9

It’s working great in my setup, but I’m calling it a “beta” for now until I squash a few quirks and gather feedback.


:telescope: Coming in v1.0:

  • :soap: Cleaner debug logs & error handling
  • :green_apple: iOS support (notification actions + formatting tweaks)
  • :control_knobs: More customizable notification action sets

:brain: TL;DR:

Frigate Vision makes your HA notifications smarter by blending real-time detection with AI object recognition and smart logic, all wrapped in one reusable, configurable blueprint.


I’d love your feedback, ideas, bug reports (via GitHub, please), and feature requests. If you use it and like it, drop a comment; let’s make Frigate Vision even smarter!

Cheers,
Zach

P.S. The automation and blueprint were all cooked up by me, but I’m lazy and had AI assist with writing this forum post. :slight_smile:


I look forward to testing this out.


Definitely nowhere near as customizable as yours, but you basically single-handedly taught me how to build an automation and blueprint in YAML, so thanks!

I’d be honored to get your opinion.

Thank you so much for building this blueprint and sharing it with the community! I have to try this out.
If you have any questions about LLM Vision feel free to reach out!

Much appreciated! At first the only question I had was whether I can view the frames that get sent to the LLM, but then while building this I re-read the expose_images option and realized it’s already done!

Oh, actually I do have one: when using the retry time option, how does that work exactly?

For example:
Retries = 4
Time = 60s

Does it:

  1. Try to send 4 times over 4 minutes total
  2. Try to send 4 times over 1 minute (every 15s)

Thanks for making the tools to allow stuff like this!

Do you mean the frigate_retry_attempts and frigate_retry_seconds options? If so, your example would attempt to fetch the video from Frigate every 60 seconds, up to a maximum of 4 attempts (your first option: up to 4 minutes in total).

If you want to take a look at the implementation, the relevant part is here: ha-llmvision/custom_components/llmvision/media_handlers.py at d0adc2a0b2905bd4ea91c33afc423882daa5e5ee · valentinfrlch/ha-llmvision · GitHub

Ah, yeah that’s exactly it. Thanks! I figured that’s how it worked but wasn’t too sure. Much appreciated!


Feel free to share this on the Frigate GitHub as well: blakeblackshear/frigate · Discussions · GitHub

Also, we could potentially look at linking to this from the docs.


I’m having an issue using this blueprint: I keep getting the error "Message malformed: Missing input ios_live_view", but I don’t see any input relating to it.

Edit:
I got it to save by toggling "iOS Notification", then I was able to untoggle it once the automation was created.

Downloaded blueprint. Can’t wait to set it up.

Thanks for sharing this blueprint. I’ve been using SgtBatten’s for a long time now but I like the idea of using LLM stuff to describe the scene so I tried this out. A couple of things I noticed right away while attempting implementation.

  1. I tried to use this today and it errored out because the slugification failed.
  • The error message says this: notify.mobile_app_bobs_pixel_9_pro
  • but my notification entity is this: notify.mobile_app_bob_s_pixel_9_pro
    If you’re wondering, that missing underscore is an apostrophe ("Bob’s"), and apparently there is no way to force the raw entity name. Instead, I changed the Companion App name to remove the apostrophe and now it seems to be working.
  2. There are no filters for selecting action within a specific zone or set of zones. Unfortunately, this is a must for me since I have a broad view of lots of activity with many of my cameras, and I want to look at specific zone activity in the scene while ignoring activity in other zones.

I don’t like having a bunch of per-camera automations so I completely revamped this blueprint to support multiple cameras, include/exclude zones and made lots of other improvements. Under very active development.

I would like to give this a try.
I installed LLMVision and restarted HA, added it through the integration with defaults, and clicked submit. The LLMVision download page lists the following steps:

  1. Install LLM Vision from HACS
  2. Restart Home Assistant
  3. Search for LLM Vision in Home Assistant Settings/Devices & services
  4. Press submit to continue setup with default settings
  5. Set up the media folder. LLM Vision uses the more secure /media folder for storing snapshots. If you’re running Home Assistant Container, you may need to mount a folder to /media in your container settings. See the docs for more details.
  6. Return to the LLM Vision Integration Page
  7. Press ‘Add Entry’ to add your first AI Provider
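For the Home Assistant Container case mentioned in step 5, the /media mount boils down to a host-side bind mount. A minimal docker-compose sketch under that assumption (the host path is a placeholder; HA OS and VM installs already provide /media):

```yaml
# docker-compose.yml fragment — applies to Home Assistant Container only.
# Replace the host path with a real folder on your machine.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - /path/on/host/media:/media
```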

I can’t get past step 5. I am not sure what folder to provide and where. There is a location that says ‘Image Path’; I tried /media and it errors out, then /media/frigate, which still errors out.

I run HA as a VM and my Frigate runs on a QNAP. I have the Frigate integration set up in HA and was using their standard blueprint until today.

Can you point me in the right direction? Thanks.

I’ve been trying for some time, but ChatGPT and I have failed. Since I only use the free Gemini Flash model, I run out of credits fast and then the blueprint fails. It would be nice if it could still send the photo with a default title when Gemini fails, and also send to Telegram instead of the Companion App… cheers, Terry