This blueprint will ask AI to analyze a camera snapshot and write that analysis to the camera’s logbook.
This blueprint is meant to showcase the features of the new AI task integration and is kept simple on purpose. If you want extra features, take control of this blueprint inside Home Assistant, or make a copy and extend it. Don’t forget to share your work!
```yaml
blueprint:
  name: AI camera analysis
  description: >-
    Analyze camera footage with AI when motion is detected and write it to the logbook.
  domain: automation
  author: Paulus Schoutsen
  input:
    motion_entity:
      name: Motion Sensor
      selector:
        entity:
          filter:
            device_class: motion
            domain: binary_sensor
    camera_target:
      name: Camera
      selector:
        entity:
          domain: camera
    extra_instructions:
      name: Extra Instructions
      description: >
        Additional instructions for the AI to consider when analyzing the camera footage.
        This can be used to specify what to look for in the footage.
      selector:
        text: null
      default: ""
    analysis_delay:
      name: Delay before analysis
      description: Time to wait before analyzing the camera after motion is detected.
      default: 5
      selector:
        number:
          min: 0
          max: 3600
          unit_of_measurement: seconds
    cooldown_time:
      name: Cooldown
      description: Time to wait between analyses.
      default: 60
      selector:
        number:
          min: 0
          max: 3600
          unit_of_measurement: seconds
mode: single
max_exceeded: silent
triggers:
  - trigger: state
    entity_id:
      - !input motion_entity
    from: "off"
    to: "on"
actions:
  - variables:
      camera_entity: !input camera_target
      extra_instructions: !input extra_instructions
  - delay:
      seconds: !input analysis_delay
  - alias: Analyse camera image
    action: ai_task.generate_data
    data:
      task_name: "{{ this.entity_id }}"
      instructions: >
        Give a 1 sentence analysis of what is happening on this camera picture. {{ extra_instructions }}
      structure:
        analysis:
          selector:
            text: null
      attachments:
        media_content_id: "media-source://camera/{{ camera_entity }}"
        media_content_type: ""
    response_variable: result
  - alias: Write analysis to logbook
    action: logbook.log
    data:
      entity_id: "{{ camera_entity }}"
      message: "analysis: {{ result.data.analysis }}"
      domain: ai_task
      name: "{{ states[camera_entity].name }}"
  - delay:
      seconds: !input cooldown_time
```
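If you want to sanity-check the AI side before wiring it into an automation, the same action the blueprint uses can be called by hand from Developer Tools → Actions. A minimal sketch using only the fields the blueprint itself passes (`camera.front_door` is a placeholder; substitute your own camera entity):

```yaml
action: ai_task.generate_data
data:
  task_name: manual camera test
  instructions: Give a 1 sentence analysis of what is happening on this camera picture.
  structure:
    analysis:
      selector:
        text: null
  attachments:
    media_content_id: media-source://camera/camera.front_door  # placeholder entity
    media_content_type: ""
```

Developer Tools shows the response, so you can confirm the `data.analysis` field comes back before relying on it in the logbook step.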
Yes, thanks for the reply, although I was hoping to use the Frigate recording as a feed rather than taking a snapshot at the time of motion. Still a great option regardless.
I have been trying to experiment with this, but I have two problems, even though I believe my AI connection is working well. I am using it with Ollama, set up as my voice assistant, which is working fine; Assist is not, as far as I can tell.
When trying to run the automation, I get the following in the traces: "Error: Last content in chat log is not an AssistantContent".
When trying to rename the automation using Suggest with AI, I get the error: "Failed to perform the action ai_task/generate_data. Last content in chat log is not an AssistantContent".
Under System / General / AI suggestions, I set Ollama AI Task. Is there some other setting needed for this to work?
Edit: I see now that which model you use with Ollama is critical. I started out using gpt-oss, which worked great as a conversation agent, but not for either AI camera analysis or Suggest with AI. I switched to gemma3:12b (running on my M4 Mac Mini) and the automation worked, but the suggestions still don't; now I get the error: "Failed to perform the action ai_task/generate_data. Error with Ollama structured response." This is going to take some experimentation with different models, I think, but I'm excited about the progress with local AI in HA!
Have you tried taking over the blueprint? Then look for notification examples. I'm pretty sure I could do iOS app notifications, since I've done that with other automations, but I'm not sure about Telegram.
I tried, and it doesn’t look easy lol. I don’t really understand it, tbh. I don’t know how to take the templates, Jinja, and variables it gathered from the AI data and turn that into a notification or TTS announcement.
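For what it's worth, after taking over the blueprint you can append one more step to its `actions:` list, right after the logbook step, and reuse the same `result` variable the AI step fills in. A minimal sketch, assuming a companion-app notify service called `notify.mobile_app_your_phone` (a placeholder; substitute your own):

```yaml
- alias: Notify phone with the AI analysis
  action: notify.mobile_app_your_phone  # placeholder; use your own notify service
  data:
    title: "Motion on {{ states[camera_entity].name }}"
    message: "{{ result.data.analysis }}"
```

A TTS announcement works the same way: swap in a TTS action and pass `result.data.analysis` as the message.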
Here is my automation that uses an LLM and sends the snapshot to Telegram, followed by the AI summary, and finally sends a 10-second video clip, all to Telegram.
Maybe someone can use some of the YAML to update the blueprint.
```yaml
alias: "AI: Rear Cam Describe motion detected from Rear Garden Camera"
description: ""
triggers:
  - trigger: state
    entity_id:
      - binary_sensor.nvt_cell_motion_detection
    from: "off"
    to: "on"
    for:
      hours: 0
      minutes: 0
      seconds: 2
conditions:
  - condition: time
    after: "06:30:00"
    before: "21:00:00"
    weekday:
      - mon
      - tue
      - wed
      - thu
      - fri
      - sat
      - sun
actions:
  - action: camera.snapshot
    metadata: {}
    data:
      filename: /config/www/cam_snaps/rear_cam_snap.jpg
    target:
      entity_id: camera.192_168_1_189_5
  - delay:
      hours: 0
      minutes: 0
      seconds: 2
      milliseconds: 0
  - action: llmvision.image_analyzer
    metadata: {}
    data:
      remember: false
      include_filename: false
      target_width: 1280
      max_tokens: 100
      expose_images: false
      provider: 01K0VN6V21WKSZJDW3YVQS1P46
      message: >-
        Summarize in two sentences the events based on the image captured. Focus
        only on moving subjects such as people, vehicles, and other active
        elements. Ignore static objects and scenery. Provide a clear and concise
        account of movements and interactions. Do not mention or imply the
        existence of images—present the information as if directly observing the
        events. If no movement is detected, respond with: 'No activity
        observed.'
      image_file: /config/www/cam_snaps/rear_cam_snap.jpg
    response_variable: response
  - action: telegram_bot.send_message
    data:
      config_entry_id: 01K073YQ5DZG22YBY0Q9STMMZT
      message: >-
        {{ response.response_text }} System time now: {{
        now().strftime('%H:%M:%S') }}
      title: Rear Camera Motion Detection
  - action: telegram_bot.send_video
    data:
      url: >-
        http://192.168.1.189:1984/api/stream.mp4?src=ustou_rear_garden&mp4=flac&duration=10&filename=record.mp4
      caption: Motion Detected @ {{ now().strftime('%H:%M:%S') }}
      config_entry_id: 01K073YQ5DZG22YBY0Q9STMMZT
mode: single
```
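If anyone wants to fold the Telegram part back into the blueprint above, the simplest route is probably one extra step after the blueprint's logbook action, reusing its `result` variable instead of LLM Vision. A sketch, with the `config_entry_id` left as a placeholder you'd replace with your own:

```yaml
- alias: Send AI analysis to Telegram
  action: telegram_bot.send_message
  data:
    config_entry_id: YOUR_TELEGRAM_CONFIG_ENTRY_ID  # placeholder
    title: Camera Motion Detection
    message: "{{ result.data.analysis }} System time now: {{ now().strftime('%H:%M:%S') }}"
```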