AI camera analysis

This blueprint requires Home Assistant 2025.8+ and a default AI task entity configured.

This blueprint will ask AI to analyze a camera snapshot and write that analysis to the camera’s logbook.

[Import link: open your Home Assistant instance to the blueprint import dialog with this blueprint pre-filled.]

This blueprint is meant to showcase the features of the new AI task integration and is kept simple on purpose. If you want extra features, take control of this blueprint inside Home Assistant or make a copy and extend. Don’t forget to share your work!

blueprint:
  name: AI camera analysis
  description: >-
    Analyze camera footage with AI when motion is detected and write it to the logbook.
  domain: automation
  author: Paulus Schoutsen
  input:
    motion_entity:
      name: Motion Sensor
      selector:
        entity:
          filter:
            device_class: motion
            domain: binary_sensor
    camera_target:
      name: Camera
      selector:
        entity:
          domain: camera
    extra_instructions:
      name: Extra Instructions
      description: >
        Additional instructions for the AI to consider when analyzing the camera footage.
        This can be used to specify what to look for in the footage.
      selector:
        text: null
      default: ""
    analysis_delay:
      name: Delay before analysis
      description: Time to wait before analyzing the camera after motion is detected.
      default: 5
      selector:
        number:
          min: 0
          max: 3600
          unit_of_measurement: seconds
    cooldown_time:
      name: Cooldown
      description: Time to wait between analyses.
      default: 60
      selector:
        number:
          min: 0
          max: 3600
          unit_of_measurement: seconds

mode: single
max_exceeded: silent

triggers:
  - trigger: state
    entity_id:
      - !input motion_entity
    from: "off"
    to: "on"

actions:
  - variables:
      camera_entity: !input camera_target
      extra_instructions: !input extra_instructions

  - delay:
      seconds: !input analysis_delay
  - alias: Analyse camera image
    action: ai_task.generate_data
    data:
      task_name: "{{ this.entity_id }}"
      instructions: >
        Give a 1 sentence analysis of what is happening on this camera picture. {{ extra_instructions }}
      structure:
        analysis:
          selector:
            text: null
      attachments:
        media_content_id: "media-source://camera/{{ camera_entity }}"
        media_content_type: ""
    response_variable: result
  - alias: Write analysis to logbook
    action: logbook.log
    data:
      entity_id: "{{ camera_entity }}"
      message: "analysis: {{ result.data.analysis }}"
      domain: ai_task
      name: "{{ states[camera_entity].name }}"
  - delay:
      seconds: !input cooldown_time

this is a good idea, but you need something more than just logging the result.

you need to add a notification, or turn on a switch if the result is positive.

for example: if a bird is in frame, turn a switch on/off;
or if a person is in frame, send a notification.
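One way to sketch this, after taking control of the blueprint: add a boolean field alongside the text in the `structure` block, then branch on it in the actions. The field name `person_detected` and the entity names below are examples, not part of the original blueprint.

```yaml
# Hypothetical extension: ask the AI for a yes/no field in addition to the
# analysis text, then act on it. Field and entity names are placeholders.
      structure:
        analysis:
          selector:
            text: null
        person_detected:
          selector:
            boolean: null
# ...then, after the ai_task.generate_data step:
  - choose:
      - conditions: "{{ result.data.person_detected }}"
        sequence:
          - action: notify.mobile_app_phone  # replace with your notify service
            data:
              message: "Person detected: {{ result.data.analysis }}"
```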


I like this a lot, but it needs 2025.8? Do you have this running on the beta?

It’s been intentionally kept as simple as possible for now.

nice! is it allowed to modify this blueprint (extend it with more features etc.) and upload it again?

nice base automation BP for demonstration and expansion, glad HA decided to implement this feature.

What happens to the analysed data? Is it possible to display that?

is it possible to feed a URL to the attachment parameter, for example a Frigate image URL?

In this example it's writing a new entry to the logbook; you could easily change this to send a notification or similar.

You should be able to feed the Frigate camera entity as the attachment, and it will take a snapshot.
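As a rough sketch, the attachment in the blueprint would then point at the Frigate camera entity via the camera media source (the entity ID `camera.front_door` is an example; whether a direct image URL works instead is not confirmed here):

```yaml
      attachments:
        media_content_id: "media-source://camera/camera.front_door"
        media_content_type: ""
```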

I did an extended version of this blueprint, where you have an input_helper as output-entity:

[Import link: open your Home Assistant instance to the blueprint import dialog with this blueprint pre-filled.]
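The core of such an extension might look like the step below: instead of (or in addition to) the logbook entry, write the AI response into an `input_text` helper you create yourself. The helper name `input_text.last_camera_analysis` is an example; note that `input_text` values are limited to 255 characters.

```yaml
  - alias: Store analysis in a helper (example entity name)
    action: input_text.set_value
    data:
      value: "{{ result.data.analysis }}"
    target:
      entity_id: input_text.last_camera_analysis
```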


Yes, thanks for the reply. I was hoping to use the Frigate recording as a feed rather than taking a snapshot at the time of motion, but it's still a great option regardless.

I have been trying to experiment with this, but I have two problems, even though I believe my AI connection is working well. I am using it with Ollama, set up as my voice assistant, which works fine; the AI task side does not, as far as I can tell.

When trying to run the automation, I get the error: Error: Last content in chat log is not an AssistantContent in the traces.

When trying to rename the automation using suggest with AI, I get the error: Failed to perform the action ai_task/generate_data. Last content in chat log is not an AssistantContent

Under System / General / AI suggestions, I set Ollama AI Task. Is there some other setting needed for this to work?

edit: I see now that which model you use with Ollama is critical. I started out using gpt-oss, which worked super as a conversation agent, but not for either AI camera analysis or suggest with AI. I switched to gemma3:12b (running on my M4 Mac Mini) and the automation worked! But not the suggest; now I get the error: Failed to perform the action ai_task/generate_data. Error with Ollama structured response. This is going to take some experimentation with different models, I think, but I'm excited about the progress with local AI in HA!

I would love it if the snapshot and AI text could be sent via my Telegram bot. Is this possible?

Any help will be appreciated 🙂

Have you tried taking control of the blueprint? Then look for notification examples. I'm pretty sure I could do iOS app notifications, since I've done that with other automations, but I'm not sure about Telegram.

I tried, and it doesn't look easy lol. I don't really understand it, tbh. I don't know how to take those Jinja templates and the variables gathered from the AI data and turn them into a notification or TTS announcement.
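After taking control of the blueprint, the `result` variable returned by `ai_task.generate_data` can be reused in any later action. A rough sketch of both a notification and a TTS announcement (the notify service, TTS entity, and media player names below are examples, not from the blueprint):

```yaml
  # Example notification using the AI analysis text.
  - action: notify.mobile_app_my_phone  # replace with your notify service
    data:
      title: "Camera activity"
      message: "{{ result.data.analysis }}"
  # Example TTS announcement on a media player.
  - action: tts.speak  # assumes a TTS entity is configured
    target:
      entity_id: tts.home_assistant_cloud
    data:
      media_player_entity_id: media_player.living_room
      message: "{{ result.data.analysis }}"
```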

I could send the snapshot and AI text with Pushover; that works nicely.

I could also send the AI text as TTS to my Alexa Echo Dots.
Very nice development.

my mistake

Here is my automation that uses LLM Vision and sends the snapshot to Telegram, followed by the AI summary, and finally sends a 10-second video clip, all to Telegram.
Maybe someone can use some of the YAML to update the blueprint.

Prerequisites:
LLM Vision integration installed
go2rtc camera integration installed
Telegram integration installed

alias: "AI: Rear Cam Describe motion detected from Rear Garden Camera"
description: ""
triggers:
  - trigger: state
    entity_id:
      - binary_sensor.nvt_cell_motion_detection
    from: "off"
    to: "on"
    for:
      hours: 0
      minutes: 0
      seconds: 2
conditions:
  - condition: time
    after: "06:30:00"
    before: "21:00:00"
    weekday:
      - mon
      - tue
      - wed
      - thu
      - fri
      - sat
      - sun
actions:
  - action: camera.snapshot
    metadata: {}
    data:
      filename: /config/www/cam_snaps/rear_cam_snap.jpg
    target:
      entity_id: camera.192_168_1_189_5
  - delay:
      hours: 0
      minutes: 0
      seconds: 2
      milliseconds: 0
  - action: llmvision.image_analyzer
    metadata: {}
    data:
      remember: false
      include_filename: false
      target_width: 1280
      max_tokens: 100
      expose_images: false
      provider: 01K0VN6V21WKSZJDW3YVQS1P46
      message: >-
        Summarize in two sentences the events based on the image captured. Focus
        only on moving subjects such as people, vehicles, and other active
        elements. Ignore static objects and scenery. Provide a clear and concise
        account of movements and interactions. Do not mention or imply the
        existence of images—present the information as if directly observing the
        events. If no movement is detected, respond with: 'No activity
        observed.'
      image_file: /config/www/cam_snaps/rear_cam_snap.jpg
    response_variable: response
  - action: telegram_bot.send_message
    data:
      config_entry_id: 01K073YQ5DZG22YBY0Q9STMMZT
      message: >-
        {{ response.response_text }} System time now: {{
        now().strftime('%H:%M:%S') }}
      title: Rear Camera Motion Detection
  - action: telegram_bot.send_video
    data:
      url: >-
        http://192.168.1.189:1984/api/stream.mp4?src=ustou_rear_garden&mp4=flac&duration=10&filename=record.mp4
      caption: Motion Detected @ {{ now().strftime('%H:%M:%S') }}
      config_entry_id: 01K073YQ5DZG22YBY0Q9STMMZT
mode: single

I also found these useful blueprints for AI and Camera capture:

https://my.home-assistant.io/redirect/blueprint_import/?blueprint_url=https%3A%2F%2Fraw.githubusercontent.com%2Fvalentinfrlch%2Fha-llmvision%2Frefs%2Fheads%2Fmain%2Fblueprints%2Fevent_summary.yaml
https://my.home-assistant.io/redirect/blueprint_import/?blueprint_url=https%3A%2F%2Fgist.github.com%2FTheRealFalseReality%2Feab315e84f2711783fb7454ac0b42187
https://my.home-assistant.io/redirect/blueprint_import/?blueprint_url=https%3A%2F%2Fgist.github.com%2FTheRealFalseReality%2F471a961df8db47dd6feb9ca32b24b2fa