Generative AI camera snapshot notification

I also needed to take control of the blueprint so I could use my camera's binary sensor for person detection.

To work around the API rate limit, I think there is an even simpler method:
Configure the automation with:

mode: single
max_exceeded: silent

and add a delay to the action sequence:

delay: 60
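Put together, the relevant parts would look something like this (a minimal sketch: with `mode: single`, a trigger that fires while the automation is still inside the delay is simply dropped, and `max_exceeded: silent` suppresses the warning that would otherwise be logged):

```yaml
# Sketch: throttle the automation to at most one run per minute
mode: single
max_exceeded: silent
actions:
  # ... snapshot / AI / notification steps ...
  - delay: 60  # seconds; while this runs, new triggers are dropped silently
```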

Here is the edited automation. Be aware, I added some instructions to make it mean :smiley:. Also, I'm sending the notification through Discord so I can view the images later, since HA iOS notifications are ephemeral. Finally, I modified the template to ignore the "no obvious motion" case, as it wasn't working well for me.

alias: Camera Snapshot, AI & Notification on Motion
description: ""
triggers:
  - entity_id:
      - binary_sensor.g4_doorbell_person_detected
    to: "on"
    trigger: state
actions:
  - if:
      - condition: []
    then:
      - repeat:
          count: "{{ num_snapshots }}"
          sequence:
            - target:
                entity_id: "{{ camera_device }}"
              data:
                filename: >-
                  ./www/snapshots/{{ camera_path }}_snapshot{{ repeat.index
                  }}.jpg
              action: camera.snapshot
            - delay:
                milliseconds: 700
      - data:
          prompt: "{{ ai_prompt }}"
          image_filename: >
            {% set snap_count = num_snapshots %} {% set ns =
            namespace(images=[]) %} {% for i in range(1, snap_count + 1) %}
              {% set image = "./www/snapshots/" ~ camera_path ~ "_snapshot" ~ i ~ ".jpg" %}
              {% set ns.images = ns.images + [image] %}
            {% endfor %} {{ ns.images }}
        response_variable: generated_content
        action: google_generative_ai_conversation.generate_content
      - choose:
          - conditions:
              - condition: template
                value_template: >-
                  {{ 'no obvious motion observed' not in
                  generated_content['text'] | lower }}
            sequence:
              - action: notify.discord_doorbell_channel
                data:
                  message: ""
                  data:
                    embed:
                      title: "{{ motion_name }}"
                      description: "{{ generated_content['text'] }}"
                    images:
                      - >-
                        /config/www/snapshots/g4_doorbell_high_resolution_channel_snapshot2.jpg
              - delay: 60
variables:
  camera_device: camera.g4_doorbell_high
  camera_name: "{{ state_attr(camera_device, 'friendly_name') }}"
  camera_path: "{{ state_attr(camera_device, 'friendly_name') | lower | replace(' ', '_') }}"
  motion_sensor: binary_sensor.g4_doorbell_person_detected
  motion_name: "{{ state_attr(motion_sensor, 'friendly_name') }}"
  is_ios: true
  num_snapshots: 3
  snapshot_access_file_path: /local/snapshots/{{ camera_path }}_snapshot1.jpg
  ai_prompt: >-
    Motion has been detected. Compare and very briefly describe what you see in
    the following sequence of images from my {{ camera_name }} camera; be very
    mean and roast. The answer should be in English, followed by the Spanish
    translation. Do not describe stationary objects or buildings. If you see no
    obvious causes of motion, reply with "Camera has detected motion however no
    obvious motion observed comparing snapshots".
mode: single
max_exceeded: silent

Hi All

I have this working; however, I would like the words sent in the notification to also be read out on my Google speaker. Is there a simple enough way to do this, please?

thanks


You'll need to add a tts action call with the generated text from the previous action; something like this in the sequence:

action: tts.speak
data:
  cache: true
  message: "{{ generated_content['text'] }}"
  media_player_entity_id: media_player.kids_bedroom
target:
  entity_id: tts.openai_tts_echo

For this ^, you’ll also need to take control of the blueprint so you can customize it.


This works great; however, it always runs on the previous images, as there is a slight delay while the image files from the Ring doorbell are processed.

I've tried adding a delay before the sequence that processes the images, but that doesn't help.

Is there anything else I can try?

Thank you for sharing this, with a bit of tweaking it will be great!

Works perfectly, thank you!


Has anyone got this working with Google Nest cams?

It appears the snapshot is blank because the Nest cams use WebRTC? Related threads: "Snapshots from a Nest camera so I can do object detection" and "camera_view has to be set to live to see on dashboard".

I like the blueprint :+1: but had the same issue as a few people above, that nobody was ever in the picture - plus I had a lot of false positives due to the movement of trees, bushes, etc.

I resolved it by triggering off object detection in my Frigate installation:
sensor.whatevercam_all_count

Works great for me now and shows the person in the image that’s sent.
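For anyone wanting to replicate this, a hedged sketch of what that trigger might look like (the sensor name is the one mentioned above; Frigate's `*_count` sensors report the number of currently detected objects):

```yaml
# Untested sketch: trigger when Frigate detects at least one object
triggers:
  - trigger: numeric_state
    entity_id: sensor.whatevercam_all_count
    above: 0
```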

Thanks again for making this so easy!


I scrolled through this sizable thread, and apologies if this was already asked: how do I add a specific notification channel to the automation created from this blueprint?

In my other notifications of this severity, I would use this data value:

data:
  channel: Low

I'd try to open the blueprint YAML in

\blueprints\automation\mcinnes01

and add the channel just before the last line, so that the last lines look like this:

        data: '{% set android_data = {"image": snapshot_access_file_path} %} {% set
          ios_data = {"attachment": {"url": snapshot_access_file_path, "content_type":
          "JPEG"}} %} {{ ios_data if is_ios else android_data }}

          '
          data:
            channel: Low
mode: single

Obviously NOT tested!


When I try to edit the automation in YAML, or the blueprint, using the Home Assistant app, I do not see anything like the code snippet you provided.

UPDATE:
My bad. I instead used the File Editor add-on and browsed to the location of the blueprint YAML file. I see what you're saying now.

I took the first five lines and the last line of my example straight out of the (only) file, ai_camera_motion_notification.yaml, in the folder \blueprints\automation\mcinnes01.

No idea why yours would look different - unless you downloaded a previous version of the blueprint that didn’t have these lines (yet), which is hard to imagine because it holds the info for the notification that’s sent to your mobile.


I had to play around with the indentation for a bit, since File Editor was complaining. This is what I ended up with that cleared the formatting errors.
But after a new trigger from the camera, the notification channel is still "General", not the new "Low" channel:

sequence:
      - device_id: !input notify_device
        domain: mobile_app
        type: notify
        title: '{{ motion_name }} Detected'
        message: '{{ generated_content[''text''] }}'
        data: '{% set android_data = {"image": snapshot_access_file_path} %} {% set
          ios_data = {"attachment": {"url": snapshot_access_file_path, "content_type":
          "JPEG"}} %} {{ ios_data if is_ios else android_data }}

          '
      data:
        channel: Low
mode: single

This needs to be at one level lower than the data entry above, i.e. two spaces further to the right.
Apologies, my spacing might not have been correct in the example.


So should I ignore this formatting error that File Editor is complaining about?

duplicated mapping key (125:9)

 122 |           "JPEG"}} %} {{ ios_data if is ...
 123 | 
 124 |           '
 125 |         data:
---------------^
 126 |           channel: Low

It still looks like it's not indented far enough; it needs to be on the next level down from the `data` above, i.e. 10 spaces in.


Yep, I have the same.

Oddly, every time this runs I get the same thing: the image and the AI description are from the previous event.

This is what I have now for channel and delay; File Editor is still complaining about the indentation, though:

      sequence:
      - device_id: !input notify_device
        domain: mobile_app
        type: notify
        title: '{{ motion_name }} Detected'
        message: '{{ generated_content[''text''] }}'
        data: '{% set android_data = {"image": snapshot_access_file_path} %} {% set
          ios_data = {"attachment": {"url": snapshot_access_file_path, "content_type":
          "JPEG"}} %} {{ ios_data if is_ios else android_data }}

          '
          data:
            channel: Low
      - delay: 60
mode: single

The notification comes in under the General notification channel, not Low.

I just sent a message through formatted like this and it created the channel ‘Low’ in my app:

          data:
            ttl: 0
            priority: high
            channel: Low

I updated the yaml to:

      - device_id: !input notify_device
        domain: mobile_app
        type: notify
        title: '{{ motion_name }} Detected'
        message: '{{ generated_content[''text''] }}'
        data: '{% set android_data = {"image": snapshot_access_file_path} %} {% set
          ios_data = {"attachment": {"url": snapshot_access_file_path, "content_type":
          "JPEG"}} %} {{ ios_data if is_ios else android_data }}

          '
          data:
            ttl: 0
            priority: high
            channel: Low 
      - delay: 60
mode: single

And the notifications still arrive in the General notification channel…

WAIT! Do I have to redeploy the blueprint after editing the blueprint YAML, or does my existing automation inherit the edits?

Bummer - I’m out of ideas now :frowning:

My understanding is that you run the risk of overwriting the manual changes you made if you redeploy the blueprint.

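One more avenue worth trying (an untested sketch): the blueprint's `data:` is a single Jinja template that already emits the whole payload dict, so adding a second nested `data:` key either triggers the duplicate-key error shown above or is silently ignored. Merging `channel` (and `ttl`/`priority`) into the Android dict inside that template keeps the YAML valid and puts the keys where the companion app expects them:

```yaml
data: >-
  {% set android_data = {"image": snapshot_access_file_path,
                         "ttl": 0, "priority": "high",
                         "channel": "Low"} %}
  {% set ios_data = {"attachment": {"url": snapshot_access_file_path,
                                    "content_type": "JPEG"}} %}
  {{ ios_data if is_ios else android_data }}
```

Also note that after editing the blueprint file, you likely need to reload automations (or restart) before existing automations pick up the change.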