After setting this up it worked for about 8 hours, then stopped. After investigating, I found I was hitting an API limit. My front door is like a revolving door, with kids, farm animals, etc. going by it all the time. Any ideas on how I can overcome the limit? I've tried to find it in the quota settings on the Google AI API, but they are all set to 0.
Error: Error generating content: 429 Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer . [reason: "RATE_LIMIT_EXCEEDED" domain: "googleapis.com" metadata { key: "service" value: "generativelanguage.googleapis.com" } metadata { key: "quota_metric" value: "generativelanguage.googleapis.com/generate_content_requests" } metadata { key: "quota_location" value: "us-east7" } metadata { key: "quota_limit" value: "GenerateContentRequestsPerMinutePerProjectPerRegion" } metadata { key: "quota_limit_value" value: "0" } metadata { key: "consumer" value: } , links { description: "Request a higher quota limit." url: "View and manage quotas | Cloud Quotas | Google Cloud" } ]
It will reset every minute for those quotas. You'd need to add logic for a cooldown, with a helper sensor to keep track of the last request timestamp, etc.
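A rough sketch of that guard, assuming an input_datetime helper named input_datetime.last_ai_call (the name and the 30-second window are just examples, untested):

# Hypothetical cooldown guard; helper name and 30 s window are assumptions.
- condition: template
  value_template: >-
    {% set last_call = states('input_datetime.last_ai_call') %}
    {{ last_call in ['unknown', 'unavailable']
       or (as_timestamp(now()) - as_timestamp(last_call)) > 30 }}
# Record the time of this AI call so the next run can compare against it.
- action: input_datetime.set_datetime
  target:
    entity_id: input_datetime.last_ai_call
  data:
    datetime: "{{ now().strftime('%Y-%m-%d %H:%M:%S') }}"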
Were you able to figure out how to resolve this? I'm seeing the same error and it is not resetting for me.
How do I get it to not send a notification if the reply is "No obvious motion has been detected"?
I had to log into the Google API web console and submit an appeal request to have them reactivate my account. I have since disabled the automation until I can figure out the right condition so it doesn't run more than x number of times per minute. I've experimented with a time-since-last-state-change condition, but don't have a state-change property that works yet. My thought is that within the automation, after the call to Google AI is made, it records a timestamp. Then I can compare against that timestamp: if more than 30 seconds have passed, run the Call Google AI action; if less than 30 seconds, just send a notification to my phone and skip Google AI. If I get something working, I will post back.
I'd like to add more devices to alert. Does anyone know how?
I have this blueprint set up, but the problem for me is that no one is ever in the snapshots. My guess is the snapshots are taken either before or after the motion, so they capture nothing.
For context, I'm using a Ring Pro 2 doorbell and this blueprint.
Any help would be greatly appreciated as I’ve been trying for a while to get this to work.
Great work on this. It works flawlessly for me, but I did get locked out of Google API calls because I went over their rate limit. My kids were testing the doorbell over and over, LOL. Here is a possible modification you could make to keep people from getting locked out of the Google API due to too-frequent requests. I modified the automation to check whether the Google AI API has been called in the last 30 seconds. If it has, it just sends a notification to my Home Assistant companion app saying that no AI analysis was done due to API rate limiting. If it's been more than 30 seconds, it processes x number of images through AI and sends a notification to my phone. It depends on a helper to store the last_ai_call datetime stamp.
I am new to blueprints, but it would be nice to have that 30-second interval be an option that anyone can set to whatever value they want.
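For anyone maintaining the blueprint, a cooldown input could look roughly like this (the input name ai_cooldown is made up; untested sketch):

blueprint:
  input:
    ai_cooldown:
      name: AI cooldown (seconds)
      description: Minimum time between Google AI calls
      default: 30
      selector:
        number:
          min: 0
          max: 600
          unit_of_measurement: seconds

# ...then inside the automation body:
variables:
  ai_cooldown: !input ai_cooldown
# ...and the template condition compares the elapsed time against ai_cooldown
# instead of the hard-coded 30.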
Helper:
input_datetime:
  last_ai_call:
    name: Last AI Call
    has_date: true
    has_time: true
Then restart Home Assistant.
Here is the edited automation YAML.
variables:
  camera_device: camera.doorbell_fluent
  camera_name: "{{ state_attr(camera_device, 'friendly_name') }}"
  camera_path: "{{ state_attr(camera_device, 'friendly_name') | lower | replace(' ', '_') }}"
  motion_sensor: binary_sensor.doorbell_motion
  motion_name: "{{ state_attr(motion_sensor, 'friendly_name') }}"
  is_ios: false
  num_snapshots: 3
  snapshot_access_file_path: /local/snapshots/{{ camera_path }}_snapshot1.jpg
  ai_prompt: >
    Motion has been detected, compare and very briefly describe what you see in
    the following sequence of images from my {{ camera_name }} camera. What do
    you think caused the motion alarm? If a person, animal, or car is present,
    describe them in detail. Do not describe stationary objects or buildings. If
    you see no obvious causes of motion, reply with "Camera has detected motion
    however no obvious motion observed comparing snapshots". Your message needs
    to be short enough to fit in a phone notification.
mode: single
triggers:
  - platform: state
    entity_id: binary_sensor.doorbell_motion
    to: "on"
actions:
  - if:
      - condition: template
        value_template: >-
          {% set last_call = states('input_datetime.last_ai_call') %}
          {{ last_call == 'unknown' or (as_timestamp(now()) - as_timestamp(last_call)) > 30 }}
    then:
      - service: input_datetime.set_datetime
        data:
          entity_id: input_datetime.last_ai_call
          datetime: "{{ now().strftime('%Y-%m-%d %H:%M:%S') }}"
      - repeat:
          count: "{{ num_snapshots }}"
          sequence:
            - action: camera.snapshot
              target:
                entity_id: "{{ camera_device }}"
              data:
                filename: ./www/snapshots/{{ camera_path }}_snapshot{{ repeat.index }}.jpg
            - delay:
                milliseconds: 500
      - action: google_generative_ai_conversation.generate_content
        data:
          prompt: "{{ ai_prompt }}"
          image_filename: >
            {% set snap_count = 3 %}
            {% set ns = namespace(images=[]) %}
            {% for i in range(1, snap_count + 1) %}
              {% set image = "./www/snapshots/" ~ camera_path ~ "_snapshot" ~ i ~ ".jpg" %}
              {% set ns.images = ns.images + [image] %}
            {% endfor %}
            {{ ns.images }}
        response_variable: generated_content
      - choose:
          - conditions:
              - condition: template
                value_template: >
                  {{ generated_content['text'] != 'Camera has detected motion
                  however no obvious motion observed comparing snapshots.' }}
            sequence:
              - device_id: c06a15b26c2575515d81fa43fdfa7d9f
                domain: mobile_app
                type: notify
                title: "{{ motion_name }} Detected"
                message: "{{ generated_content['text'] }}"
                data: >
                  {% set android_data = {"image": snapshot_access_file_path} %}
                  {% set ios_data = {"attachment": {"url": snapshot_access_file_path, "content_type": "JPEG"}} %}
                  {{ ios_data if is_ios else android_data }}
    else:
      - device_id: c06a15b26c2575515d81fa43fdfa7d9f
        domain: mobile_app
        type: notify
        title: Motion Detected Recently
        message: Motion detected but no new AI analysis performed due to rate limiting.
        data: >
          {% set android_data = {"image": snapshot_access_file_path} %}
          {% set ios_data = {"attachment": {"url": snapshot_access_file_path, "content_type": "JPEG"}} %}
          {{ ios_data if is_ios else android_data }}
id: "1732030627921"
alias: Camera Snapshot, AI & Notification on Motion
description: ""
This would be a great addition - my flag's shadow sets off the alerts often.
Can you help me figure out why my snapshots are not up to date with the motion that's causing them?
I'm having the same issue with my Reolink doorbell: motion is detected, and sometimes the object or person is moving quickly, so the snapshot it sends misses them or catches them at the edge of the frame. I haven't experimented yet, but if your camera has a person-detection state (like the Reolink does), you could set the automation's trigger to use that instead of motion detected.
I also needed to take control of the blueprint so I can use my camera's binary sensor for person detection.
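Roughly, the trigger change would look like this (the sensor entity below is a placeholder; check what your camera integration actually exposes):

triggers:
  - trigger: state
    entity_id: binary_sensor.front_door_person_detected  # placeholder person-detection sensor
    to: "on"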
To solve the API limit, I guess there is an even simpler method:
Configure the automation with:
mode: single
max_exceeded: silent
and add a delay to the action sequence:
delay: 60
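This works because in single mode new triggers are ignored while the automation is still running, the trailing delay keeps it "running" for a minute after each pass, and max_exceeded: silent just stops the "already running" warnings from piling up in the log. Skeleton of the layout (the real snapshot/AI steps are omitted here):

mode: single
max_exceeded: silent   # drop re-triggers quietly instead of logging a warning
actions:
  # ... snapshot, AI and notification steps ...
  - delay: 60          # automation stays running for 60 s, so new triggers are ignored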
Here is the edited automation. Be aware, I added some instructions to make it mean. Also, I'm sending the notification via Discord so I can see the images later, since HA iOS notifications are ephemeral. Finally, I modified the template to ignore the "no obvious motion" reply, as it wasn't working well for me.
alias: Camera Snapshot, AI & Notification on Motion
description: ""
triggers:
  - trigger: state
    entity_id:
      - binary_sensor.g4_doorbell_person_detected
    to: "on"
actions:
  - if:
      - condition: []
    then:
      - repeat:
          count: "{{ num_snapshots }}"
          sequence:
            - action: camera.snapshot
              target:
                entity_id: "{{ camera_device }}"
              data:
                filename: ./www/snapshots/{{ camera_path }}_snapshot{{ repeat.index }}.jpg
            - delay:
                milliseconds: 700
      - action: google_generative_ai_conversation.generate_content
        data:
          prompt: "{{ ai_prompt }}"
          image_filename: >
            {% set snap_count = num_snapshots %}
            {% set ns = namespace(images=[]) %}
            {% for i in range(1, snap_count + 1) %}
              {% set image = "./www/snapshots/" ~ camera_path ~ "_snapshot" ~ i ~ ".jpg" %}
              {% set ns.images = ns.images + [image] %}
            {% endfor %}
            {{ ns.images }}
        response_variable: generated_content
      - choose:
          - conditions:
              - condition: template
                value_template: >-
                  {{ 'no obvious motion observed' not in generated_content['text'] | lower }}
            sequence:
              - action: notify.discord_doorbell_channel
                data:
                  message: ""
                  data:
                    embed:
                      title: "{{ motion_name }}"
                      description: "{{ generated_content['text'] }}"
                    images:
                      - /config/www/snapshots/g4_doorbell_high_resolution_channel_snapshot2.jpg
  - delay: 60
variables:
  camera_device: camera.g4_doorbell_high
  camera_name: "{{ state_attr(camera_device, 'friendly_name') }}"
  camera_path: "{{ state_attr(camera_device, 'friendly_name') | lower | replace(' ', '_') }}"
  motion_sensor: binary_sensor.g4_doorbell_person_detected
  motion_name: "{{ state_attr(motion_sensor, 'friendly_name') }}"
  is_ios: true
  num_snapshots: 3
  snapshot_access_file_path: /local/snapshots/{{ camera_path }}_snapshot1.jpg
  ai_prompt: >-
    Motion has been detected, compare and very briefly describe what you see in
    the following sequence of images from my {{ camera_name }} camera, be very
    mean and roast. The answer should be in English, followed by the Spanish
    translation. Do not describe stationary objects or buildings. If you see no
    obvious causes of motion, reply with "Camera has detected motion however no
    obvious motion observed comparing snapshots".
mode: single
max_exceeded: silent
Hi All
I have this working, however I would like the words that are sent in the notification to also be read out on my Google speaker. Is there a simple enough way to do this, please?
thanks
You'll need to add a TTS action (tts.speak service call) with the generated text from the previous action, something like this in the sequence:
action: tts.speak
data:
  cache: true
  message: "{{ generated_content['text'] }}"
  media_player_entity_id: media_player.kids_bedroom
target:
  entity_id: tts.openai_tts_echo
For this ^, you’ll also need to take control of the blueprint so you can customize it.
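If the target is a Google speaker and you use the built-in Google Translate TTS instead of OpenAI, the same pattern applies; both entity IDs below are placeholders, so swap in whatever your own TTS and media player entities are called:

action: tts.speak
data:
  cache: true
  message: "{{ generated_content['text'] }}"
  media_player_entity_id: media_player.living_room_speaker  # placeholder Google speaker
target:
  entity_id: tts.google_translate_en_com  # placeholder; use your Google Translate TTS entity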
This works great; however, it always runs on the previous images, as there is a slight delay while the image files from the Ring doorbell are processed.
I've tried adding a delay before the sequence that processes the images, but that doesn't help.
Is there anything else I can try?
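One thing I haven't tried yet: take more snapshots and space them further apart, so the capture sequence covers the lag in the Ring stream. Using the knobs already in the automation (the values below are guesses to tune):

# Assumes num_snapshots is raised (e.g. to 5) in the variables block.
- repeat:
    count: "{{ num_snapshots }}"
    sequence:
      - action: camera.snapshot
        target:
          entity_id: "{{ camera_device }}"
        data:
          filename: ./www/snapshots/{{ camera_path }}_snapshot{{ repeat.index }}.jpg
      - delay:
          milliseconds: 1500   # longer gap than the original 500 ms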
Thank you for sharing this, with a bit of tweaking it will be great!
Works perfectly, thank you.
Anyone got this working with Google Nest cams?
It appears the snapshot is blank because the Nest cams use WebRTC (see the thread "Snapshots from a Nest camera so I can do object detection"); camera_view has to be set to live to see the stream on a dashboard.
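For reference, that dashboard setting is the card's camera_view option, e.g. on a picture-entity card (the camera entity below is a placeholder):

type: picture-entity
entity: camera.nest_front_door  # placeholder Nest camera entity
camera_view: live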