Needed Google to translate the error - it seems to be "Action Unknown" or something similar.
So I'm making an assumption, but it seems your automation "automation.camera_1_snapshot_on_motion" is referring to / calling a script that doesn't exist. It may be as simple as a missing letter in the name, or the script is called something different.
Many thanks to Aussi_Adam! I have multiple cameras on my system and didn't want multiple sets of automations and scripts to maintain, so I made some slight modifications to the automation and script to handle an arbitrary number of cameras.
Automation:
alias: Unifi Cameras Motion AI SMS Snapshot
description: ""
triggers:
  - trigger: state
    entity_id:
      - binary_sensor.ai_pro_driveway_motion
      - binary_sensor.ai_pro_frontyard_motion
      - binary_sensor.ai_pro_backyard_motion
      # add additional cameras here
    from: "off"
    to: "on"
conditions: []
actions:
  - action: script.camera_snapshot_ai_notification
    metadata: {}
    data:
      camera_id: "{{ device_id(trigger.entity_id) }}"
      camera_name: "{{ device_attr(trigger.entity_id, 'name') }}"
mode: single
And here is the modified script to go along with it:
alias: Camera - Snapshot AI & Notification
sequence:
  - metadata: {}
    data:
      filename: ./www/snapshots/{{ camera_name }}_snapshot1.jpg
    target:
      device_id: "{{ camera_id }}"
    enabled: true
    action: camera.snapshot
  - delay:
      hours: 0
      minutes: 0
      seconds: 0
      milliseconds: 500
    enabled: true
  - metadata: {}
    data:
      filename: ./www/snapshots/{{ camera_name }}_snapshot2.jpg
    target:
      device_id: "{{ camera_id }}"
    enabled: true
    action: camera.snapshot
  - delay:
      hours: 0
      minutes: 0
      seconds: 0
      milliseconds: 500
    enabled: true
  - metadata: {}
    data:
      filename: ./www/snapshots/{{ camera_name }}_snapshot3.jpg
    target:
      device_id: "{{ camera_id }}"
    enabled: true
    action: camera.snapshot
  - metadata: {}
    data:
      prompt: >-
        Motion has been detected, compare and very briefly describe what you see
        in the following sequence of images from my {{ camera_name }} camera.
        What do you think caused the motion alarm? If a person or car is
        present, describe them in detail. Do not describe stationary objects or
        buildings. If you see no obvious causes of motion, reply with "No
        Obvious Motion Detected." Your message needs to be short enough to fit
        in a phone notification.
      image_filename:
        - ./www/snapshots/{{ camera_name }}_snapshot1.jpg
        - ./www/snapshots/{{ camera_name }}_snapshot2.jpg
        - ./www/snapshots/{{ camera_name }}_snapshot3.jpg
    response_variable: generated_content
    action: google_generative_ai_conversation.generate_content
  - if:
      - condition: template
        value_template: "{{ 'No Obvious Motion Detected.' in generated_content.text }}"
    then:
      - stop: ""
    else:
      # The following action and delay are needed for me because of the SMTP
      # integration I'm using for notify. It complains about the MIME type of
      # the snapshot JPEGs. fixup_jpeg calls ImageMagick to re-encode the JPG
      # file, which fixes the problem.
      - action: shell_command.fixup_jpeg
        data:
          image_file: ./www/snapshots/{{ camera_name }}_snapshot2.jpg
      - delay:
          hours: 0
          minutes: 0
          seconds: 0
          milliseconds: 500
      - metadata: {}
        data:
          title: "{{ camera_name }} Motion Detected"
          message: "{{ generated_content['text'] }}"
          data:
            images:
              # SMTP integration fails when using the ./local/ path here for
              # inclusion in the MMS message. Probably ./local/ only works for
              # HTML embedding.
              - /config/www/snapshots/{{ camera_name }}_snapshot2.jpg
        action: notify.family_sms
mode: single
description: ""
I've been using this setup fairly successfully until today, when I started getting 503 errors (basically server not responding) and latency of up to 10 minutes waiting for a reply from Gemini AI. This wasn't me sending too much data; it appears to be a failure on their side.
Either way, if the camera detects motion I'd still like to get a notification, and if the AI can't analyze it, I'll do it myself visually. I don't want my script to sit there for ten minutes doing nothing. Here's roughly what I'd like to do:
Send 3 images to Gemini with the prompt
Wait no more than 10 seconds for an answer
If {{ generated_content == "" }}, send the 2nd image with the message "AI failed"
Otherwise, send the AI-generated text as a message along with the image in a notification.
Is there a way to “wrap” the Gemini AI call in a timeout function or something?
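One idea (untested sketch): move the Gemini call into a small helper script that writes its reply into an input_text, fire that script with script.turn_on so nothing blocks, then wait_template with a 10-second timeout. The input_text.ai_result helper and script.ask_gemini below are made-up names:

- action: input_text.set_value
  target:
    entity_id: input_text.ai_result
  data:
    value: ""
# Fire-and-forget: script.ask_gemini (hypothetical) calls
# google_generative_ai_conversation.generate_content and stores the reply
# text in input_text.ai_result whenever it eventually returns.
- action: script.turn_on
  target:
    entity_id: script.ask_gemini
  data:
    variables:
      camera_name: "{{ camera_name }}"
# Wait up to 10 seconds for the helper to be filled, then carry on either way.
- wait_template: "{{ states('input_text.ai_result') != '' }}"
  timeout: "00:00:10"
  continue_on_timeout: true
- if:
    - condition: template
      value_template: "{{ states('input_text.ai_result') == '' }}"
  then:
    - action: notify.family_sms
      data:
        title: "{{ camera_name }} Motion Detected"
        message: AI failed
        data:
          images:
            - /config/www/snapshots/{{ camera_name }}_snapshot2.jpg
  else:
    - action: notify.family_sms
      data:
        title: "{{ camera_name }} Motion Detected"
        message: "{{ states('input_text.ai_result') }}"
        data:
          images:
            - /config/www/snapshots/{{ camera_name }}_snapshot2.jpg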
Hi all, I've had a similar violation error, but I appealed and I think it's successfully resolved.
I am, though, having problems sending the JPG image to Telegram. It's saving the image and the analysis is being done in Gemini, but I still can't get it to send the image. Any ideas?
Here is my automation:
id: '**xxxxxxx**'
alias: Camera 1 - Snapshot on motion
description: Reolink Camera Snapshot
triggers:
  - type: motion
    device_id: **xxxxxx**
    entity_id: **xxxxxx**
    domain: binary_sensor
    trigger: device
conditions: []
actions:
  - action: script.camera_driveway_1_snapshot_ai_notification
    metadata: {}
    data: {}
mode: single
And here’s the script to go with it:
camera_driveway_1_snapshot_ai_notification:
  alias: Camera - Driveway 1 - Snapshot, AI & Notification
  sequence:
    - metadata: {}
      data:
        filename: ./www/snapshots/driveway1_snapshot1.jpg
      target:
        device_id: **xxxxxxx**
      enabled: true
      action: camera.snapshot
    - delay:
        hours: 0
        minutes: 0
        seconds: 0
        milliseconds: 500
      enabled: true
    - metadata: {}
      data:
        filename: ./www/snapshots/driveway1_snapshot2.jpg
      target:
        device_id: **xxxxxxxx**
      enabled: true
      action: camera.snapshot
    - delay:
        hours: 0
        minutes: 0
        seconds: 0
        milliseconds: 500
      enabled: true
    - metadata: {}
      data:
        filename: ./www/snapshots/driveway1_snapshot3.jpg
      target:
        device_id: **xxxxxxxxx**
      enabled: true
      action: camera.snapshot
    - metadata: {}
      data:
        prompt: >-
          Motion has been detected, compare and very briefly describe what you
          see in the following sequence of images from my driveway camera
          number 1. What do you think caused the motion alarm? If a person or
          car is present, describe them in detail. Do not describe stationary
          objects or buildings. If you see no obvious causes of motion, reply
          with "No Obvious Motion Detected." Your message needs to be short
          enough to fit in a phone notification.
        image_filename:
          - ./www/snapshots/driveway1_snapshot1.jpg
          - ./www/snapshots/driveway1_snapshot2.jpg
          - ./www/snapshots/driveway1_snapshot3.jpg
      response_variable: generated_content
      action: google_generative_ai_conversation.generate_content
    - if:
        - condition: template
          value_template: "{{ 'No Obvious Motion Detected.' in generated_content.text }}"
      then:
        - stop: ""
      else:
        - metadata: {}
          data:
            title: Driveway 1 Motion Detected
            message: "{{ generated_content['text'] }}"
            data:
              file: /local/snapshots/driveway1_snapshot2.jpg
              chat_id: **xxxxxxxx**
          action: notify.homeassistant
  mode: single
  description: Reolink Doorbell
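Not sure if this is the cause, but as far as I know the Telegram notifier wants the image under a photo list (with file/caption) rather than a bare file key, and the path has to be a real filesystem path that is whitelisted in allowlist_external_dirs, not a /local/ URL. A rough sketch of that last action under those assumptions:

- metadata: {}
  data:
    title: Driveway 1 Motion Detected
    message: "{{ generated_content['text'] }}"
    data:
      photo:
        # Real filesystem path, not the /local/ web path
        - file: /config/www/snapshots/driveway1_snapshot2.jpg
          caption: "{{ generated_content['text'] }}"
  action: notify.homeassistant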
Quick heads-up for a setup like this if you're using Cloudflare.
Cloudflare caches certain file types by default, so you might keep getting the same old image sent to your phone if the file name never changes.
It can be fixed in the Cloudflare console by telling it to stop caching *.JPG or whatever file format you are using.
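Another way around it is to make each snapshot's file name unique so nothing upstream can serve a stale copy, e.g. by adding a timestamp in the script (sketch, reusing the templated script above):

- variables:
    # One timestamp per run, reused for every snapshot in this event
    stamp: "{{ now().strftime('%Y%m%d%H%M%S') }}"
- action: camera.snapshot
  target:
    device_id: "{{ camera_id }}"
  data:
    filename: ./www/snapshots/{{ camera_name }}_{{ stamp }}_snapshot1.jpg
# ...and reuse {{ stamp }} in the other snapshot, AI, and notify steps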
I don't believe a 5XX error is anything you can/need to appeal with Google. In web dev it's a server-side error, which means something went wrong on their side. A 4XX error would generally be something about permissions or rate limiting.
What I don't like is that there's no timeout on this integration - last time this happened the script sat for 10 minutes. Since I have it in single mode, no other iterations were allowed to run. I wonder whether, if I add a delay to the automation and allow the script to run in parallel mode (instead of single), I might get notifications without the entire thing hanging.
This is brilliant, thank you. You’ve helped me reduce 8 automations and identical scripts down to one of each and made me slightly less terrible at script variables and templating at the same time.
I want to add that I’m using script.turn_on instead of calling the script directly because I’m experimenting with workarounds for when Gemini is unresponsive. I’m calling the automation in queued mode with a 5 second delay (for rate-limiting) and the script in parallel mode so if one fails to run there’s a chance the next one will work. Supposedly using the script.turn_on means the automation won’t wait for the script to finish before running again (I’ve only read that, but haven’t tested it).
The YAML to pass variables to the script with the script.turn_on call is slightly different:
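Roughly like this (same camera_id / camera_name variables as in the automation above):

actions:
  - action: script.turn_on
    target:
      entity_id: script.camera_snapshot_ai_notification
    data:
      variables:
        camera_id: "{{ device_id(trigger.entity_id) }}"
        camera_name: "{{ device_attr(trigger.entity_id, 'name') }}"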
Got the same error today:
Error: Error generating content: 429 Quota exceeded for quota metric ‘Generate Content API requests per minute’ and limit ‘GenerateContent request limit per minute for a region’ of service ‘generativelanguage.googleapis.com’ for consumer ‘project_number:1034246050621’. [reason: “RATE_LIMIT_EXCEEDED” domain: “googleapis.com” metadata { key: “service” value: “generativelanguage.googleapis.com” } metadata { key: “quota_metric” value: “generativelanguage.googleapis.com/generate_content_requests” } metadata { key: “quota_location” value: “europe-west4” } metadata { key: “quota_limit” value: “GenerateContentRequestsPerMinutePerProjectPerRegion” } metadata { key: “quota_limit_value” value: “0” } metadata { key: “consumer” value: “projects/1034246050621” } , links { description: “Request a higher quota limit.” url: “View and manage quotas | Cloud Quotas | Google Cloud” } ]
Result:
I only have 1 camera and I have only had 20 responses. I have had the blueprint installed for only 3 days, and the camera triggers 6 or 7 times a day?
I got my project reinstated by Google after 2 appeals, but now I’m getting “[google.api_core.retry] Retrying due to 503 The model is overloaded. Please try again later.” errors practically every time I run my test script.
Perhaps Google Generative AI has become overloaded? I’ve read on SO that switching to Vertex AI has helped people get past this, but I’m not sure if/how we can even do that.
I have 1 question: based on the results from Google, is it possible to run an automation to flash lights or something? I just left my dog in the garden for 30 minutes while making dinner and didn't look at my phone to see the notifications about a dog in the garden…
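Something like this in the else: branch of the script might do it (a rough sketch; light.garden is a made-up entity, and it only works if the AI's reply mentions the dog):

- if:
    - condition: template
      # Flash a light whenever the AI's description mentions a dog
      value_template: "{{ 'dog' in generated_content.text | lower }}"
  then:
    - action: light.turn_on
      target:
        entity_id: light.garden
      data:
        flash: long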
Just FYI. The 429 error is because Google’s servers are being used by premium clients. They have “provisioned throughput subscriptions” and such.
I have billing enabled, am paying to use Gemini, and am also getting 429. I've been getting it since 11:30 AM and it is now 4:30 PM.
If you don't have a Provisioned Throughput subscription and resources aren't available to your application, then an error code 429 is returned. Although you don't have reserved capacity, you can try your request again.
Yes, I’m getting the same even though I enabled billing for my project. I sent one request to Gemini and got a 429 a few hours ago. This is now completely useless. I have figured out a way to bypass the AI and send me the image with an alert saying “No AI Available” in a pinch (I’ll post that this afternoon in case it helps anyone), but what are people using as an alternative? I’ve seen OpenAI mentioned but I haven’t looked into it yet.
I'd be perfectly happy to pay for my minuscule usage of a service like this, but I'd like it to actually work.
Set up an API key for OpenAI/ChatGPT, put a few dollars into a billing account, and so far it's working GREAT. It didn't require too many tweaks to the existing script - just changing the path for the images sent to the AI, and generated_content['text'] is now generated_content['response_text']. The beauty of how this was set up by people above is that I only had to alter the script; the automation/triggers are the same.
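For reference, the Gemini action in the script becomes something along these lines (sketch from memory; I believe the OpenAI action takes filenames instead of image_filename and wants full /config/www/ paths, and config_entry below is just a placeholder):

- action: openai_conversation.generate_content
  data:
    config_entry: YOUR_OPENAI_CONFIG_ENTRY_ID  # placeholder
    prompt: >-
      (same motion-description prompt as above)
    filenames:
      - /config/www/snapshots/{{ camera_name }}_snapshot1.jpg
      - /config/www/snapshots/{{ camera_name }}_snapshot2.jpg
      - /config/www/snapshots/{{ camera_name }}_snapshot3.jpg
  response_variable: generated_content
# and later in the notify step: "{{ generated_content['response_text'] }}"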
Thrilled to have a working AI image analyzer again and have no issues paying for it - I think I’ve sent it around 10-15 images so far for testing and I’ve spent about 4 cents.
Plus, OpenAI lets you implement true billing caps, unlike Google, so you can really cap your spend instead of hoping something doesn’t go awry while you’re on vacation, etc.
At first I wasn’t sure about adding another layer to this (with another integration) but having an abstraction layer between my script and the AI means it should be easier to switch AI providers later if (ahem) someone manages to buy OpenAI and turn it into something that doesn’t work.