I was just screwing around with it and realized you no longer need the confidence checks since they can be done directly in the integration config. I removed both and it worked as expected.
Works great.
Thank you
I can’t seem to figure out what I am missing, but whenever I try to update to anything higher than version 1.8 (in HACS) of the Amazon Rekognition integration, my rekognition image_processing entities become unavailable. 1.8 is fine; 1.9 and higher, no dice. This is my config.yaml entry:
- platform: amazon_rekognition
  aws_access_key_id: ****
  aws_secret_access_key: ****
  region_name: us-east-1
  scan_interval: 604800
  source:
    - entity_id: camera.dwc
Any ideas on what I am missing?
Can you send a log from the rekognition integration? There’s probably an error thrown in there.
Add to configuration.yaml:
logger:
  default: info
  logs:
    custom_components.amazon_rekognition: debug
Restart HA.
Then read the log for an error related to amazon_rekognition:
HA web interface > Developer Tools > Logs
Scroll to the bottom and click the “Load full HA log” button.
Then use find-in-page (Ctrl-F) to search for “rekog”.
Here are the lines from my logs:
2020-05-07 22:17:51 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for amazon_rekognition which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2020-05-07 22:17:51 INFO (MainThread) [homeassistant.components.image_processing] Setting up image_processing.amazon_rekognition
2020-05-07 22:17:51 INFO (MainThread) [homeassistant.components.image_processing] Setting up image_processing.amazon_rekognition
2020-05-07 22:17:51 DEBUG (SyncWorker_5) [custom_components.amazon_rekognition.image_processing] boto_retries setting is 5
2020-05-07 22:17:51 INFO (MainThread) [homeassistant.components.image_processing] Setting up image_processing.amazon_rekognition
2020-05-07 22:17:51 DEBUG (SyncWorker_4) [custom_components.amazon_rekognition.image_processing] boto_retries setting is 5
2020-05-07 22:17:51 DEBUG (SyncWorker_2) [custom_components.amazon_rekognition.image_processing] boto_retries setting is 5
2020-05-07 22:17:51 INFO (SyncWorker_5) [custom_components.amazon_rekognition.image_processing] boto3 client failed, retries=0
I see where it failed, but I’m not sure what to do about it (except revert to 1.8) : )
That was on version 2.2 by the way.
@jeremeyi, there should be more lines after boto3 client failed, retries=0
For example, boto3 client failed, retries=1
(and possibly =2, =3, =4)
Please include ALL log lines that contain rekog.
I just updated to v2.2 and it works ok for me. Here’s my config:
bash-5.0# cat image_processing/amazon_rekognition_all_cams.yaml
- platform: amazon_rekognition
  aws_access_key_id: !secret aws_access_key
  aws_secret_access_key: !secret aws_secret_key
  region_name: us-west-1
  save_file_folder: /config/www/camera_snapshot/
  #targets:
  #  - person
  confidence: 60
  scan_interval: 604800
  source:
    - entity_id: camera.cam_bkyrd
      # Note, you don't want to define `name` here because the entity_id will
      # no longer be in cam_abcde format! It would be `backyard_cam`
      #name: Backyard cam
    - entity_id: camera.cam_drvwy
    - entity_id: camera.cam_entry
As mentioned in an earlier post, I recommend using just one .yaml file and adding all cameras like so:
source:
  - entity_id: camera.cam_bkyrd
  - entity_id: camera.cam_drvwy
  - entity_id: camera.cam_entry
If you want to include the confidence % from Rekognition in your notification (like I do), then be aware that aws_data.attributes.Person (from the original post of this thread, now removed) is now aws_data.attributes.person. Note the lowercase p.
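For example, a Node-RED template referencing it would now look like this (a minimal sketch; only the aws_data.attributes.person path comes from this thread, the surrounding text is illustrative):
Person confidence: {{ aws_data.attributes.person }}%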
Nope, did it again; that is all of the log lines referring to rekog. BUT (and this is my bad) I didn’t notice that the newer versions changed the entity id from image_processing.rekognition_person_dwc to image_processing.rekognition_dwc. So it does look like it was working the whole time; I just didn’t catch the renamed entities. Now I’ll work on combining the multiple sources into one config and moving the confidence into the config. Thanks for taking the time to look into it.
v2.3 of HASS-amazon-rekognition was released today and it has a cool optional feature:
Region of Interest, which allows you to define a ROI box that is smaller than the camera view.
Only ‘person’ (or ‘car’ or your other target) objects within the ROI will be counted and returned as state.
This is great because I often get >90% person detections in areas that I don’t care about and don’t want alerts for. Ditto for target ‘car’.
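Here is a minimal sketch of what a per-camera ROI config could look like, reusing my config from above. The roi_* option names and the 0.0-1.0 fraction convention are my assumption here; check the HASS-amazon-rekognition README for the exact names and defaults:
- platform: amazon_rekognition
  aws_access_key_id: !secret aws_access_key
  aws_secret_access_key: !secret aws_secret_key
  region_name: us-west-1
  save_file_folder: /config/www/camera_snapshot/
  targets:
    - person
  # Assumed ROI options, as fractions of frame width/height (0.0-1.0):
  roi_x_min: 0.2
  roi_y_min: 0.5
  roi_x_max: 0.9
  roi_y_max: 1.0
  scan_interval: 604800
  source:
    - entity_id: camera.cam_drvwy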
In Node-RED, the entity (i.e. aws_data) looks like the attached screenshot; in that Rekognition output, you can see that one person was detected OUTSIDE the ROI.
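For reference, the piece of that output the next function node reads is aws_data.attributes.objects, whose entries look roughly like this (only the name and confidence fields are used by the code below; any other fields are omitted here):
{
  "attributes": {
    "objects": [
      { "name": "person", "confidence": 92.56 },
      { "name": "person", "confidence": 83.2 }
    ]
  }
}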
I wrote a NR function-node to extract the ‘person’ confidence values (e.g. “92.56,83.2” if two persons were detected in the ROI) and stuff them into the payload for a downstream iOS notification node to include in the notification. The function-node is named “Extract confidence” and its upstream node is “Person check”, just like in this thread’s original post.
// Start of function node code.
// Return the confidence for 'person' objects; undefined for everything else.
function get_confidence(obj) {
    if (obj["name"] == "person") {
        return obj["confidence"];
    }
}

const arr = msg.aws_data.attributes.objects;

// Construct a comma-separated string of confidence values,
// e.g. "92.56,83.2" when two persons were detected in the ROI.
const confidences = arr.map(get_confidence).filter(c => c !== undefined).join();

msg.payload.person_confidences = confidences;
return msg;
// End of function node code.
Separating cameras into yaml files (again):
Oh, and because the ROI is unique to each camera, I had to go back to having a separate yaml file for one of my cameras. My other cameras do not have a user-defined ROI and are in one file together.
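For reference, this split-file setup works with a directory include in configuration.yaml; a minimal sketch, with a hypothetical file name for the ROI camera (each file contains a list item, hence the merge_list include):
image_processing: !include_dir_merge_list image_processing/

bash-5.0# ls image_processing/
amazon_rekognition_all_cams.yaml
amazon_rekognition_roi_cam.yaml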
Nice! I’ll have to play around with that a bit more.
If you don’t see bounding boxes around objects with “person”, “car”, labels in your notifications, you may want to read this.
The original how-to sends the pre-Rekognition image to Pushover (for iOS notifications, read on). It doesn’t send the image with the object bounding boxes returned by Amazon Rekognition; those bounding boxes are written to a new image by the HASS-amazon-rekognition component.
The HASS-amazon-rekognition component creates .jpg files with this filename format:
rekognition_<entity_name>_latest.jpg
e.g. rekognition_cam_abc_latest.jpg
For the original post in this thread, this maps to:
"rekognition_" + msg.data.base_id + "_latest.jpg"
So the pushover preparation (function node) would look something like:
// Build the Pushover notification fields.
msg.payload = "Sensor: " + msg.data.new_state.attributes.friendly_name + "\n" + "Person Confidence: " + msg.person_confidence + "%";
msg.topic = "Person Detected";
msg.device = "YourPushoverDeviceName";
msg.priority = 1;
// Attach the snapshot that Rekognition annotated with bounding boxes.
msg.image = "/config/www/camera_snapshot/rekognition_" + msg.data.base_id + "_latest.jpg";
return msg;
Assuming you configured amazon_rekognition to write files to the same directory:
save_file_folder: /config/www/camera_snapshot/
For iOS notifications, the call_service node (domain=notify, service=mobile_app_myphone) has this Data field:
{
  "message": "Person {{ payload.person_confidences }}%: {{ payload.friendly_name }}",
  "data": {
    "attachment": {
      "url": "https://yourdomain.duckdns.org:12345/local/camera_snapshot/rekognition_{{ payload.friendly_name }}_latest.jpg"
    }
  }
}
Where a previous node copied msg.data.base_id to msg.payload.friendly_name.
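That copy can be a one-line function node placed before the call_service node; a minimal sketch:
// Make the camera's base id available to the notification
// templates above as {{ payload.friendly_name }}.
msg.payload.friendly_name = msg.data.base_id;
return msg;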
Now my notifications show the ROI (green box) and red/yellow boxes of all detected objects.
Thanks again for all of the work you’ve put into updating my instructions. I wrote this a while ago and never changed it as the component was updated. At some point I’ll need to overhaul these.
Instructions updated!
You can update them again now, as the plugin is now event-based and supports multiple targets.
So an example flow of detecting cars and persons would be welcome
So which one is the most recent one? And what are the diffs with the old ones…
For Blue Iris Video Stream tweaks, you can go here to avoid reducing your image quality.
I am still new to Home Assistant and Node-RED; however, instead of increasing the quality of the video stream from 50% to 100% as outlined in the post above, the URL below will give you an image snapshot at 100% quality. You would still need to adjust the Resize output frame width setting and maybe zero frame latency…
http://<host>:<port>/image/<camera-short-name>?q=100
e.g.
http://192.168.2.216:81/image/FP1?q=100
How would I set that up instead of using your code to convert from mjpg to jpg?
Here’s some good news from the developer of Blue Iris, post 12.
How do I get the snapshot part to work? I followed the instructions in the original post, but I am not seeing the snapshot generate, and I cannot see how to trigger the snapshot.
The upstream node (usually Template type) from the Snapshot node sets the entity-id of the camera that is to do the snapshot. In my flow the upstream Template node is:
{
  "data": {
    "entity_id": "{{data.camera}}",
    "filename": "/config/www/camera_snapshot/snapshot_{{data.base_id}}.jpg"
  }
}
One way to troubleshoot this is to put a Debug node after the Template node and check the value of msg.data.camera (the entity-id). Then ensure that entity-id matches your camera’s entity-id.
Then, check that /config/www/camera_snapshot/ is a legit directory and see what files are there after manually triggering a camera motion event with Developer Tools > MQTT.
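For example, in Developer Tools > MQTT you can publish a test message to whatever topic your flow’s MQTT-in node subscribes to. The topic and payload below are purely hypothetical; substitute whatever your Blue Iris motion alerts actually send:
Topic: blueiris/cam_abc/motion
Payload: on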