That feature is in the deepstack integration; I just need an hour to move it over to rekognition.
@robmarkcole thanks for the update. I have been playing with it. Just a comment - probably my fault, but the sensor only shows 1 for the first target; for the second and following targets it shows 0. I mean, if the first target is “person” and the second one is “animal”, the sensor shows 1 when it identifies a person and marks it with the bounding box, but for the “animal” target (the second one) the sensor is always 0. It looks like only the first target is taken into account.
Done. Thanks.
I just released v2.3 which adds the ability to filter objects using a region of interest (ROI). I am using this to detect when a car is parked in a specific part of the image, and to ignore the rest of the image. Check it out!
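Roughly, the config for that use case looks something like the sketch below - the roi_x_min / roi_x_max / roi_y_min / roi_y_max keys (fractions of the image) are assumed to match the deepstack-style options, and the camera/secret names are placeholders, so check the README for the exact option names:

image_processing:
  - platform: amazon_rekognition
    aws_access_key_id: !secret aws_access_key_id
    aws_secret_access_key: !secret aws_secret_access_key
    region_name: eu-west-1
    targets:
      - car
    # ROI as fractions of the image - key names assumed, see the README
    roi_x_min: 0.35
    roi_x_max: 0.80
    roi_y_min: 0.40
    roi_y_max: 0.95
    save_file_folder: /config/amazon-rekognition/
    source:
      - entity_id: camera.driveway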
EDIT: The readme has been updated and answers my questions.
Wow, so the Confidence value applies only to the ROI box?
And you can have a ROI for ‘car’ and a different ROI for ‘person’?
I gotta check this out
For multiple ROIs you need to configure multiple entities
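A minimal sketch of what that looks like, assuming the same roi_* option names as above (placeholder camera and secret names):

image_processing:
  - platform: amazon_rekognition
    aws_access_key_id: !secret aws_access_key_id
    aws_secret_access_key: !secret aws_secret_access_key
    region_name: eu-west-1
    targets:
      - car
    # ROI covering the parking spot
    roi_x_min: 0.10
    roi_x_max: 0.60
    roi_y_min: 0.50
    roi_y_max: 1.00
    source:
      - entity_id: camera.driveway
  - platform: amazon_rekognition
    aws_access_key_id: !secret aws_access_key_id
    aws_secret_access_key: !secret aws_secret_access_key
    region_name: eu-west-1
    targets:
      - person
    # a different ROI covering the path to the door
    roi_x_min: 0.30
    roi_x_max: 0.90
    roi_y_min: 0.20
    roi_y_max: 0.90
    source:
      - entity_id: camera.driveway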
Ah ha. And there’s the use-case for having multiple yaml files: One of my cams will have a small ROI, but the other cams will not specify a ROI.
cc: @jeremeyi
I have it running, but I’m not seeing the green ROI box in the image. The person object is within the ROI and has a red bounding box (expected). I will continue to test when it is daytime.
An ROI must be configured - it is not shown by default.
I see what I did. I was sending the original image in my notification (the same image that’s sent to Rekognition). Setting save_timestamped_file to True helped me diagnose this. All of the JPGs written by HASS-amazon-rekognition have the red, green, and yellow boxes. I just need to send the ‘latest’ jpg in my notification.
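For anyone else doing this, roughly what my notification automation ended up as - the entity name, notify service and exact ‘_latest’ filename are assumptions about my own setup (and save_file_folder needs to be under /config/www/ for /local/ to serve the image), so adjust to yours:

automation:
  - alias: Notify with latest rekognition image
    trigger:
      - platform: state
        entity_id: image_processing.rekognition_front_door   # placeholder entity
    condition:
      - condition: numeric_state
        entity_id: image_processing.rekognition_front_door
        above: 0
    action:
      - service: notify.mobile_app_my_phone   # placeholder notify service
        data:
          message: Person detected
          data:
            # the 'latest' (non-timestamped) jpg written by the integration - filename assumed
            image: /local/rekognition/rekognition_front_door_latest.jpg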
Very cool!
Release v2.5 adds back in the labels as an attribute, and squashes a bug. Events are now also published.
I have 2 projects using this integration. I am interested to know what other people are monitoring - please share info if you think it’s interesting/unusual.
- Check when there is a car outside my house - get a notification when my spot is taken or freed!
- Count the birds nesting in my loft, an endangered species called the Swift
I’ve updated my deepstack integration so it shares identical config with this rekognition integration (apart from auth of course). So if you want to try out deepstack in place of rekognition it’s very straightforward. Comparison image below:
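For reference, a rough sketch of the swapped config (the deepstack server address and camera are placeholders; option names per the HASS-deepstack-object README):

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.100   # your deepstack server
    port: 5000
    save_file_folder: /config/amazon-rekognition/   # can point at the same folder
    save_timestamped_file: True
    targets:
      - car
      - person
    source:
      - entity_id: camera.driveway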
wow this is amazing!!! Which one is better or more accurate on an i7 NUC? I can’t wait to try it!!
Well Rekognition is running on AWS servers so the local machine is irrelevant
First of all, thanks for this great component!!
I have a problem with the folder watcher on the saved result file. There is an automation “display carport ai” which triggers a message when the file is modified, but for some reason it triggers 3 times when there is only 1 result file.
Below is the log with the multiple triggers:
2020-05-26 19:58:29 INFO (SyncWorker_10) [custom_components.amazon_rekognition.image_processing] Rekognition saved file /config/amazon-rekognition/rekognition_carport_process_2020-05-26_19:58:29.jpg
2020-05-26 19:58:29 INFO (MainThread) [homeassistant.components.automation] Executing display carport ai
2020-05-26 19:58:29 INFO (MainThread) [homeassistant.components.automation] display carport ai : Running script
2020-05-26 19:58:29 INFO (MainThread) [homeassistant.components.automation] display carport ai : Executing step call service
2020-05-26 19:58:29 INFO (MainThread) [homeassistant.components.automation] display carport ai : Executing step call service
2020-05-26 19:58:29 INFO (MainThread) [homeassistant.components.automation] Executing display carport ai
2020-05-26 19:58:29 INFO (MainThread) [homeassistant.components.automation] display carport ai : Executing step call service
2020-05-26 19:58:29 INFO (MainThread) [homeassistant.components.automation] Executing display carport ai
2020-05-26 19:58:29 INFO (MainThread) [homeassistant.components.automation] display carport ai : Executing step call service
2020-05-26 19:58:29 INFO (MainThread) [homeassistant.components.automation] display carport ai : Executing step call service
2020-05-26 19:58:29 INFO (MainThread) [homeassistant.components.automation] display carport ai : Executing step call service
I think temporary files are created during the file save process, or somehow the file is modified several times, so folder_watcher can be triggered multiple times. I didn’t find a nice solution to this yet.
Ok, I fixed it by listening for the event instead of the files changes.
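In case it helps someone else, roughly what I ended up with - I’m not 100% sure of the exact event type the integration fires, so treat the name below as an assumption and verify it with the Events tool under Developer Tools:

automation:
  - alias: display carport ai
    trigger:
      - platform: event
        event_type: rekognition.file_saved   # assumed event name - verify before using
    action:
      - service: persistent_notification.create
        data:
          title: Carport AI
          message: New rekognition result image saved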
Hello everyone!
I’m using this integration to determine what is on my kitchen table. I scan the kitchen table each morning and evening, and if it’s dirty, I get a notice to clean it up. The problem is that none of the objects set as a target are displayed. I can only see certain labels in the attributes. What could be the problem?
image_processing:
  - platform: amazon_rekognition
    # secret names are placeholders - reference your own entries in secrets.yaml
    aws_access_key_id: !secret aws_access_key_id
    aws_secret_access_key: !secret aws_secret_access_key
    region_name: eu-west-1
    save_timestamped_file: True
    save_file_folder: /config/tmp/
    targets:
      - wood
      - plywood
      - nature
      - white board
      - texture
      - white
      - electronics
    source:
      - entity_id: camera.kitchen_table