How is everyone using this component? Looking for ideas (with the limit of 5000).
Do you have control over the accuracy? Like, can you increase the probability threshold from 50% to something like 80%? I’m asking because I’m getting 1 person detected when there’s nothing but my tree’s shadow moving on the ground…
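The underlying AWS DetectLabels API supports a minimum-confidence threshold, and the custom component may expose it as a confidence option; the key name below is an assumption, so check the component’s README before relying on it. A minimal sketch of raising the threshold to 80%:

image_processing:
  - platform: amazon_rekognition
    aws_access_key_id: xxx
    aws_secret_access_key: yyy
    region_name: eu-west-1
    confidence: 80 # assumed option: only count detections at or above 80% confidence
    target: Person
    source:
      - entity_id: camera.drivecam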
How does one create multiple entities with different targets?
You need to add multiple entries for the component config
Thanks for replying,
I did already try this, but only the last entity gets generated by Hass.io:
image_processing:
  - platform: amazon_rekognition
    aws_access_key_id: xxx
    aws_secret_access_key: yyy
    region_name: eu-west-1 # optional region, default us-east-1
    target: Person # Optional target object, default Person
    source:
      - entity_id: camera.drivecam
  - platform: amazon_rekognition
    aws_access_key_id: xxx
    aws_secret_access_key: yyy
    region_name: eu-west-1 # optional region, default us-east-1
    target: Animal # Optional target object, default Person
    source:
      - entity_id: camera.drivecam
Possibly a bug in HA, have you checked the issues?
Sorry, it suddenly started working now. Thank you once again!
It’s a miracle!
Hehehe, I think maybe suddenly I started working again…
This looks like a good solution for those of us using a Raspberry Pi; I’ll have a go at this over the weekend.
The Deepstack component looks promising once we can run it on the Raspberry Pi.
Have run into the issue where none of my cameras have a working still_image, so it seems there is no image I can pass to Rekognition.
I’m running Wyze cameras with the RTSP firmware, and a Xiaofang v2 with the hacked RTSP firmware; neither provides a still image I can use. I have tried the camera.snapshot service, but that just captures the still image that is provided (in my case, nothing).
Edit: actually, via the Synology camera component I get still images (but no stream). It seems to work well so far in my basic testing!
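For RTSP-only cameras, one possible workaround (a sketch, not something from this thread) is the ffmpeg camera platform, which generates still images from the stream that image_processing can then consume; the stream URL below is a placeholder for your camera’s actual RTSP address.

camera:
  - platform: ffmpeg
    name: drivecam_ffmpeg
    input: rtsp://user:pass@192.168.1.50:554/live # placeholder RTSP URL for your camera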
I’ve had this working perfectly for spotting people, so I’ve added it to another camera to spot if my baby is in his crib. Strangely it says he isn’t even though it identifies a baby being in the room.
Any ideas why?
You have configured target: Baby?
Hi Rob,
Yes, exactly the same config as I did for Person, but with Baby as the target:
- platform: amazon_rekognition
  aws_access_key_id: xxx
  aws_secret_access_key: zzz
  region_name: eu-west-1 # optional region, default us-east-1
  target: Baby # Optional target object, default Person
  source:
    - entity_id: camera.nurserycam
I think there might be a bug in the image processing integration. Can you disable your person detector, leaving only the baby detector, and see if it works then?
That is strange, no idea
I guess I can work around it by just getting a list of the detected objects and ignoring the state value. Has anyone done this?
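One way this could look (a sketch only; the entity id and the labels attribute name are assumptions, so check Developer Tools > States for what your rekognition entity actually exposes) is a template sensor that reads the attribute instead of the state:

sensor:
  - platform: template
    sensors:
      nursery_detected_objects:
        friendly_name: Nursery detected objects
        value_template: >-
          {{ state_attr('image_processing.rekognition_baby_nurserycam', 'labels') }}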
Need some help with this please. I am not sure what I am missing. I get this error.
“An error occurred (UnrecognizedClientException) when calling the DetectLabels operation: The security token included in the request is invalid.”
My configuration.yaml looks like this:
image_processing:
  - platform: amazon_rekognition
    aws_access_key_id: AWS_ACCESS_KEY_ID
    aws_secret_access_key: AWS_SECRET_ACCESS_KEY
    target: Car # Optional target object, default Person
    source:
      - entity_id: camera.front_door
To get the AWS ID and secret, I logged into AWS and went to the IAM dashboard. I added a user and added them to a group that has the AmazonRekognitionFullAccess and AmazonRekognitionServiceRole policies. Then, on that user’s Security credentials tab, I created an access key. I use that access key ID as the AWS_ACCESS_KEY_ID and the secret key generated with it as the AWS_SECRET_ACCESS_KEY.
I am obviously missing something. Can someone point me in the correct direction?
Thanks
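For what it’s worth (an observation rather than a confirmed fix): that UnrecognizedClientException usually means the key pair sent to AWS is not valid, so it is worth making sure the placeholder strings in the config are replaced with the actual values generated in IAM. Using AWS’s documentation example credentials purely to illustrate the format:

image_processing:
  - platform: amazon_rekognition
    aws_access_key_id: AKIAIOSFODNN7EXAMPLE # your real access key ID from IAM
    aws_secret_access_key: wJalrXUtnFYEMI/K7MDENG/bPxRfiCYEXAMPLEKEY # your real secret access key
    target: Car
    source:
      - entity_id: camera.front_door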