Did it on a HassOS install.
Went through the setup but it is showing as , anything I might be doing wrong?
I had to change the port number because 5000 was already taken.
Been running on a Synology NAS and using Node-RED for alerting. I only used a single camera previously, but I'm now expanding to all the cameras outside (we had our first sunny day and the motion detection was going nuts with the shadows).
It seems a better way to go than setting scan_interval is to use the motion trigger, then check whether a person is detected before sending an alert. Going to give that a whirl and report back.
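Something like the following automation is what I have in mind. This is only a sketch: the entity IDs and the notify service are placeholders, and the image_processing entity name depends on your own DeepStack setup.

```yaml
# Sketch only: entity IDs and the notify service are placeholders.
automation:
  - alias: "Motion scan, alert only when a person is found"
    trigger:
      - platform: state
        entity_id: binary_sensor.outside_motion
        to: "on"
    action:
      # Run DeepStack on demand instead of polling with scan_interval
      - service: image_processing.scan
        entity_id: image_processing.person_detector
      # Only continue if at least one person was detected
      - condition: numeric_state
        entity_id: image_processing.person_detector
        above: 0
      - service: notify.mobile_app
        data:
          message: "Person detected outside"
```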
Thanks for sharing, SUPER useful and great implementation for non AI folks to pick up!
My problem is that when motion is triggered, the image scan in “Deepstack People Scan” does not use the image saved by camera.snapshot in “Get picture from camera”; it uses the picture that was created the previous time.
But if I then manually push the timestamp trigger, the image scan will use the picture that was just saved by the camera.snapshot service.
I have tried using a delay so that the file should have plenty of time to get saved to disk, but it did not change anything.
What could be wrong, and how should I change the flow?
I have installed DeepStack on a Hyper-V instance of Ubuntu, with hass.io installed.
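For reference, the action sequence I'm describing looks roughly like this. Paths and entity IDs are just examples from my setup, not a recommendation:

```yaml
# Rough sketch of the flow: snapshot, wait for the file to hit disk, scan.
action:
  - service: camera.snapshot
    entity_id: camera.front_door
    data:
      filename: /config/www/deepstack/front_door.jpg
  - delay: "00:00:02"   # the delay I tried; it did not help
  - service: image_processing.scan
    entity_id: image_processing.person_detector
```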
Did some more testing, and it seems that the image processing is only done when I push the timestamp button. Look at the time on this image: the component in Home Assistant was last updated at 19:31, not 19:38.
So DeepStack is processing an image from a file that doesn’t exist at the time DeepStack is called. Perhaps try deleting the old file (the 19:31 one) and then running the automation to be sure.
Can I ask what made you take the camera.snapshot and then image_processing.scan that image, versus using the image_processing entity that is created and scanning that? Or is the snapshot purely for sending in the notification?
With the config below I get an entity called image_processing.location_person, and I scan that, but I'm equally interested to see whether I can scan the image that was captured on motion.
What I did was first take the snapshot and save it as a local file, and then use that local file as the source for the image_processing.scan (person_detector).
That could also be done using image_processing.scan.
But I don't know if I would gain any advantage doing it that way, since the notification would not use the same image as the image_processing.scan. That would make it a bit harder to debug.
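For anyone wanting to try the same thing, the wiring is roughly this: a local_file camera pointing at the snapshot, used as the DeepStack scan source. This assumes the deepstack_object custom component; host, port, and paths below are examples, so adjust them to your own setup:

```yaml
# local_file camera that always shows the latest snapshot
camera:
  - platform: local_file
    name: deepstack_snapshot
    file_path: /config/www/deepstack/latest.jpg

# DeepStack object detection scanning that camera
# (adjust ip_address/port to wherever your DeepStack server runs)
image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 5000
    source:
      - entity_id: camera.deepstack_snapshot
        name: person_detector
```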
Been trying the DeepStack Docker image for a few days now.
It does a pretty good job of detecting the number of persons (when motion is detected, the event is triggered, including a Telegram message with the number of persons detected).
It does, however, take quite some time for processing, but maybe that is down to the resources on the running host?
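The Telegram part is just a template pulling the count from the DeepStack entity state. Entity and service names below are examples from my setup:

```yaml
# Example notify action; the entity state holds the person count
- service: notify.telegram
  data:
    message: "Motion detected: {{ states('image_processing.person_detector') }} person(s) in view"
```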