Thanks Rob! That’s a fantastic component, almost exactly what I’m looking for without killing my Pi. Awesome job!
Any suggestions on how I could implement this?
I have one camera with MotionEye: whenever there is motion it uploads an image to Google Drive and triggers an IFTTT rich notification that grabs the current still picture of my Generic Camera (using the HTTP post action from within MotionEye to an IFTTT webhook).
My issue is the number of false positives. Sometimes a light, my robot vacuum or even a ghost triggers my alarm and I get flooded with useless notifications.
I’m thinking of an automation that triggers on an HTTP request (the motion-detected HTTP post action from the MotionEye add-on) and whose action calls the Scan service of Sighthound. That way Sighthound only runs when motion is actually detected, saving some resources on my Pi as well as API calls.
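Roughly, I picture that first automation looking something like this (untested; the webhook id and entity names are placeholders from my setup):

```yaml
# MotionEye would post to http://<ha-host>:8123/api/webhook/motioneye_motion
- alias: Scan on MotionEye motion
  trigger:
    - platform: webhook
      webhook_id: motioneye_motion
  action:
    - service: image_processing.scan
      entity_id: image_processing.sighthound_generic_camera  # placeholder name
```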
Then a second automation, triggered by the Detect_persons event from Sighthound, whose action is an HTTP post to the IFTTT webhook, so I get a rich notification with a picture of the event that triggered it.
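Something along these lines, perhaps (the event name is my guess, so check the component docs, and the IFTTT applet event is a placeholder; this assumes the ifttt integration is configured):

```yaml
- alias: Notify on person detected
  trigger:
    - platform: event
      event_type: sighthound.detect_persons  # guessed event name, verify in the docs
  action:
    - service: ifttt.trigger                 # provided by the ifttt integration
      data:
        event: person_detected               # IFTTT applet event name (placeholder)
        value1: "Person detected by Sighthound"
```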
Now what I haven’t figured out is how to get the picture in which Sighthound actually detected the person. There will be a small delay while Sighthound processes the image and fires the detect_persons event that triggers my second automation, and since that automation grabs the current still picture of my Generic Camera, which changes every second, by the time it runs the snapshot will be a few seconds late. If someone is just walking past the camera (a ~3-5 second window), I’ll probably miss them entirely, so I’d like to retrieve the exact picture Sighthound processed. Any suggestions are welcome.
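The only workaround I’ve come up with so far (untested, and the paths are just examples) is to freeze the frame to disk in the first automation, right before calling Scan, so the notification can point at the same image Sighthound analysed:

```yaml
  action:
    - service: camera.snapshot
      data:
        entity_id: camera.generic_camera       # placeholder for my Generic Camera
        # the folder may need adding to whitelist_external_dirs; the file would
        # then be served at http://<ha-host>:8123/local/last_motion.jpg
        filename: /config/www/last_motion.jpg
    - service: image_processing.scan
      entity_id: image_processing.sighthound_generic_camera
```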
@gurbina93 I’ve not used Sighthound in production myself yet, so I can’t advise. Perhaps others have suggestions? Also, we have a separate thread for Sighthound here.
Thanks
I’m not sure if it helps anyone, but I’ve got TensorFlow working with a bunch of components on Ubuntu running Docker. I’m not using a Pi. Here’s the guide. It’s not as complicated as the guide makes it look. I just documented everything as I was getting it working.
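The core of it is just the official Home Assistant image running under Docker; a minimal compose file along these lines (the config path is an example):

```yaml
# docker-compose.yml
version: "3"
services:
  homeassistant:
    image: homeassistant/home-assistant:latest
    volumes:
      - /home/user/homeassistant/config:/config  # example path
    network_mode: host
    restart: unless-stopped
```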
Hi again Rob, I was wondering what else might be different between our setups, since it’s working for you but not for me (I keep running out of memory). I have nothing else set up or installed besides this component.
Are you running Hassbian, or a manually installed Home Assistant?
OK, thanks for your reply! Unfortunately I am trying on a fresh install without anything else added. Are you sure you haven’t done anything else? In your guide you mentioned terminating processes on the Pi; did you do that, and if so, which processes?
Something must be different on your Pi, since it is working for you.
No, I didn’t have anything to shut down on Hassbian; that comment was about running on Raspbian. Honestly, I am out of ideas why this isn’t running in your case, sorry.
One last question… When triggering it manually, what entity should I choose?
The suggested one is “image_processing.tensorflow_local_file”; is that correct, even though the entity in my config file is “camera.local_file”? (I have a local camera set up with that name, containing a picture of a car, and it works.)
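In other words, the manual call I’m trying from the developer tools looks something like this (entity names from my own config):

```yaml
# Developer tools -> Services
service: image_processing.scan
data:
  entity_id: image_processing.tensorflow_local_file
```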
OK! I tried setting the interval to 3 minutes and let it run last night. It actually completed three times without crashing (out of ~150 attempts). Each time, memory usage peaks just before the crash, so I guess there’s nothing to be done: it’s simply running out of RAM. I just can’t figure out why it works for you… Anyway, thanks for all your support!
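For reference, the interval change was just this (the model path follows the docs for the version I’m on; adjust for your install):

```yaml
image_processing:
  - platform: tensorflow
    scan_interval: 180  # 3 minutes
    source:
      - entity_id: camera.local_file
    model:
      graph: /home/homeassistant/.homeassistant/tensorflow/frozen_inference_graph.pb
```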
If someone else actually gets it to work on a Raspberry Pi, please let me know!
You can either dedicate high-performance hardware to HA or offload to task-specific hardware. For instance, OpenCV and TensorFlow can be run directly on the camera module: https://openmv.io/