I have found some posts that do training of models, but it goes pretty deep. It is not like the average 'chuck images in here' and away you go. You can always check this post, which did training of a model.
For recognising faces we have several dedicated components, e.g. Facebox, which runs locally. No one model will do everything well, so you will need to find the right combination of models. Also, object detection is noisy: you could lower your probability threshold for detection, or experiment with the Bayesian sensor.
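For example, a Bayesian binary sensor that fuses a noisy person-detection entity with a motion sensor might look roughly like this (all entity names and probabilities below are placeholders, not a tested config):

```yaml
binary_sensor:
  - platform: bayesian
    name: person_present
    prior: 0.2                  # how often someone is actually there
    probability_threshold: 0.9  # only flip "on" when quite confident
    observations:
      # Placeholder entities - substitute your own detection/motion sensors
      - platform: state
        entity_id: binary_sensor.facebox_person_detected
        to_state: "on"
        prob_given_true: 0.8
        prob_given_false: 0.2
      - platform: state
        entity_id: binary_sensor.hallway_motion
        to_state: "on"
        prob_given_true: 0.7
        prob_given_false: 0.3
```

The idea is that neither noisy source alone fires the alarm; only the combined evidence crossing the threshold does.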
Thanks Rob! That’s a fantastic component, almost exactly what I’m looking for without killing my Pi. Awesome job!
Any suggestions on how I could implement this?
I have one camera with MotionEye, so whenever there is motion it uploads an image to Google Drive and triggers an IFTTT rich notification that grabs the current still picture of my Generic Camera (using an HTTP post action from within MotionEye to an IFTTT webhook).
My issue is the number of false positives. Sometimes a light, my robot vacuum or even a ghost triggers my alarm and I get flooded with useless notifications.
I'm thinking of an automation that triggers on an HTTP request (which would come from the motion-detected HTTP post action of the MotionEye add-on), whose action calls Sighthound's Scan service. That way Sighthound is only triggered when motion is detected, saving some resources on my Pi and also saving API calls.
Then another automation, triggered by the Detect_persons event from Sighthound, whose action is an HTTP post to the IFTTT webhook, so I get a rich notification with a picture of the event that triggered it.
What I haven't figured out is how to get the picture in which Sighthound actually detected the person. There will be a small delay while Sighthound processes the image and fires the detect_persons event that triggers my second automation. Since that automation grabs the current still picture of my Generic Camera, which changes every second, by the time it fires the snapshot will be a few seconds stale, and I'll probably miss the person if they are just walking past the camera (~3-5 second window). So I'd like to retrieve the exact picture in which Sighthound detected the person. Any suggestions are welcome.
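A rough sketch of the two automations described above (webhook ids, entity names and filenames are placeholders; the `camera.snapshot` step is one possible way to keep the exact frame that gets scanned, rather than the live picture a few seconds later):

```yaml
automation:
  # 1. Motion webhook from MotionEye -> save the current frame, then scan it
  - alias: Scan on motion
    trigger:
      platform: webhook
      webhook_id: motioneye_motion      # placeholder id
    action:
      - service: camera.snapshot        # freeze the frame being analysed
        data:
          entity_id: camera.generic_camera
          filename: /config/www/last_motion.jpg
      - service: image_processing.scan
        data:
          entity_id: image_processing.sighthound_generic_camera

  # 2. Person detected -> forward to IFTTT for the rich notification
  - alias: Notify on person
    trigger:
      platform: event
      event_type: image_processing.detect_persons  # event name as described above
    action:
      - service: ifttt.trigger
        data:
          event: person_detected        # placeholder IFTTT applet event
          value1: /local/last_motion.jpg
```

The notification can then reference the saved snapshot instead of the current camera still, sidestepping the processing delay.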
Who you gonna call?
The HA-busters?
@gurbina93 I've not used Sighthound in production myself yet, so I can't advise. Perhaps others have suggestions? Also, we have a separate thread for Sighthound here.
Thanks
Hi @robmarkcole,
Thanks for the excellent guide. Tensorflow seems like an ideal solution for person detection in a room compared to Classificationbox.
However, I have moved my HA from Raspberry Pi to Synology NAS in Docker.
I still can't grasp how to apply this on my NAS.
Any help will be greatly appreciated. Thanks.
I’m not sure if it helps anyone, but I’ve got TensorFlow working with a bunch of components on Ubuntu running Docker. I’m not using a Pi. Here’s the guide. It’s not as complicated as the guide makes it look. I just documented everything as I was getting it working.
thanks. will continue our discussion over there.
Hi again Rob, I was wondering what else might be different between our setups, since it's working for you but not for me (I'm running out of memory). I also have nothing else set up or installed but this component.
Are you running Hassbian or manually installed Home Assistant?
Did you close any other processes on the pi?
Hi @FredF yes fresh Hassbian install on 3b+. I recommend you start with a fresh install and get tensorflow running before adding anything else.
Cheers
OK, thanks for your reply! Unfortunately I am trying on a fresh install without anything else added. Are you sure you haven't done anything else? In your guide you mentioned terminating processes on the Pi; did you do that, and if so, which processes?
Something must be different on your Pi, since it is working for you.
Have you set a long scan_interval, and tried triggering manually?
Yes, and it is when triggering the scan service the pi crashes.
Did you have to shut down any processes?
No I didn’t have anything to shutdown on Hassbian, that comment was for running on Raspbian. Honestly I am out of ideas why this isn’t running in your case, sorry
A last question… When triggering it manually, what entity should I choose?
The proposed one is “image_processing.tensorflow_local_file”, is that correct even though my entity in the config file is “camera.local_file”? (I have a local camera set up with that name, containing a picture of a car that is working)
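For reference, the config in question looks roughly like this (paths are placeholders); the tensorflow platform prefixes the source camera's entity name, which is why `camera.local_file` yields `image_processing.tensorflow_local_file`:

```yaml
camera:
  - platform: local_file
    file_path: /config/www/car.jpg   # placeholder test image

image_processing:
  - platform: tensorflow
    scan_interval: 10000             # very long, so scans are effectively manual-only
    source:
      - entity_id: camera.local_file
    model:
      graph: /config/tensorflow/frozen_inference_graph.pb  # placeholder path
```

With a long `scan_interval`, calling the `image_processing.scan` service on `image_processing.tensorflow_local_file` is what triggers a scan.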
That's the correct entity. I'm thinking your problem isn't the Tensorflow component, but perhaps the camera. Search your logs for any clues.
OK! I tried setting the interval to 3 minutes and let it run last night. It actually completed without crashing three times (out of ~150 attempts). Each time it crashes, memory usage peaks first, so I guess there's nothing to do: it's simply running out of RAM. I just can't figure out why it works for you. Anyway, thanks for all your support!
If someone else actually gets it to work on a Raspberry Pi, please let me know!
As I said before, you might want to investigate different models; some might be more lightweight. Also, what hardware are you running?
Yes, but I haven't found one that's more lightweight. I would like one trained only on people; that's really all I need.
Hardware is Raspberry Pi 3B+.
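Swapping models is just a matter of pointing `graph` at a different frozen inference graph, e.g. one of the lighter SSD MobileNet models from the TensorFlow detection model zoo, and filtering output to the person category (paths below are placeholders):

```yaml
image_processing:
  - platform: tensorflow
    source:
      - entity_id: camera.local_file
    model:
      # A lighter model from the TensorFlow detection model zoo (placeholder path)
      graph: /config/tensorflow/ssd_mobilenet_v2_coco/frozen_inference_graph.pb
      categories:
        - person   # only report person detections
```

Note that `categories` only filters what is reported; the full model still runs, so it reduces notification noise rather than RAM usage.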