TensorFlow step-by-step guide

I have it running on an Intel NUC (Skull Canyon). Average load of 1 (8 cores, so one core in use) and it runs every 10 seconds. Although nice, it is only good for checking whether certain objects are present or not. You cannot detect a specific face with it or run it “live”. It is easier to detect motion and trigger on that instead.

The higher-accuracy model keeps the detection from ‘flickering’, whereas the simple COCO one sometimes sees my car and sometimes not (so you have to work with rules like “only alert if the car has been gone for at least xx minutes”, otherwise you get false positives). The downside is that the higher version takes more CPU; I have a load of 2 (2 cores in use) and it does take some time.
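One way to handle that flicker in Home Assistant is to wrap the detection in a template binary sensor and only alert after the car has been absent for a sustained period, using the automation’s `for:` option. This is just a sketch: the entity name `image_processing.tensorflow_driveway` and the `summary` attribute are assumptions that depend on your own config.

```yaml
# Sketch only: entity and attribute names are assumptions.
binary_sensor:
  - platform: template
    sensors:
      car_present:
        friendly_name: "Car present"
        value_template: >
          {{ 'car' in state_attr('image_processing.tensorflow_driveway', 'summary') | default({}) }}

automation:
  - alias: "Car missing for 10 minutes"
    trigger:
      - platform: state
        entity_id: binary_sensor.car_present
        to: "off"
        for: "00:10:00"   # debounce: ignore detections that flicker off briefly
    action:
      - service: notify.notify
        data:
          message: "The car has been gone for at least 10 minutes."
```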

What I have not seen yet (if it is even possible) is detecting only cars or persons. I am not referring to the categories option, since as far as I can tell that only filters the JSON output; it still scans the picture for all the default classes (cars, bicycles, people, etc.). I presume that scanning only for persons or cars specifically would save CPU too, no?

Sighthound is pretty darn fast and accurate too, and adds facial detection plus mood/age/colour attributes. I wonder if NVRs are as snappy as Sighthound. I’ve seen some Dahuas that offer the same, I believe.

Interesting… I have a PIR sensor in my camera, so I only have to check when it senses motion. Maybe Sighthound is an easier fix, but it would be nice not to have to send pictures to a third party.

Yes, it seems like the categories option is just filtering the output. I couldn’t find any TensorFlow models specific to persons; if someone knows one, please share : )

I have found some posts on training models, but it goes pretty deep. It is not the average ‘chuck images in here and away you go’. You can always check this post, which covers training a model.

For recognising faces we have several dedicated components, e.g. Facebox, which runs locally. No single model will do everything well, so you will need to find the right combination of models. Also, object detection is noisy: you could lower your probability threshold for detection, or experiment with the Bayesian sensor.
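For reference, a Bayesian sensor that fuses a noisy detection with a motion sensor might look something like this (a sketch only; the entity names and probabilities are made up and would need tuning):

```yaml
binary_sensor:
  - platform: bayesian
    name: person_in_driveway
    prior: 0.1                # baseline chance someone is there
    probability_threshold: 0.9
    observations:
      - platform: state
        entity_id: image_processing.tensorflow_driveway   # hypothetical entity
        to_state: "1"         # at least one person detected
        prob_given_true: 0.8
        prob_given_false: 0.1
      - platform: state
        entity_id: binary_sensor.driveway_motion          # hypothetical PIR sensor
        to_state: "on"
        prob_given_true: 0.7
        prob_given_false: 0.2
```

Combining two noisy signals this way means a single false detection from either source is usually not enough to cross the threshold.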

Thanks Rob! That’s a fantastic component, almost exactly what I’m looking for without killing my Pi. Awesome job!

Any suggestions on how I could implement this?
I have one camera with MotionEye, so whenever there is motion it uploads an image to Google Drive and triggers an IFTTT rich notification that grabs the current still from my Generic Camera (using the HTTP POST action from within MotionEye to an IFTTT webhook).
My issue is the number of false positives. Sometimes a light, my robot vacuum, or even a ghost triggers my alarm and I get flooded with useless notifications.

I’m thinking of an automation that triggers on an HTTP request (which would come from the motion-detected HTTP POST action of the MotionEye add-on), whose action calls Sighthound’s scan service. That way Sighthound is only triggered when motion is detected, saving resources on my Pi as well as API calls.
Then a second automation, triggered by Sighthound’s detect_persons event, whose action is an HTTP POST to the IFTTT webhook, so I get a rich notification with a picture of the event that triggered it.
What I haven’t figured out is how to get the picture in which Sighthound actually detected the person. There will obviously be a small delay while Sighthound processes the image and fires the detect_persons event, and since the second automation grabs the current still from my Generic Camera, which changes every second, by the time it fires it will be a few seconds later. I will probably miss the snapshot of the person if they are just walking past the camera (a ~3–5 second window), so I’d like to retrieve the exact picture in which Sighthound detected the person. Any suggestions are welcome.
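One possible approach (an untested sketch; the webhook id, entity names, event name, and the `rest_command` are all assumptions based on the description above): take a snapshot the moment motion comes in, point Sighthound at that saved file via a local_file camera, and attach the same file to the notification, so the picture always matches the frame Sighthound analysed.

```yaml
# Assumes a local_file camera is defined as Sighthound's source, e.g.:
# camera:
#   - platform: local_file
#     name: last_motion
#     file_path: /config/www/last_motion.jpg

automation:
  - alias: "Motion: snapshot then Sighthound scan"
    trigger:
      - platform: webhook
        webhook_id: motioneye_motion           # called by MotionEye's HTTP POST action
    action:
      - service: camera.snapshot
        data:
          entity_id: camera.generic_camera     # hypothetical camera entity
          filename: /config/www/last_motion.jpg
      - service: image_processing.scan
        data:
          entity_id: image_processing.sighthound_last_motion  # hypothetical

  - alias: "Person detected: rich notification"
    trigger:
      - platform: event
        event_type: sighthound.person_detected  # event name is an assumption
    action:
      - service: rest_command.ifttt_webhook     # hypothetical REST command to IFTTT
        data:
          image_url: "https://YOUR_HA_URL/local/last_motion.jpg"
```

The key idea is that the snapshot is frozen on disk before the scan starts, so the delay no longer matters.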


Who you gonna call?


The HA-busters? :smiling_face_with_three_hearts:

@gurbina93 I’ve not used Sighthound in production myself yet, so I can’t advise. Perhaps others have suggestions? Also, we have a separate thread for Sighthound here.
Thanks


Hi @robmarkcole,

Thanks for the excellent guide. TensorFlow seems like an ideal solution for person detection in a room compared to Classificationbox.

However, I have moved my HA from Raspberry Pi to Synology NAS in Docker.

I still cannot grasp how to apply this on my NAS.

Any help will be greatly appreciated. Thanks.

I’m not sure if it helps anyone, but I’ve got TensorFlow working with a bunch of components on Ubuntu in Docker (not on a Pi). Here’s the guide. It’s not as complicated as it looks; I just documented everything as I was getting it working.


Thanks, will continue our discussion over there.

Hi again Rob, I was wondering what else might be different in our setups, since it’s working for you but not for me (I’m running out of memory). I also have nothing else set up or installed but this component.

Are you running Hassbian or manually installed Home Assistant?

Did you close any other processes on the pi?

Hi @FredF, yes, a fresh Hassbian install on a 3B+. I recommend you start with a fresh install and get TensorFlow running before adding anything else.
Cheers

OK, thanks for your reply! Unfortunately I am trying on a fresh install without anything else added. :neutral_face: Are you sure you haven’t done anything else? In your guide you mentioned terminating processes on the Pi; did you do that, and if so, which processes?

Something must be different on your Pi, since it is working for you :thinking:

Have you set a long scan_interval, and tried triggering manually?

Yes, and it is when triggering the scan service that the Pi crashes.

Did you have to shut down any processes?

No, I didn’t have anything to shut down on Hassbian; that comment was for running on Raspbian. Honestly, I am out of ideas why this isn’t running in your case, sorry.

A last question… :slight_smile: When triggering it manually, what entity should I choose?

The proposed one is “image_processing.tensorflow_local_file”; is that correct even though the entity in my config file is “camera.local_file”? (I have a local camera set up with that name, containing a picture of a car, and it is working.)
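For anyone following along, the image_processing entity name is derived from the platform plus the source camera, so a config along these lines (the file paths are examples and will differ on your install) would produce `image_processing.tensorflow_local_file` from `camera.local_file`:

```yaml
camera:
  - platform: local_file
    file_path: /home/homeassistant/.homeassistant/images/car.jpg  # example path

image_processing:
  - platform: tensorflow
    scan_interval: 10000        # effectively disable polling; call the scan service manually
    source:
      - entity_id: camera.local_file
    model:
      graph: /home/homeassistant/.homeassistant/tensorflow/frozen_inference_graph.pb
```

With a long `scan_interval` like this, the component only runs when you trigger `image_processing.scan` yourself, which makes memory problems easier to isolate.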

That’s the correct entity. I’m thinking your problem isn’t the TensorFlow component but perhaps the camera. Search your logs for any clues.

OK! I set the interval to 3 minutes and let it run last night. It actually completed three times without crashing (out of ~150 attempts). Each time, memory usage peaks before it crashes, so I guess there’s nothing to be done; it’s simply running out of RAM. Just can’t figure out why it works for you… Anyway, thanks for all your support!

If someone else actually gets it to work on a Raspberry Pi, please let me know…! :slight_smile: