Improving Blue Iris with Home Assistant, TensorFlow, and Pushover [SIMPLIFIED!]

I don’t believe you need to run that command in 0.83.1 or 0.83.2, at least in my experience.

Thank you. Great news. No need to hack it.

I am running dockerized 0.83.3. After using the gist (which creates the appropriate directories with data, protos, and utils and does the compile) and trying multiple models, Home Assistant simply crashes on boot whenever I’ve included the image processing platform. The relevant part of my configuration.yaml is below. I’ve tried turning on debug logging, but the only message I get in the logs before crashing is ‘Setting up image_processing.tensorflow’. I’ve tried the frozen_inference_graph.pb from ‘faster_rcnn_inception_v2_coco_2018_01_28’, ‘ssdlite_mobilenet_v2_coco_2018_05_09’, and ‘ssd_mobilenet_v2_coco_2018_03_29’, all with the same results (I just copy that .pb file right into the config/tensorflow/ directory, as indicated in my configuration.yaml). This is a self-built Intel-based machine with 16 GB of RAM. Any ideas?

image_processing:
 - platform: tensorflow
   scan_interval: 20000
   source:
     - entity_id: camera.webcam
   model:
     graph: /config/tensorflow/frozen_inference_graph.pb
     categories:
       - person

Honestly not sure, I haven’t had any problems. Does it start just fine with the image_processing component commented out? Here’s the md5sum of frozen_inference_graph.pb if that helps:

1f1902262c16c2d9acb9bc4f8a8c266f frozen_inference_graph.pb
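For anyone comparing against that hash, md5sum can check it directly. The file path below is the one from the configuration.yaml earlier in the thread; the `printf 'hello'` line is just a deterministic demo of the output format:

```shell
# md5sum -c reads "<hash>  <file>" pairs and reports OK/FAILED, e.g.:
#   echo "1f1902262c16c2d9acb9bc4f8a8c266f  /config/tensorflow/frozen_inference_graph.pb" | md5sum -c -
# Deterministic demo of md5sum's output format on a known string:
printf 'hello' | md5sum
```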

Yip - starts fine when that part above is commented out. I checked the md5sum on my frozen_inference_graph.pb and it matches yours.

Must be some sort of interdependency, I guess? Assuming you all have it running on 0.83.3.

Have you tried changing the logging level to debug? It shouldn’t be too difficult to spot where it’s crapping out. I’m on the latest release without any issues.

OK - I tried it with debug on… and nothing obvious to me…

Here is my debug log.

http://sprunge.us/40VZn8

Hrmmm… nothing really stands out either. It seems to load the component. Try commenting out the categories section and see if that does it.

This is a great write-up. I had to make some changes to the Node-RED config. If I piped the scan node right into the get-state node, it wouldn’t trigger sometimes. If I just used the state-change trigger instead, it works reliably. I am running on a Raspberry Pi, so maybe it takes a little longer for the state to propagate versus when it says it’s done scanning.


Hi TaperCrimp - I tried commenting out categories; no change.
I also tried starting with a vanilla Home Assistant Docker image, and it still didn’t work.
But there it said that TensorFlow was compiled for a CPU with AVX, and this CPU doesn’t have it (J4205).
I am guessing this is the issue…
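For anyone hitting the same crash: on Linux you can check whether your CPU advertises AVX before installing TensorFlow, since the official wheels from 1.6 onward are compiled with AVX and die with “Illegal instruction” on CPUs without it. A small sketch (the sample flags line below mimics a J4205-class CPU, which tops out at SSE4.2; on a real box, grep /proc/cpuinfo instead):

```shell
# On a real machine:  grep -q ' avx ' /proc/cpuinfo && echo "AVX present"
# Deterministic demo against a sample J4205-style flags line (no AVX):
flags="fpu vme de pse tsc msr pae mce cx8 sse sse2 ssse3 sse4_1 sse4_2"
case " $flags " in
  *" avx "*) echo "AVX present" ;;
  *)         echo "no AVX - use tensorflow<=1.5 or build from source" ;;
esac
```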

I believe those are CPU-specific, so that’s probably the case. I think it’s just a matter of finding the correct one. Unfortunately my knowledge of everything else basically ends at the original post :).

That was definitely the issue.

Installing TensorFlow 1.5 (the last release without required AVX support) does the trick and works with Home Assistant. You can also compile TensorFlow from scratch, but I couldn’t figure out how to do that.

Nice, glad to hear you figured it out. The whole process is still working great on the 0.84.6 release.

Hopefully everyone reading this has found it helpful. I haven’t made any changes since the initial post, aside from adding criteria to send me an alert if there’s anyone on camera between midnight and dawn. Zero false positives, to the point that I rely on this instead of the camera motion alerts.
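For anyone wanting to replicate that midnight-to-dawn filter, a rough sketch of what such an automation could look like is below. This is an assumption on my part, not the author’s actual config: the entity name, the numeric-state trigger on the detection count, and the notifier name are all hypothetical and need adjusting to your setup.

```yaml
# Hypothetical sketch - entity, trigger, and notifier names are assumptions.
automation:
  - alias: "Person on camera overnight"
    trigger:
      - platform: numeric_state
        entity_id: image_processing.tensorflow_webcam
        above: 0
    condition:
      - condition: time
        after: "00:00:00"
        before: "06:00:00"
    action:
      - service: notify.pushover
        data:
          message: "Person detected on camera overnight"
```

The time condition gates the action so detections outside the window are simply ignored.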

Thanks for this post. I think I’m going to go this route; I just need to get a 1U rack server and the PoE cameras :smiley: Have you fiddled with outdoor cameras being able to automatically zoom and pan to where the motion is?

Mine are fixed so there really isn’t a need, plus they record motion in HD. The Pushover alert basically acts as the notification. I can still get to the raw footage if I need it.

Wow, this looks great, looking forward to testing it out.

I’ve implemented this and it works great during the day. However, at night, when the images are black and white, TensorFlow pretty much misses everything no matter how obvious, with very few exceptions. Based on anecdotal checking, it’s probably ‘catching’ only 5% of legitimate video feeds that contain a person. During the day, it’s correctly catching closer to 90% of the feeds it receives.

Is there some way to make this more effective for night-time B&W videos? As it stands right now, it’s pretty much useless for them.

That I do not know. I haven’t had too many problems with my cameras at night as long as it’s a clear picture. One of the cameras isn’t in the best location, but the others seem to work as expected even with the black-and-white images. I tried screwing around with the TensorFlow settings a while back and basically left them as-is since I’m running HA in Docker.

I am seeing the same behavior here: during the day it is pretty spot-on, at night almost non-existent unless the person is standing still. I wonder if this has more to do with FPS settings or something else?