Improving Blue Iris with Home Assistant, TensorFlow, and Pushover [SIMPLIFIED!]

UPDATES
June 10, 2019: I’ve updated the guide to use a simpler flow with Amazon Rekognition over here.

Summary

I’ve used Blue Iris for quite a while and have multiple PoE cameras outside of the home. The functionality of BI is amazing, especially for the price. However, it doesn’t have a native method of detecting people, just motion. I don’t want my phone blowing up with motion alerts when it’s the kids outside or a tree blowing in the wind. The end result was combining Home Assistant, TensorFlow, and Pushover to only send alerts when a person is detected while everyone is away from the home.

Please keep in mind that I run all of the components in Docker on Ubuntu, not Hass.io, so the instructions will be different if you’re running the latter.

Components

There are a few components to this. I’ll link out to other guides as much as possible, since this would become a small novel if I covered everything from start to finish. Here are the important parts:

Blue Iris Integration
I’ve previously written about how to accomplish this. It’s fairly straightforward and shouldn’t be too difficult if you already have Blue Iris and Home Assistant up and running.
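
For reference, those motion alerts end up as binary sensors in Home Assistant. As a rough sketch, assuming the Blue Iris alerts arrive over MQTT (one common way to wire BI to Home Assistant; the topic and payloads below are purely illustrative), each camera gets an entry along these lines, which yields the binary_sensor.xxx_motion entities used later in the flow:

binary_sensor:
  - platform: mqtt
    name: BDC Motion                    # becomes binary_sensor.bdc_motion
    state_topic: "blueiris/bdc/motion"  # illustrative topic; match whatever BI publishes
    payload_on: "ON"
    payload_off: "OFF"
    device_class: motion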

Presence Detection
There’s a ton of coverage on this. I use Life360. You’re free to use whatever works best. Please note that I use a template that shows Home or Away so I can change a bunch of underlying components without having to modify any of my automations.
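
For reference, here’s a minimal sketch of such a template sensor, assuming a group.family populated by the Life360 device trackers (the sensor and group names are illustrative):

sensor:
  - platform: template
    sensors:
      family_presence:
        friendly_name: Family Presence
        # Reports Home when anyone is home, otherwise Away
        value_template: >-
          {% if is_state('group.family', 'home') %}Home{% else %}Away{% endif %}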

Node-RED
I use this for all of my automations, specifically the node-red-contrib-home-assistant-websocket plugin. The assumption at this point is that you’re familiar with Node-RED, have the module installed, and have Node-RED talking to Home Assistant.

Pushover
This should work with any of the notification platforms, but I’m a big fan of Pushover since Pushbullet no longer seems to be developed. IMO it’s outstanding and well worth the minimal cost. Please note: this uses a Pushover custom component found in this thread, since the attachment functionality unfortunately isn’t in the default component. You can just do the following from the root of your Home Assistant config directory:

mkdir -p custom_components/pushover_custom
cd custom_components/pushover_custom
wget -O notify.py https://raw.githubusercontent.com/brkr19/home-assistant/dev/homeassistant/components/notify/pushover.py

It also seems like the ability to overwrite default components was disabled in the 0.91.0 release. Using the component name above, you’ll want something like this under notify: in configuration.yaml:

- platform: pushover_custom
  name: pushover_alert
  api_key: !secret pushover_api
  user_key: !secret pushover_user

You’ll need to restart the Docker container after making the change. Hopefully we’ll see the attachment functionality baked into the default component eventually.

TensorFlow
This is done through the TensorFlow image processing component. It can be pretty damn taxing on the CPU, so it doesn’t make sense to run it full-time. Instead, I just call the service when motion is detected by Blue Iris. I’d never used it before, and its ability to identify objects, specifically people, is super impressive.
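
If you want to test it before wiring up Node-RED, the scan is just the standard image_processing.scan service. A hypothetical automation action that forces a scan on a single camera (the entity name follows the naming used in the setup below) looks like this:

action:
  - service: image_processing.scan
    entity_id: image_processing.tensorflow_bdc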

Setup

Here’s the fun part. I’ll cover parts of this in detail when necessary, but a lot of it will refer to other setup guides. The assumption is that you already have BI up and running, Node-RED talking to Home Assistant, and some type of presence detection.

Configure the Cameras in Home Assistant
This is based on my previous guide. Home Assistant pulls the camera feeds directly from Blue Iris instead of from the PoE cameras. I have a separate YAML file for each of the 4 PoE cameras. I’d highly recommend using !secret here, but I’ll show a mock-up of what it looks like. In the configuration.yaml file I have this:

camera: !include_dir_merge_list cameras

In the camera directory I have a yaml for each camera, with most looking like this, e.g. bdc.yaml:

- platform: mjpeg
  mjpeg_url: http://bi.home.local:23456/mjpg/BDC
  name: BDC
  username: insert_username
  password: insert_password
  authentication: basic

Again, one YAML file per camera. BI makes it easy to access the streams with the camera short names. Port 23456 is just the one I selected for the built-in BI web server.

Configure TensorFlow
Just follow the official guide to get it running. I used the faster_rcnn_inception_v2_coco model. The Home Assistant config resides in /opt/home-assistant and is the /config directory from the container’s perspective. My directory looks like this:

root@docker:~# ls -la /opt/home-assistant/tensorflow/
total 113516
drwxr-xr-x  4 root root     4096 Nov 22 22:52 .
drwxr-xr-x 23 root root     4096 Nov 22 23:12 ..
-rw-r--r--  1 root root      460 Nov 22 01:29 camera-tf.yaml
-rw-r--r--  1 root root       77 Nov 22 01:20 checkpoint
-rw-r--r--  1 root root 57153785 Nov 22 01:20 frozen_inference_graph.pb
-rw-r--r--  1 root root 53348500 Nov 22 01:20 model.ckpt.data-00000-of-00001
-rw-r--r--  1 root root    15927 Nov 22 01:20 model.ckpt.index
-rw-r--r--  1 root root  5685731 Nov 22 01:20 model.ckpt.meta
drwxr-xr-x  6 root root     4096 Nov 22 01:20 object_detection
-rw-r--r--  1 root root     3244 Nov 22 01:20 pipeline.config
drwxr-xr-x  3 root root     4096 Nov 22 01:20 saved_model

I have this in my configuration.yaml file:

image_processing: !include_dir_merge_list tensorflow

In the /opt/home-assistant/tensorflow directory, I have a file named camera-tf.yaml that contains the following:

- platform: tensorflow
  source:
    - entity_id: camera.fdc
    - entity_id: camera.fyc
    - entity_id: camera.byc
    - entity_id: camera.bdc
  file_out:
    - "/config/www/tensorflow/{{ camera_entity.split('.')[1] }}_latest.jpg"
    - "/config/www/tensorflow/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
  scan_interval: 604800
  model:
    graph: /config/tensorflow/frozen_inference_graph.pb
    categories:
      - person

The scan_interval is set to a week. You really don’t need it to automatically scan anything, but it defaults to every 10 seconds if you don’t specify a value. The categories section is important: if you don’t specify anything, it’ll scan for every known object. I only care about people and don’t need alerts for plants, chairs, tables, etc. Most importantly, it writes out a file ending in _latest.jpg to designate the latest results from the scan. This will be used later.

At this point you can restart the Home Assistant container so it loads all of the components.

Configure Node-RED
I’ll use screenshots here since dumping the flow is going to look like garbage :).

Building the Flow
Now the fun part: building the flow to tie everything together. You end up with this:

Let’s start at the top with Camera Motion. These binary_sensor components were created from the Blue Iris guide I’d mentioned above. The sensor only activates when there’s motion, which flips it to on. It looks like this:

[screenshot of the Camera Motion state node]

This is the regex from above: binary_sensor.\w{3}_motion

My cameras all have a 3-letter designation that the \w{3} matches, i.e. it’ll work with binary_sensor.abc_motion, binary_sensor.xyz_motion, etc. No need for a separate state node for each camera.

The next steps are basically logic checks: I only want to get alerted if everyone is away or it’s late at night. You can bake in whatever you’d like here. For the presence check, I’m using the following:

I’ve got a template sensor that only spits out Home or Away. If you don’t want to go that route, you could just create a node that checks for not Home. It’s important that those boxes at the end are unchecked. The time check just says to pass the value if it’s after midnight and before dawn.

The next Change node is labeled Convert and took a lot of trial and error. It takes the 3-letter identifier and injects the camera and image_processing entities into the payload so we can use them later. This is how I got around needing a separate flow for each camera.

Please note that you’ll need to set the to field type to JSONata for each rule. I’ll paste these out in order:

SET:
data.base_id
$match(topic, /binary_sensor.(\w{3})_motion/).groups[0].$string()

SET:
data.camera
"camera." & $.data.base_id

SET:
data.image_processing
"image_processing.tensorflow_" & $.data.base_id

DELETE:
payload

SET:
payload.entity_id
"image_processing.tensorflow_" & $.data.base_id

You now have one Change node that inserts all of the values you’ll need for processing. Next, onto the Call Service node for triggering the TensorFlow scan:

[screenshot of the Call Service node that triggers the TensorFlow scan]

This is the Entity ID field in the service call, which uses the value set earlier: {{data.image_processing}}

The next State node, labeled Person Check, checks whether any people were detected in the scan. If no people are detected, the flow terminates.

[screenshot of the Person Check state node]

Almost done :). We now use a Template node labeled Set Alert to put in the values we’ll pass to the notification. This is slightly specific to Pushover, but can easily be modified for any notification system that supports attachments.

This is the text from the template node:

{
	"data": {
		"message": "{{data.new_state.attributes.friendly_name}}",
		"title": "Person Detected",
		"data": {
			"file": {
				"path": "/config/www/tensorflow/{{data.base_id}}_latest.jpg"
			},
			"device": "MiniNater",
			"priority": "1"
		}
	}
}

Please note the path can be changed to wherever you keep the image files. It’s from the perspective of the Docker container, not where the files actually reside on the host file system.

The last step is to pass it to a Call Service node labeled Pushover. Everything is mostly set from the template, so there are only a few values you need to plug into the node.
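
For clarity, the node ends up making a notify call roughly equivalent to the following (the service name matches the custom component configured earlier; the message and path are example values that the template fills in per camera):

service: notify.pushover_alert
data:
  message: BDC                # friendly name pulled in by the template
  title: Person Detected
  data:
    file:
      path: /config/www/tensorflow/bdc_latest.jpg
    device: MiniNater
    priority: "1"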

The End Result

I now get super-accurate people alerts sent to my iPhone and Apple Watch if someone is near my house when nobody is around. It even shows me the image in the alert, so I don’t have to open another app to check the cameras directly. I’ve had zero false positives since I got this configured. Much, much better than the motion alerts that I eventually stopped looking at. Here’s an example of the end result:

Please note the message and title will be slightly different since I redid the guide and didn’t feel like taking an updated screenshot. FYI, the red box is from Blue Iris flagging motion. The yellow box is from TensorFlow. You can’t read it, but it says person 99.8% at the top. Did I mention that it was accurate? :)

Hope this helps someone!


@TaperCrimp

Thanks for this guide. However, I am very new to all of this.

First of all, does this need an Internet connection to work? Will it send snapshots from my IP cameras to the “cloud” for processing?

Do you mean I need to go to my Synology Docker GUI, download the latest TensorFlow image from the registry, and then launch it as per Docker | TensorFlow? The instructions on that page are quite confusing to me too. Since I only use Docker via the GUI, I have no idea how to get started.

Which example should I follow? CPU-only images or GPU-enabled images? Which one is recommended? My Synology NAS is self-built using a GIGABYTE GA-H97N-WIFI motherboard with an integrated graphics processor.

I haven’t used it in a NAS before, but does the Synology Docker GUI allow you to enter the container? From the command line it’d be something like docker exec -it container_name bash. Personally I use Portainer to manage mine. Once you’re in the container you can run those steps.

Close but no cigar :). You’d run the TensorFlow setup in the Home Assistant container. It doesn’t need a separate one. Does that UI give you the ability to run bash within the Home Assistant container?

I go to Details > Terminal > Create and see this.

Is that what you are referring to?

What’s next? Do I execute pip3 install tensorflow==1.11.0?

I think I finally got it thanks to your guide and @robmarkcole’s guide.

From the above terminal, I do this…

  1. pip3 install tensorflow==1.11.0
  2. Downloaded the zip file from https://github.com/robmarkcole/tensorflow_files_for_home_assistant_component
  3. Extract the content and put the tensorflow folder into the HA config directory.
  4. cd /config/tensorflow
  5. curl -OL http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
  6. tar -xzvf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
  7. Finally, added this to the HA configuration:

image_processing:
  - platform: tensorflow
    scan_interval: 20000
    source:
      - entity_id: camera.cam_dining
      - entity_id: camera.cam_couch
    model:
      graph: /config/tensorflow/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
      categories:
        - person

Now it can detect the number of people in the camera feed. Much more accurate than Classificationbox.

Gents, if you install a package in the container, won’t it disappear when it restarts?

I have installed tensorflow inside the HA container and I have restarted the HA container multiple times and it is fine.

Thank you very much. It’ll be deleted when you upgrade.

Using the TensorFlow container will be the permanent fix for Docker users.

You could be right. I will confirm it later.

Thank you. Do share the result if possible.

I’ve got the HA docker container installed like this:

docker run -d --name="home-assistant" -v /opt/home-assistant:/config \
--restart always \
-v /etc/localtime:/etc/localtime:ro --net=host \
homeassistant/home-assistant

Most of what I need is in there, although I’d have to reinstall the python component.

EDIT: I took a look at the container and it has the following files:

root@docker:/usr/src/app# pip3 uninstall tensorflow
Uninstalling tensorflow-1.11.0:
  Would remove:
    /usr/local/bin/freeze_graph
    /usr/local/bin/saved_model_cli
    /usr/local/bin/tensorboard
    /usr/local/bin/tflite_convert
    /usr/local/bin/toco
    /usr/local/bin/toco_from_protos
    /usr/local/lib/python3.6/site-packages/tensorflow-1.11.0.dist-info/*
    /usr/local/lib/python3.6/site-packages/tensorflow/*
Proceed (y/n)? n

I’m guessing I could change export PYTHONUSERBASE=/config/deps to get the Python components to install in the persistent volume. However, that wouldn’t include the other components unless they’re in the image by default. I’d love to run it in a standalone container, but getting HA to query the TensorFlow API is likely beyond my abilities.

Thank you very much for the info. I have nearly the same setup.
The package will definitely need reinstalling when you upgrade HA, as the image doesn’t contain it.

That’s why we need to find out how to use their (TensorFlow) container. ;)

I guess an easy “patch” would be to add the command:

docker exec -it homeassistant /bin/bash pip3 install tensorflow==1.11.0

to whatever upgrade script you use

That’s probably a much better solution. I don’t feel like creating a docker-compose script and just went with this instead:

docker run -d --name="home-assistant" -v /opt/home-assistant:/config \
--restart always \
-v /etc/localtime:/etc/localtime:ro --net=host \
homeassistant/home-assistant
sleep 30
docker exec -it home-assistant /bin/bash pip3 install tensorflow==1.11.0
docker restart home-assistant

This would only be when I manually recreate it. I use watchtower to keep them updated and might have to work something into that.

I created a simple script to update Home Assistant. I also use watchtower but didn’t want to automate it because of breaking changes. I manually upgrade after a release.

I just updated HA to the latest version by following the official guide for Synology Docker, i.e.:

  1. Download the latest image.
  2. Stop the container.
  3. Clear the container.
  4. Start the container.

And the tensorflow component still works.


Yeah, same here. I’m guessing the instructions are already outdated and it’s included by default.

It still works without any modifications on the 0.83.2 release. Looks like we’re good.


I have Hass.io running on Debian.

When I run docker exec -it homeassistant /bin/bash pip3 install tensorflow==1.11.0

I get the following errors:
/usr/local/bin/pip3: line 4: import: command not found
/usr/local/bin/pip3: line 5: import: command not found
/usr/local/bin/pip3: line 7: from: command not found
/usr/local/bin/pip3: line 10: syntax error near unexpected token `('
/usr/local/bin/pip3: line 10: `sys.argv[0] = re.sub(r'(-script.pyw?|.exe)?$', '', sys.argv[0])'

Any suggestions on how to fix these errors?