Improving Blue Iris with Home Assistant, TensorFlow, and iOS

Background

This is an update to my previous post. The major reason for going down the previous path was that I didn’t want to expose Home Assistant to the Internet. However, with the release of remote access via Nabu Casa, that has become much less of a concern. Please note that a chunk of this will be a copy and paste from my other post.

Summary

I’ve used Blue Iris for quite a while and have multiple PoE cameras outside of the home. The functionality of BI is amazing, especially for the price. However, it doesn’t have a native method of detecting people, just motion, although the latest releases seem to be adding a framework for person detection. I don’t want my phone blowing up with motion alerts when it’s the kids outside or a tree blowing in the wind. The end result was combining Home Assistant, TensorFlow, and the Home Assistant iOS application to only send alerts when a person is detected while everyone is away from the home.

Please keep in mind that I run all of the components in Docker on Ubuntu, not Hass.io, so the instructions will be different if you’re running the latter.

Components

There are a lot of components to this. I’ll cite as much as possible, since this would become a small novel if I covered everything from start to finish. Here are the important parts:

Blue Iris Integration
I’ve previously written how to accomplish this. It’s fairly straightforward and shouldn’t be too difficult if you already have Blue Iris and Home Assistant up and running.

Presence Detection
There’s a ton of coverage on this. I use the Life360 custom component, but anything will work. I haven’t tried the HA iOS location tracking yet. You’re free to use whatever works best. Please note that I use a template that shows Home or Away so I can change a bunch of underlying components without having to modify any of my automations.
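
For reference, the Home/Away template is nothing fancy; a minimal sketch of one per person, assuming a Life360 device_tracker (the entity name is a placeholder), looks something like this:

sensor:
  - platform: template
    sensors:
      presence_me:
        friendly_name: Me
        value_template: >-
          {{ 'Home' if is_state('device_tracker.life360_me', 'home') else 'Away' }}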

Node-RED
I use this for all of my automations, specifically the node-red-contrib-home-assistant-websocket plugin. You need to install that module in the Node-RED container. The assumption at this point is that you’re familiar with Node-RED and have Node-RED talking to Home Assistant.
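
If your Node-RED also runs in Docker, you can add the module through the editor’s palette manager, or install it straight into the data volume; a rough sketch (the container name nodered is a placeholder) would be:

docker exec -it nodered sh -c "cd /data && npm install node-red-contrib-home-assistant-websocket"
docker restart nodered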

TensorFlow
This is done through the TensorFlow image processing component. It can be pretty damn taxing on the CPU, and it doesn’t make sense to run it full-time. Instead, I just call the service when motion is detected by Blue Iris. I’d never used it before, and its ability to identify objects, specifically people, is super impressive.

Home Assistant iOS
As mentioned above, I have this working with the remote access cloud, so it’s pointed to the remote URL in the iOS app. Instructions for the iOS app can be found here. It’s dead-simple to set up and took me about 3 minutes to configure.
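
If it isn’t already enabled on the Home Assistant side, the iOS component is a single line in configuration.yaml; the notify service appears automatically once the app registers:

# configuration.yaml
ios: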

Like I said, there are a few moving pieces to this, but the end result is everything I’d hoped to accomplish.

Setup

Here’s the fun part. I’ll cover parts of this in detail when necessary, but a lot of it will refer to other setup guides. The assumption is that you already have BI up and running, Node-RED talking to Home Assistant, and some type of presence detection.

Configure the Cameras in Home Assistant
This is based on my previous guide. Home Assistant pulls the camera feeds directly from Blue Iris instead of the PoE cameras. I have a separate yaml file for each of the 4 PoE cameras. I’d highly recommend using !secret here, but I’ll show you a mock-up of what it looks like first. In the configuration.yaml file I have this:

camera: !include_dir_merge_list cameras

In the camera directory I have a yaml for each camera, with most looking like this, e.g. bdc.yaml:

- platform: mjpeg
  mjpeg_url: http://bi.home.local:23456/mjpg/BDC
  name: BDC
  username: insert_username
  password: insert_password
  authentication: basic

Again, one yaml per camera. BI makes it easy to access them with the short names. The 23456 port was one I selected for the built-in BI web server.
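
If you do go the !secret route, the credentials just move into secrets.yaml and the camera file references them; a sketch (the key names are arbitrary) would be:

# secrets.yaml
bi_username: insert_username
bi_password: insert_password

# cameras/bdc.yaml
- platform: mjpeg
  mjpeg_url: http://bi.home.local:23456/mjpg/BDC
  name: BDC
  username: !secret bi_username
  password: !secret bi_password
  authentication: basic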

Configure TensorFlow
Just follow the official guide to get it running. I used the faster_rcnn_inception_v2_coco model. The Home Assistant config resides in /opt/home-assistant and is the /config directory from the container’s perspective. My directory looked like this:

root@docker:~# ls -la /opt/home-assistant/tensorflow/
total 113516
drwxr-xr-x  4 root root     4096 Nov 22 22:52 .
drwxr-xr-x 23 root root     4096 Nov 22 23:12 ..
-rw-r--r--  1 root root      460 Nov 22 01:29 camera-tf.yaml
-rw-r--r--  1 root root       77 Nov 22 01:20 checkpoint
-rw-r--r--  1 root root 57153785 Nov 22 01:20 frozen_inference_graph.pb
-rw-r--r--  1 root root 53348500 Nov 22 01:20 model.ckpt.data-00000-of-00001
-rw-r--r--  1 root root    15927 Nov 22 01:20 model.ckpt.index
-rw-r--r--  1 root root  5685731 Nov 22 01:20 model.ckpt.meta
drwxr-xr-x  6 root root     4096 Nov 22 01:20 object_detection
-rw-r--r--  1 root root     3244 Nov 22 01:20 pipeline.config
drwxr-xr-x  3 root root     4096 Nov 22 01:20 saved_model
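
If you’re building that directory from scratch, the model files come from the TensorFlow detection model zoo; something along these lines should produce the same layout (double-check the archive URL against the official guide, and note that the object_detection folder comes separately from the tensorflow/models repo as described there):

cd /opt/home-assistant/tensorflow
wget http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
tar -xzf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz --strip-components=1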

I have this in my configuration.yaml file:

image_processing: !include_dir_merge_list tensorflow

In the /opt/home-assistant/tensorflow directory, I have a file named camera-tf.yaml that contains the following:

- platform: tensorflow
  source:
    - entity_id: camera.fdc
    - entity_id: camera.fyc
    - entity_id: camera.byc
    - entity_id: camera.bdc
  file_out:
    - "/config/www/tensorflow/{{ camera_entity.split('.')[1] }}_latest.jpg"
    - "/config/www/tensorflow/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
  scan_interval: 604800
  model:
    graph: /config/tensorflow/frozen_inference_graph.pb
    categories:
      - person

The scan_interval is set to a week. You really don’t need it to automatically scan anything, but it defaults to every 10 seconds if you don’t specify a value. The categories option is important: if you don’t specify anything, it’ll scan for every known object. I only care about people and don’t need alerts for plants, chairs, tables, etc. Please note the file_out option isn’t really necessary for this, but I like having the snapshots to show what was detected.

At this point you can restart the Home Assistant container so it loads all of the components.

Building the Flow in Node-RED
Now the fun part: building the flow to tie everything together. I’m sure there’s a fancier way to reduce the number of nodes here, but this was the easiest for me and it works just fine. At a high level, you end up with a flow that can handle a dynamic set of cameras, assuming the naming scheme that I’ll outline below.

It’s not as complicated as it looks. Please note the device naming scheme that I use for every corresponding component per camera:

binary_sensor.bdc_motion
camera.bdc
image_processing.tensorflow_bdc

All of the binary_sensor, camera, and image_processing components have a 3-letter identifier in the name that represents that camera, e.g. bdc. That’ll make more sense in a minute. The beauty of this is that you can add additional cameras that will “just work” if you stick with the naming scheme.

Let’s start with the initial “Camera Motion” server-state-changed node:

[screenshot: “Camera Motion” server-state-changed node]

Dead-simple. It matches any camera with the 3-character identifier in the regex. It’ll work for any future cameras as long as everything matches the naming scheme I’d mentioned above. The flow never starts if the value is off, i.e. it only starts for an on value.
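
In case the screenshot is hard to read, the settings on that node boil down to roughly this (the exact regex is my reconstruction, so adjust it if your naming differs):

Entity ID (filter type regex):  binary_sensor\.\w{3}_motion
If state:                       halt when "off", i.e. only an "on" value continues the flow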

Next are two simple tests for deciding when to proceed with analysis. Basically, I don’t care about the alert if both my wife and I are home. However, I do want to get alerted if it triggers between 12AM and dusk, even if we are both home. I use a presence template that spits out “Home” or “Away”, so you’ll want to update these to reflect whatever values you use. The nodes are simple enough:

[screenshot: presence check node]

The parallel one is just a time check.

Now we get into the meat of it. Once the camera node has triggered on motion and passed the presence and time checks, it goes into a change node. This simply uses a regex capture group of (\w{3}) to rename the entity_id in the msg from the binary_sensor to the corresponding image_processing component. The value of msg.data.entity_id becomes the image_processing component, e.g. binary_sensor.bdc_motion becomes image_processing.tensorflow_bdc. Hint: (\w{3}) just says “match and capture 3 word characters”, which happens to match my naming scheme.

[screenshot: change node renaming the entity_id]
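
The rule in that change node is a single “Set msg.data.entity_id” using a regex search and replace; it boils down to something like this (the capture group carries the three-letter camera identifier across):

Search for:    binary_sensor\.(\w{3})_motion
Replace with:  image_processing.tensorflow_$1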

The next node calls the image_processing.scan service against the previously-defined entity. The {{data.entity_id}} is just a template that says to use that msg value for the entity_id. Not much else to it.

The next one is a template node. This was new territory for me and where I kept getting stuck. It basically generates JSON based on the value you feed it. In this case, it simply takes the previous entity_id and creates output the next node can understand, i.e. “check the state of this entity”. This is the JSON from the screenshot:

{ "entity_id": "{{data.entity_id}}" }

…and we continue. It checks the state of the image_processing entity to see if any people were detected. It halts if the value is 0. If people are detected, the flow will continue. “Entity ID” is intentionally left blank since it’s grabbed from the previous template.

[screenshot: current-state node checking the person count]

Next we’re back to a change node. At this point we’re going to swap the image_processing entity to the equivalent camera entity. Remember that I’m using a consistent name for all 3 entities? This is why. It’s almost identical to the change node listed above.

[screenshot: change node swapping to the camera entity]
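
The search and replace here is just the mirror image of the earlier rule, swapping the image_processing prefix for the camera prefix:

Search for:    image_processing\.tensorflow_(\w{3})
Replace with:  camera.$1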

Next we’re going to use another template node to generate data that is usable for a service call. This is the JSON from the screenshot:

{
   "data": {
      "message": "{{data.attributes.friendly_name}}",
      "title": "Person Detected",
      "data": {
         "attachment": {
            "content-type": "jpeg"
         },
         "push": {
            "category": "camera"
         },
         "entity_id": "{{data.entity_id}}"
      }
   }
}

It provides the service call with the entity_id of the camera and, as the message, the friendly name of the TensorFlow entity that flagged it.

…and finally, the service call you’ve all been waiting for. The only things you need to specify are the domain and the service. The msg has the rest of the data that it needs.
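
For completeness, the two fields in that call-service node are simply the following; the service name will be whatever your iOS device registered as, so treat this one as a placeholder:

Domain:   notify
Service:  ios_your_iphone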

And that’s it! You’ll now get alerts on your phone via the Home Assistant iOS app. It shows a thumbnail of the camera. Best of all, you can hard-press the notification and it’ll pop up the camera stream in real-time. Much easier than flipping back to Blue Iris for the alerts.

Hope this helps!

@TaperCrimp thank you for this guide. I have a similar setup to yours: Ubuntu with Home Assistant running in a Docker container. Quick question: did you install TensorFlow on your Ubuntu host, is it running in a separate Docker container, or did you install it within the Home Assistant container?