Tensorflow, Arlo, Docker - Automate a secure(ish) home

-work in progress-

Intention

We recently moved into a house and wanted to secure our new home as conveniently as possible. Personally, I do not like (paid) cloud systems for personal applications. Nonetheless, these solutions come with some nice benefits, so we started by getting some Arlo cameras and installed them around the house. So far it has worked pretty well, at least for the first year, while I did not have to pay for all the “smartness” like people detection in the cloud.

This continued to bug me quite a bit, so I searched the web for possible solutions that would also satisfy my “maker self” to some extent…

After some searching I found the great work of @sherrell. From my job I am somewhat familiar with Tensorflow and ML in general, so I decided to give my own security (person, car, things detection) system a try.

System Setup
This is actually pretty straightforward and simple. In general, it should also work with any comparable setup.

  • Cameras: Arlo Ultra (2x) and Arlo Pro 2 (2x)
  • NAS-System: DS216+II (Intel Celeron based, I upgraded to 8GB RAM(!))
  • My wife and I both own iPhones and Apple Watches

First installation

Home Assistant

So, I got myself the latest image for Home Assistant and started the configuration.

sudo docker pull homeassistant/home-assistant
sudo docker run -d --network=host --name home-assistant -v /path/to/local/config:/config  homeassistant/home-assistant
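
For those who prefer docker-compose, the same container could be declared roughly like this (a sketch, not my actual setup; the config path is the same placeholder as above and `restart` is an optional extra):

```yaml
# docker-compose.yml - equivalent of the docker run line above
version: "3"
services:
  home-assistant:
    image: homeassistant/home-assistant
    container_name: home-assistant
    network_mode: host
    volumes:
      - /path/to/local/config:/config
    restart: unless-stopped
```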

Arlo Cameras

To get my Arlo cameras running, I followed the instructions on Hass-Arlo - Github for @sherrell’s custom component. That’s when part of the fun started… As described here, the current docker ffmpeg installation does not support rtsps streams from cameras “out of the box”. So, some digging later, I was able to compile and install ffmpeg for my docker container (also described in the link above).
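
One way to check up front whether a given ffmpeg build can handle rtsps at all is to look for a TLS backend in its configure flags, which `ffmpeg -version` prints. A minimal sketch; note that the `version_output` string below is a hard-coded sample, in practice you would capture the real output with `version_output="$(sudo docker exec home-assistant ffmpeg -version)"`:

```shell
# rtsps requires ffmpeg built with TLS support, visible in the build
# configuration as --enable-gnutls or --enable-openssl.
version_output="configuration: --enable-gnutls --enable-libx264"
case "$version_output" in
  *--enable-gnutls*|*--enable-openssl*) tls_support=yes ;;
  *)                                    tls_support=no  ;;
esac
echo "TLS support in ffmpeg: $tls_support"
```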

(I urge everyone with the same setup to keep a copy of the created .deb file in a subfolder within /config, in order to reinstall the package after an HA upgrade.)

Tensorflow

Next up: Tensorflow. That should be easy enough, as it is described here. So, I modified my configuration.yaml and rebooted the container. Sadly, the container crashed. At first I thought it was because my NAS only provided 1GB of RAM, so I upgraded to 8GB. But the container still crashed. After some digging it occurred to me that HA’s standard Tensorflow build is compiled with hardware acceleration instructions (e.g. AVX) that my processor (an Intel Celeron) does not provide.

A quick

python -c "import tensorflow as tf; print(tf.__version__)"

inside the container showed me that I was right, as it returned Illegal instruction instead of the version number. So I attempted to compile the Tensorflow package right within the HA container, as I did before with ffmpeg, as described here.
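
You can also check for AVX support before even trying the stock build. A minimal sketch (Linux only; reads the CPU flags from /proc/cpuinfo):

```shell
# The stock TensorFlow wheels are built for CPUs with the AVX instruction
# set; a Celeron does not advertise it in its cpuinfo flags.
if grep -qw avx /proc/cpuinfo 2>/dev/null; then
  avx_support=yes
  echo "CPU supports AVX: stock TensorFlow builds should run"
else
  avx_support=no
  echo "No AVX: TensorFlow must be compiled for this CPU"
fi
```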

To save you the effort: do not.

Apparently, the installed gcc has problems with an ASM translation table within Tensorflow’s Eigen implementation.

I learned something that might be helpful to some, though: it is possible to use screen within the HA docker container. As no proper terminal is typically exposed (at least on my docker installation), this requires some hacking… To do so, from an ssh shell into the host system:

user@NAS:~$ sudo docker exec -ti home-assistant /bin/bash

root@NAS:/usr/src/app# apt-get update
root@NAS:/usr/src/app# apt-get install screen
root@NAS:/usr/src/app# script /dev/null
Script started, file is /dev/null
# screen -R -s /bin/bash

Now you can open and close the screen as you like, run compiles,… and do not have to worry if it takes some time longer and the ssh connection might break…

But back to the actual Tensorflow issue…
HA uses a relatively old glibc version, while at the same time Python 3.7 is used.

Thus, following the official Tensorflow compilation instructions, I decided to download a premade Tensorflow compilation container and go ahead with the compilation in there. It is important to use the specified docker image 1.13.1-py3, as otherwise the glibc is too new to use the resulting Tensorflow within HA’s docker container.
I then pulled the Tensorflow source code, decided to go with r1.13, installed the matching Bazel version and Python 3.7, and then started the compile:

user@NAS:~$ sudo docker pull tensorflow/tensorflow:1.13.1-py3
user@NAS:~$ sudo docker run -it -w /tensorflow -v $PWD:/mnt -e HOST_PERMS="$(id -u):$(id -g)" --name tensorflow tensorflow/tensorflow:1.13.1-py3 bash

root@585d31dec9fc:/tensorflow# apt-get update
root@585d31dec9fc:/tensorflow# apt-get install git
root@585d31dec9fc:/tensorflow# git clone https://github.com/tensorflow/tensorflow.git

root@585d31dec9fc:/tensorflow# apt-get install software-properties-common
root@585d31dec9fc:/tensorflow# add-apt-repository ppa:deadsnakes/ppa
root@585d31dec9fc:/tensorflow# apt update
root@585d31dec9fc:/tensorflow# apt-get install python3.7-dev python3.7 wget

root@585d31dec9fc:/tensorflow# rm /usr/bin/python3
root@585d31dec9fc:/tensorflow# ln -s python3.7 /usr/bin/python3
root@585d31dec9fc:/tensorflow# cd /usr/bin
root@585d31dec9fc:/usr/bin# ln -s python3.7 python
root@585d31dec9fc:/usr/bin# cd /tensorflow
root@585d31dec9fc:/tensorflow# python3.7 -m pip install pip
root@585d31dec9fc:/tensorflow# python3.7 -m pip install --upgrade pip
root@585d31dec9fc:/tensorflow# python3.7 -m pip install pip six numpy wheel setuptools mock 'future>=0.17.1'
root@585d31dec9fc:/tensorflow# python3.7 -m pip install keras_applications==1.0.6 --no-deps
root@585d31dec9fc:/tensorflow# python3.7 -m pip install keras_preprocessing==1.0.5 --no-deps

root@585d31dec9fc:/tensorflow# apt-get install pkg-config zip g++ zlib1g-dev unzip

root@585d31dec9fc:/tensorflow# wget https://github.com/bazelbuild/bazel/releases/download/0.21.0/bazel-0.21.0-installer-linux-x86_64.sh
root@585d31dec9fc:/tensorflow# chmod +x bazel-0.21.0-installer-linux-x86_64.sh
root@585d31dec9fc:/tensorflow# ./bazel-0.21.0-installer-linux-x86_64.sh

root@585d31dec9fc:/tensorflow# cd tensorflow
root@585d31dec9fc:/tensorflow/tensorflow# git checkout r1.13
root@585d31dec9fc:/tensorflow/tensorflow# ./configure #just go with the standard settings
root@585d31dec9fc:/tensorflow/tensorflow# bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package

## The compilation takes a lot of time (e.g. 24h on my machine). 
## One can CTRL-P CTRL-Q out of the container and
##    user@NAS:~$ sudo docker attach tensorflow
## back in

root@585d31dec9fc:/tensorflow/tensorflow# ./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt

root@585d31dec9fc:/tensorflow/tensorflow# exit

Now we have a working tensorflow-1.13.2-cp37-cp37m-linux_x86_64.whl file in the local directory of the host system, which we can copy into the config directory of the HA container and then install and test (from a shell inside the HA container):

root@NAS:/usr/src/app# pip install /PATH/TO/FILE/tensorflow-1.13.2-cp37-cp37m-linux_x86_64.whl
root@NAS:/usr/src/app# python -c "import tensorflow as tf; print(tf.__version__)"
1.13.2
root@NAS:/usr/src/app# 

Starting the HA container with enabled tensorflow image_processing should now work.

Here is my example entry in configuration.yaml:

homeassistant:
#[...]
  whitelist_external_dirs:
    - /config/images/tensorflow
    - /config/images/cameras
    - /config/www/snapshots

#[...]
camera:
  - platform: aarlo
    ffmpeg_arguments: '-pred 1 -q:v 2'
  - platform: local_file
    name: "snapshot_front"
    file_path: /config/images/cameras/front_latest.jpg
  - platform: local_file
    name: "snapshot_garden"
    file_path: /config/images/cameras/garden_latest.jpg
  - platform: local_file
    name: "snapshot_garage"
    file_path: /config/images/cameras/garage_latest.jpg
  - platform: local_file
    name: "snapshot_gardenentrance"
    file_path: /config/images/cameras/gardenentrance_latest.jpg

image_processing:
  - platform: tensorflow
    scan_interval: 10000
    source:
      - entity_id: camera.snapshot_front
      - entity_id: camera.snapshot_garage
      - entity_id: camera.snapshot_garden
      - entity_id: camera.snapshot_gardenentrance
    model:
      graph: /config/tensorflow/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb
    file_out:
      - "/config/images/tensorflow/{{ camera_entity.split('.')[1] }}_latest.jpg"
      - "/config/www/snapshots/{{ camera_entity.split('.')[1] }}_latest.jpg"
      - "/config/images/tensorflow/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"

binary_sensor:
  - platform: aarlo
    monitored_conditions:
    - motion
    - sound
    - ding

stream:

ffmpeg:
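
The three file_out templates expand to one overwritten “latest” image per output directory plus a timestamped copy per camera. Mirrored in plain shell for one camera (entity name taken from the config above; the timestamp format matches the strftime pattern):

```shell
# What the Jinja file_out templates produce for camera.snapshot_front
camera_entity="camera.snapshot_front"
name="${camera_entity#*.}"                 # camera_entity.split('.')[1]
stamp="$(date +%Y%m%d_%H%M%S)"             # now().strftime('%Y%m%d_%H%M%S')
echo "/config/images/tensorflow/${name}_latest.jpg"
echo "/config/www/snapshots/${name}_latest.jpg"
echo "/config/images/tensorflow/${name}_${stamp}.jpg"
```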

As you can see, I decided to use a local_file camera for now. This gives me some more control over the generated snapshots, but I might change that sometime later.

Also, for now I am using the ssd_mobilenet_v2_coco_2018_03_29 model from the Tensorflow model zoo.

Scripting Tensorflow
In order to properly react to incoming motion alerts, I decided to implement the following scripts:

analyse_camerareportedmovement.yaml
handle_analytics_result_camerareportedmovement.yaml
notify_about_camerareportedmovement.yaml
fire_rerun_camerareportedmovement.yaml

analyse_camerareportedmovement:
    sequence:
####
# Get image from camera
###
      - data_template:
          entity_id: "{{camera_entity}}"
          filename: "/config/images/cameras/{{camera_entity | replace ('camera.aarlo_','') + '_latest.jpg'}}"
        service:
          camera.aarlo_request_snapshot_to_file
###
# Wait until screenshot is done
###
      - wait_template: "{{ not is_state(camera_entity, 'taking snapshot') }}"
        timeout: '00:00:10'
        continue_on_timeout: 'true' 
###
# Attempt to update local file
###
      - data_template:
          entity_id: "{{camera_entity | replace ('camera.aarlo_','camera.snapshot_')}}"
          file_path: "/config/images/cameras/{{camera_entity | replace ('camera.aarlo_','') + '_latest.jpg'}}"
        service:
          camera.local_file_update_file_path
###
# Run Image recognition
###
      - data_template:
          entity_id: "{{camera_entity | replace ('camera.aarlo_','image_processing.tensorflow_snapshot_')}}"
        service: image_processing.scan
###
# Deal with results
###
      - data_template:
          camera_entity: "{{camera_entity}}"
          rerun: >
            {% if rerun is defined %}
            {{ rerun | int + 1 }}
            {% else %}
            1
            {% endif %}
        service: script.handle_analytics_result_camerareportedmovement
handle_analytics_result_camerareportedmovement:
    sequence:
      - condition: or
        conditions:
           - condition: template
             value_template: "{{ (is_state ((camera_entity |replace ('camera.aarlo_','binary_sensor.aarlo_motion_')), 'on') and ((state_attr(camera_entity |replace ('camera.aarlo_','image_processing.tensorflow_snapshot_'),'total_matches')|int) < 1))}}"
           - condition: template
             value_template: "{{ ((state_attr(camera_entity |replace ('camera.aarlo_','image_processing.tensorflow_snapshot_'),'total_matches')|int) > 0) }}"
      - data_template:
          camera_entity: >
           {% if (is_state ((camera_entity |replace ('camera.aarlo_','binary_sensor.aarlo_motion_')), 'on') and ((state_attr(camera_entity |replace ('camera.aarlo_','image_processing.tensorflow_snapshot_'),'total_matches')|int) < 1)) %}
           {{ camera_entity }}
           {% elif ((state_attr(camera_entity |replace ('camera.aarlo_','image_processing.tensorflow_snapshot_'),'total_matches')|int) > 0) %}
           {{camera_entity |replace ('camera.aarlo_','image_processing.tensorflow_snapshot_')}}
           {% endif %}
          rerun: "{{rerun}}"
        service_template: >
           {%- if (is_state ((camera_entity |replace ('camera.aarlo_','binary_sensor.aarlo_motion_')), 'on') and ((state_attr(camera_entity |replace ('camera.aarlo_','image_processing.tensorflow_snapshot_'),'total_matches')|int) < 1)) -%}
           script.fire_rerun_camerareportedmovement
           {%- elif ((state_attr(camera_entity |replace ('camera.aarlo_','image_processing.tensorflow_snapshot_'),'total_matches')|int) > 0) -%}
           script.notify_about_camerareportedmovement
           {%- endif -%}
notify_about_camerareportedmovement:
    sequence:
      - data_template:
          message: "Camera {{ camera_entity | replace ('image_processing.tensorflow_snapshot_','') }} reported {{ state_attr(camera_entity,'matches').keys()|list|unique|list|join|title }}"
          data:
             attachment:
               content-type: jpeg
url: "https://<INTERNET ADDRESS OF HA INSTANCE>/local/snapshots/{{ camera_entity | replace ('image_processing.tensorflow_snapshot_','snapshot_') }}_latest.jpg"
             entity_id: "{{ camera_entity | replace ('image_processing.tensorflow_snapshot_','camera.aarlo_') }}"
        service: notify.all_ios
fire_rerun_camerareportedmovement:
    sequence:
      - event: event_rerun_analytics_camerareportedmovement
        event_data_template:
          camera_entity: "{{camera_entity}}"
          rerun: "{{rerun}}"
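
The scripts above derive every related entity id from the triggering camera entity purely by prefix substitution (the Jinja `replace` filters). The same mapping, mirrored in plain shell for one camera:

```shell
# One camera entity fans out to its motion sensor, its tensorflow image
# processor, and its local_file snapshot camera by swapping prefixes.
camera_entity="camera.aarlo_front"
suffix="${camera_entity#camera.aarlo_}"
motion_sensor="binary_sensor.aarlo_motion_${suffix}"
processor="image_processing.tensorflow_snapshot_${suffix}"
snapshot_camera="camera.snapshot_${suffix}"
echo "$motion_sensor $processor $snapshot_camera"
```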

In addition, two automations are required in automations.yaml.

- id: '1220000000000'
  alias: cameramovement
  trigger:
  - entity_id: binary_sensor.aarlo_motion_front
    from: 'off'
    platform: state
    to: 'on'
  - entity_id: binary_sensor.aarlo_motion_garden
    from: 'off'
    platform: state
    to: 'on'
  - entity_id: binary_sensor.aarlo_motion_gardenentrance
    from: 'off'
    platform: state
    to: 'on'
  - entity_id: binary_sensor.aarlo_motion_garage
    from: 'off'
    platform: state
    to: 'on'
  condition: []
  action:
  - data_template:
      camera_entity: "{{ trigger.entity_id | replace ('binary_sensor.aarlo_motion_','camera.aarlo_') }}"
    service: script.analyse_camerareportedmovement
- id: '1220000000001'
  alias: event_handler_rerun_analytics_camerareportedmovement
  trigger:
    - platform: event
      event_type: event_rerun_analytics_camerareportedmovement
  condition: []
  action:
    - data_template:
        camera_entity: "{{ trigger.event.data.camera_entity}}"
        rerun: "{{ trigger.event.data.rerun}}"
      service: script.analyse_camerareportedmovement

With this setup, I can take multiple snapshots (only in case tensorflow did not recognize anything) and rerun the detection.
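
Note that the rerun counter is passed along but never evaluated; the loop only stops once the motion sensor turns off again. If you want a hard cap, an extra condition in handle_analytics_result_camerareportedmovement could look like this sketch (the limit of 5 is an arbitrary value of mine, not part of my running setup):

```yaml
# hypothetical extra condition: give up after five reruns
- condition: template
  value_template: "{{ rerun | int < 5 }}"
```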

In case something is detected, I get a push message to all my iOS devices (those in the notification group notify.all_ios).

Runtime statistics
I have had this setup running for a couple of days now and am relatively happy so far. There are still some issues with the snapshots, but I feel that this will get better over time (or I will look into it :slight_smile:).

On average, tensorflow runs for 1-3s per image. This is rather fast and totally sufficient for my requirements (the video is stored in the cloud anyway for 7 days, for free, and additionally on a local USB stick).

– I will update this post as I progress. I hope this is somewhat helpful to others who struggle with the same idea :beers:


Thank you for your post. Can you tell me how to know when the image_processing.scan service is done, like with the camera snapshot service?