TensorFlow step-by-step guide

Damn. I don't know what I did wrong…
I used the gist to install the first part, but I'm still getting:

Error while setting up platform tensorflow
Traceback (most recent call last):
File "/srv/homeassistant/lib/python3.6/site-packages/homeassistant/helpers/entity_platform.py", line 128, in _async_setup_platform
SLOW_SETUP_MAX_WAIT, loop=hass.loop)
File "/usr/lib/python3.6/asyncio/tasks.py", line 358, in wait_for
return fut.result()
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/srv/homeassistant/lib/python3.6/site-packages/homeassistant/components/image_processing/tensorflow.py", line 124, in setup_platform
detection_graph = tf.Graph()
AttributeError: module 'tensorflow' has no attribute 'Graph'

I needed to run pip3 install opencv-python==3.2.0.6 inside the venv to get rid of the "No OpenCV library found." error…

Some more info; the cause could be one of the following:
I ran the script as root, outside the venv.
It's Ubuntu 18.04 on a 2011 MacBook; I tried to get a Hassbian-like install.

AAAH, the wrong TensorFlow version was the problem. If you're stuck there too,
go into the venv and run $ pip3 install tensorflow==1.11.0. skalavala saved my ass, as so often :smiley:

I am trying to install TensorFlow on a NUC (NUC7PJYH) running Ubuntu 18.04, but got stuck on "The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine."

I used @cooloo's method and installed tensorflow==1.5.0, and it worked. Thank you!

Actually, when looking further, this is the reason (from the TensorFlow page); on Linux you can check whether your CPU supports AVX with grep avx /proc/cpuinfo:

  • Starting with TensorFlow 1.6, binaries use AVX instructions which may not run on older CPUs.

I have been having an issue where it runs fine for a while, then just says it can't get an image… The disk is not full, the CPU is plenty powerful, and there is plenty of memory. I get no other errors…

Anyone else?

Is there a limit to the number of times you can call tensorflow?

There is no artificial limit; it must be an error somewhere.

I'm running in a Python venv on an older i3 with 4 GB RAM on an SSD. I followed the guide and feel like everything installed fine, with no major issues. I now have an entity image_processing.tensorflow_front_door which is pointing to my front door camera. I don't get any errors, but when I manually initiate a scan using the services menu, nothing happens. The entity timestamp does not change, no image, nothing. Any thoughts on where to start troubleshooting?

2019-02-16

my config

image_processing:
 - platform: tensorflow
   scan_interval: 20000
   source:
     - entity_id: camera.front_door
   model:
     graph: /home/homeassistant/.homeassistant/tensorflow/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb
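
For reference, the manual scan I'm initiating amounts to the image_processing.scan service call below, wrapped here in a test script (the script name is just illustrative) so it can be fired from anywhere:

script:
  test_tensorflow_scan:
    sequence:
      # Same call the services menu makes when scanning manually
      - service: image_processing.scan
        entity_id: image_processing.tensorflow_front_door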

OK, it might just be that there was not much in my camera picture at the moment. I added another camera and did get back objects. But there's no image for me to see what it thinks it sees. How do you get that?

Edit:

OK, so I added two cameras. They are different camera models, but both show up in HA just fine. One of them just does not get processed by TensorFlow. No errors; nothing happens.

I updated my config

image_processing:
 - platform: tensorflow
   scan_interval: 20000
   source:
     - entity_id: camera.front_door
     - entity_id: camera.back_gate
   file_out:
      - "/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
      - "/tmp/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
   model:
     graph: /home/homeassistant/.homeassistant/tensorflow/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb

When I trigger a scan for the front door, I get nothing in /tmp and no processing. The other camera does process and puts an image there.

What am I missing? Is there an image/camera requirement?

Here is my camera

- platform: generic
  still_image_url: http://192.168.1.xxx/ISAPI/Streaming/channels/101/picture
  name: Front Door
  username: admin
  password: !secret camera_doorbell_password
  authentication: basic

OK, I'm sure I have a weird typo or am missing something very simple. Just to rule out my front door camera, I created a local file camera using a snapshot of my front door. It also fails to process; nothing happens.

I guess I have a question about the order of processing. If the image is analyzed and nothing is there, will it still output a picture, but with no boxes?

Edit: Never mind, there really was nothing to process. It's cold out, so I did not want to go out and stand in front of the camera, but I did, and it did detect a person.

So… if there is nothing to process, a snapshot will not be taken.

Edit2: more questions

A couple of quick questions about TensorFlow and configuration.

So, if I have multiple cameras but I want to exclude an area from one of the cameras and not the other, how can I do that? Here is my configuration:

image_processing:
 - platform: tensorflow
   scan_interval: 20000
   source:
     - entity_id: camera.front_door
     - entity_id: camera.back_gate
   file_out:
      - "/home/homeassistant/.homeassistant/www/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
      - "/home/homeassistant/.homeassistant/www/tmp/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
   model:
     graph: /home/homeassistant/.homeassistant/tensorflow/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
     area:
       # Exclude top 13% of image
       top: 0.13
       # Exclude left 30% of image
       left: 0.30
       # Exclude right 15% of image
       right: 0.85

I really just want to exclude those areas on one camera; a sketch of what I think that would look like is below.
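
(Untested, so treat this as a sketch: two separate tensorflow platform entries, with the area applied only to the front door camera.)

image_processing:
 - platform: tensorflow
   scan_interval: 20000
   source:
     - entity_id: camera.front_door
   model:
     graph: /home/homeassistant/.homeassistant/tensorflow/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
     area:
       # Exclude top 13% of image, on this camera only
       top: 0.13
 - platform: tensorflow
   scan_interval: 20000
   source:
     - entity_id: camera.back_gate
   model:
     # Same graph, but no area restriction for the back gate
     graph: /home/homeassistant/.homeassistant/tensorflow/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb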

Also, I was curious about the categories: is there a list somewhere, beyond person, car, and truck?
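
From what I can tell, these models are trained on the COCO dataset, so the available categories should be the entries in the COCO label map used by the object detection setup (person, bicycle, car, motorcycle, bus, truck, dog, cat, and so on). Limiting detection to a few of them would presumably look like:

   model:
     graph: /home/homeassistant/.homeassistant/tensorflow/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
     # Only report matches for these COCO categories
     categories:
       - person
       - car
       - truck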

Edit3:

I've been using this for a couple of weeks now and have really enjoyed it. For me the most useful part has been better alerts from my cameras. I used to get a good number of "false" alerts from motion, or alerts for things I did not care about. Now I get alerts for things that I actually want alerts for, and it's been nearly flawless. I trigger the scan based on motion, but I trigger my alerts based on the results. Here are a few sensors I've created with some help, followed by a sketch of the motion-triggered scan:

     objects_in_driveway:
       friendly_name: Objects in driveway
       value_template: "{{ states('image_processing.tensorflow_front_door') | float >= 1 }}"
       entity_id: image_processing.tensorflow_front_door

     person_in_driveway:
       friendly_name: Person in Driveway
       value_template: >
         {% set m = state_attr('image_processing.tensorflow_front_door', 'matches') %}
         {{ m.person is defined and (m.person[0].score) | float >= 80 }}
       entity_id: image_processing.tensorflow_front_door

     vehicle_in_driveway:
       friendly_name: Vehicle in Driveway
       value_template: >
         {% set m = state_attr('image_processing.tensorflow_front_door', 'matches') %}
         {{ (m.truck is defined and (m.truck[0].score) | float >= 80) or (m.car is defined and (m.car[0].score) | float >= 80) or (m.bicycle is defined and (m.bicycle[0].score) | float >= 80) }}
       entity_id: image_processing.tensorflow_front_door
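
The scan trigger itself is just a state automation along these lines (the motion sensor name here is illustrative; use whatever your camera exposes):

automation:
  - alias: "Scan front door on motion"
    trigger:
      platform: state
      # Hypothetical motion sensor entity
      entity_id: binary_sensor.front_door_motion
      to: 'on'
    action:
      service: image_processing.scan
      entity_id: image_processing.tensorflow_front_door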

OK, I got this going, but it keeps saying it sees a person (when there is none).

it shows…

{'person': [{'box': [0.3856241703033447, 0.47053608298301697, 0.9265235662460327, 0.9915212392807007], 'score': 85.14859676361084}]}

Any way to know where in the picture it is seeing this, so I know where to limit it? From what I understand, this is saying it is 85% sure there is a person in the picture. (If the box follows the usual TensorFlow convention of relative [y_min, x_min, y_max, x_max] coordinates, this particular match would sit in the lower-right part of the frame.)

Never mind, I got file_out: working and it shows the pic.

Hi Jason, what was your final code for this? It looks really good!

E.g. the sensors, automations, etc.

I don't have much set up… follow the guide to get a complete setup.

image_processing:
 - platform: tensorflow
   scan_interval: 1
   source:
     - entity_id: camera.FrontDoorCam
   model:
     graph: /home/homeassistant/c9workspace/homeassistant/tensorflow/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb      
     categories:
       - category: person 
         area:
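           # Exclude the right 15% of the image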
           right: 0.85  
   file_out:
     - "/home/homeassistant/.homeassistant/tensorflow/{{ camera_entity.split('.')[1] }}_latest.jpg"
     - "/home/homeassistant/.homeassistant/tensorflow/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
   

And an automation to send a pic when it sees someone:

  - alias: "SendDoorBellCameraPic"
    initial_state: True
    trigger:
      platform: state
      entity_id: image_processing.tensorflow_frontdoorcam
      from: '0'
    action:
      - service: notify.xxxxxxxxha_bot
        data:
          title: Send an image
          message: "Someone is at the door!"
          data:
            photo:
              - file: '/home/homeassistant/.homeassistant/tensorflow/frontdoorcam_latest.jpg'

So far it works great.


Does anyone have suggestions on which model to use? I feel like ssd_mobilenet_v2_coco often misses people. I've lowered the confidence to 10%, but it still misses people, even though my motion snapshot is triggering, so the image processing is definitely running. Also, do I need to reboot after changing the frozen_inference_graph.pb file, or will it use the new one on the next image scan? Any suggestions would be appreciated.

I think a restart of HA, but not necessarily a reboot, is required.

So I will be migrating my Hass.io setup to a NUC. Will there be some way of running this on a NUC? I will have the processing power, but there seems to be no good way of running TensorFlow with Hass.io.

Just me, and everyone has their own thoughts on the best install, but I have an older computer equivalent to a NUC. I'm just running straight-up Ubuntu Server, no UI, and HA in a Python venv. TensorFlow works great following the instructions in this thread. Honestly, everything works great; you just lose out on the add-on capabilities.


Currently running the latest (0.91.0) version of HA, and I think I have successfully integrated the TensorFlow component. I have 2 cameras, each with built-in motion detection that I have mapped to an HA binary_sensor. What I would like to happen is the following:

  1. camera detects motion, corresponding binary_sensor turns ‘on’
  2. automation triggers from #1 and gets TensorFlow to analyze image for person
  3. if person is detected, save image and send iOS notification.

I must be doing something wrong… I would appreciate another set of eyes:

- platform: tensorflow
  scan_interval: 10000
  source:
    - entity_id: camera.driveway
    - entity_id: camera.frontdoor
  file_out:
    - "/opt/homeassistant/config/www/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
  model:
    graph: /opt/homeassistant/config/tensorflow/frozen_inference_graph.pb
    categories:
      - category: person

- alias: "camera motion on driveway"
  trigger:
    platform: state
    entity_id: binary_sensor.dahua_driveway
    to: 'on'
  action:
    service: image_processing.scan
    entity_id: image_processing.tensorflow_driveway

- alias: "camera motion on frontdoor"
  trigger:
    platform: state
    entity_id: binary_sensor.dahua_frontdoor
    to: 'on'
  action:
    service: image_processing.scan
    entity_id: image_processing.tensorflow_frontdoor
- alias: "tensorflow driveway"
  initial_state: True
  trigger:
    platform: state
    entity_id: image_processing.tensorflow_driveway
    from: '0'
  action:
    service: notify.ios_iPhone
    data:
      title: "Tensorflow"
      message: "driveway"
      data:
        attachment:
          content-type: jpeg
          url: "https://MYURL.ui.nabu.casa/local/tmp/driveway_latest.jpg"  

- alias: "tensorflow frontdoor"
  initial_state: True
  trigger:
    platform: state
    entity_id: image_processing.tensorflow_frontdoor
    from: '0'
  action:
    service: notify.ios_iPhone
    data:
      title: "Tensorflow"
      message: "frontdoor"
      data:
        attachment:
          content-type: jpeg
          url: "https://MYURL.ui.nabu.casa/local/tmp/frontdoor_latest.jpg"  

You don't say what problem you have with your configuration.
But in any case, change service: notify.ios_iPhone to service: notify.ios_iphone; entity and service names in HA are lowercase.

Is there a way to set the threshold for detection? I'd like to set it somewhere between 90 and 95%. Right now I regularly get notified that a stump or a plant is a person, and it's in the ~85% range.

I use templates to create sensors for the things I want to be notified about and set the threshold there.
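
For example, something like this (the entity name and the 90 threshold are just illustrative), which only flips to on when a person match scores at least 90:

binary_sensor:
  - platform: template
    sensors:
      person_in_driveway_high_confidence:
        friendly_name: Person in driveway (high confidence)
        entity_id: image_processing.tensorflow_driveway
        value_template: >
          {% set m = state_attr('image_processing.tensorflow_driveway', 'matches') %}
          {{ m.person is defined and (m.person[0].score) | float >= 90 }}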

Unfortunately, I don't think that will work for me. I have TensorFlow querying my four cameras directly every five seconds, and if something is detected, a new image file is created. Node-RED detects the new image and sends it to my family via Pushbullet.

The problem I've run into going the route you're proposing is keeping the images in sync with the detections. As calls to TensorFlow from Node-RED are asynchronous, and Node-RED runs flows in parallel, there's no way to tie sensor results to specific images.

Trying to leverage motion detection is not a viable option with my cameras, as during storms or heavy winds they detect motion continually. FFmpeg motion detection has the same issue and is very expensive from a CPU perspective.

I found it's actually cheaper, CPU-wise, to just run TensorFlow every 5 seconds and let it make the determination irrespective of motion.
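
Concretely, that just means a short scan_interval instead of a huge one; a minimal sketch for one camera (paths borrowed from earlier in the thread):

image_processing:
  - platform: tensorflow
    # Poll every 5 seconds instead of waiting for a motion-triggered image_processing.scan call
    scan_interval: 5
    source:
      - entity_id: camera.driveway
    file_out:
      - "/opt/homeassistant/config/www/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    model:
      graph: /opt/homeassistant/config/tensorflow/frozen_inference_graph.pb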