New Custom Component - Image Processing - Object Detection - DOODS

Yeah, it's the same issue as on GitHub; I reopened that ticket.

Tested with two USB 2.0 hubs, one with dual USB heads and one with a single head - I'm getting the same problem. I'll see if I can power the dual-header hub externally (iPad charger) and whether that makes any change.

I plugged mine into a USB 3.0 slot on a Raspberry Pi 4 and it's been going continuously for 9 hours now (about 15 detections a second) without hanging… Were you using Frigate or something else when you hit the issues?

Hmm, maybe I should set up my Pi 4 for only this. I've only experienced a few hangs after many days on Frigate - otherwise it runs fine with 5 cameras checking at 1 FPS. This is odd.

Just a small update: I tried a USB 2.0 hub with external power and an extended USB 3.0 cable; both ended up doing the same thing.

I'm thinking of taking the Docker container off my VM host and running it directly on the host OS instead, to see if that makes any difference.

But I might first take a look at that next week - I have a CCNP I need to focus on :frowning:

A follow-up to my previous post. I’ve now got everything working as I wanted and would like to share my config with others.

My goal was to emulate BlueIris functionality with HA only. I have 5 outdoor cameras (Dahua IPC-HDW5231R-Z). These have built-in motion detection capabilities that I’m leveraging. Here’s what happens in words:

  1. Camera detects motion; notifies HA via MQTT (see the sensor sketch after this list)
  2. HA verifies there has been no previous event from the camera in the past 60s (to reduce duplicates), then does the following:
    2.1 Sets a date_time object with the current date/time.
    2.2 Calls camera.record to record a 20s video clip and save it as cameraname_temp.mp4.
    2.3 Calls DOODS to do image processing to see if a person, car or truck is in the frame
  3. If DOODS detects one of the above items in the image:
    3.1 Sends me a notification with the analyzed picture: notify.ios.
    3.2 Waits 25s (to ensure camera.record is complete)
    3.3 Calls a shell command to rename cameraname_temp.mp4 to the date/time from the date_time object for permanent storage (e.g. driveway_20191117_2032.mp4).
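
Step 1 depends on the camera, so it isn't in the snippets further down. As a rough sketch (the topic and payloads here are assumptions - they depend on how your camera's events get onto MQTT), the binary sensor behind it could look like:

binary_sensor:
  - platform: mqtt
    name: dahua_driveway
    # hypothetical topic/payloads - match whatever the camera publishes
    state_topic: "cameras/driveway/motion"
    payload_on: "on"
    payload_off: "off"
    off_delay: 5  # fall back to 'off' a few seconds after the last event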

The rest is found in the config snippets below. My next goal is to create some sort of Lovelace interface to review the stored videos and manage them (play, delete).
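
Until then, a minimal placeholder (untested, and assuming the clips are also copied under config/www so HA serves them at /local) could be a markdown card with links to the clips:

type: markdown
title: Motion clips
content: >
  [driveway_20191117_2032](/local/motion/driveway_20191117_2032.mp4)
  # the /local/motion path is hypothetical - point it at wherever you copy the clips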

configuration.yaml

shell_command: 
  motion_video: "/opt/homeassistant/config/automation/motion_video.sh {{action}} {{camera}} {{time}}"

motion_video.sh

#!/bin/bash

# Quote the arguments so the tests don't break on empty or unusual values
if [ "$1" = 'mv' ]; then
  mv "/mountpoint/Homeassistant/$2_temp.mp4" "/mountpoint/Homeassistant/$2_$3.mp4"
elif [ "$1" = 'rm' ]; then
  rm "/mountpoint/Homeassistant/$2_temp.mp4"
fi

image_processing.yaml

- platform: doods
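  # scan_interval is in seconds; a value this large effectively disables
  # polling, so scans only run when the automation calls image_processing.scan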
  scan_interval: 10000
  url: "http://DOODSIP:8080"
  detector: tensorflow
  file_out:
    - "/opt/homeassistant/config/www/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    - "/mountpoint/Homeassistant/{{ camera_entity.split('.')[1] }}_{{ state_attr('input_datetime.lastmotion_'~camera_entity.split('.')[1], 'timestamp') | timestamp_custom('%Y%m%d_%H%M') }}.jpg" 
  source:
    - entity_id: camera.driveway
  confidence: 70
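  # minimum % score for a match; the labels below can override it per label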
  labels:
    - name: person
    - name: car
    - name: truck

automation_imageprocessing.yaml

- alias: "camera motion on driveway"
  trigger:
    platform: state
    entity_id: binary_sensor.dahua_driveway
    to: 'on'
  condition:
    - condition: template
      value_template: "{{ (as_timestamp(now()) - as_timestamp(states.automation.camera_motion_on_driveway.attributes.last_triggered) | int) > 60 }}"
  action:
    - service: input_datetime.set_datetime
      entity_id: input_datetime.lastmotion_driveway
      data_template:
        datetime: "{{ now().strftime('%Y-%m-%d %H:%M:%S') }}"
    - service: camera.record
      data:
        entity_id: camera.driveway
        filename: "/mountpoint/Homeassistant/driveway_temp.mp4"
        duration: 20
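        # optional: lookback (with the stream integration) can prepend a few buffered seconds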
    - service: image_processing.scan
      entity_id: image_processing.doods_driveway

- alias: "tensorflow driveway"
  trigger:
    platform: state
    entity_id: image_processing.doods_driveway
  condition:
    condition: or
    conditions:
      - condition: template
        value_template: "{{ 'car' in state_attr('image_processing.doods_driveway', 'summary') }}"
      - condition: template
        value_template: "{{ 'truck' in state_attr('image_processing.doods_driveway', 'summary') }}"
      - condition: template
        value_template: "{{ 'person' in state_attr('image_processing.doods_driveway', 'summary') }}"
  action:
    - service: notify.ios_MYIOSPLATFORM
      data:
        title: "Tensorflow"
        message: "driveway"
        data:
          attachment:
            content-type: jpeg
            url: "https://MYNABUCASATOKEN.ui.nabu.casa/local/tmp/driveway_latest.jpg"  
    - delay: "00:00:25"
    - service: shell_command.motion_video
      data_template:
        action: "mv"
        camera: "driveway"
        time: "{{ state_attr('input_datetime.lastmotion_driveway', 'timestamp') | timestamp_custom('%Y%m%d_%H%M') }}"

Good luck @nic0dk - I never finished my CCNP; I wish I had sometimes. I am still feeling like this might be a power issue. It has been pretty solid on my Raspberry Pi 4 thus far. Apparently this thing can draw right up to the limit.


That is a really awesome setup. I might do something similar now. I would love to offload the motion detection to the cameras. BlueIris does an okay job but it kills my server most of the time.

Really nice - I have motion sensors in my Hikvision cameras and could use them for recording. The only problem I see with recording from the camera is that it takes time to start the RTSP feed, so you miss some seconds there. I really like your approach - definitely going to copy that into my setup.

I have had this running for the last few days, seems to work well on my Pi 4. I have been tying it into my motion sensors to call the image processing scan. Haven’t noticed any real hit to CPU or delay in image processing time.

It occasionally picks up a false positive with a confidence in the high 40s. Is there a different recommended model for a bit better accuracy with fairly quick processing time?

Think I might try out just letting it scan every 20 or 30 seconds to see if there is a performance hit.

So I have been running this continuously at 5-15 detections per second on my Raspberry Pi 4 for the past 2 days with no hangs. I really think that for the guys having crashes, it's related to power. I did update DOODS so it should fail and quit if it hangs (and then restart, if you tell Docker to do so).
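
For example, something like this brings it back automatically after a crash (a sketch - adjust the image tag and port mapping to your setup):

docker run -d --restart unless-stopped -p 8080:8080 snowzach/doods:latest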

Basically you trade accuracy for CPU usage and time. I find the CNN Inception models to be the most accurate, but they take a really long time and tons of CPU.

Do you have a happy medium that you could recommend? Or one a bit more accurate than the default?

I don’t really… This is the model zoo that has a lot of pre-trained models. https://github.com/tensorflow/models/blob/3635527dc66cdfe7270e5b3086858db7307df8a3/research/object_detection/g3doc/detection_model_zoo.md

Most of these are for tensorflow. The default model is for tensorflow lite. You could try one of these and see if it works better for you.
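
If you try one, you'd point a detector entry in the DOODS config.yaml at the downloaded model - roughly like this sketch (from memory of the README, so treat the field names and paths as assumptions):

doods:
  detectors:
    - name: tensorflow
      type: tensorflow
      # hypothetical paths - point these at the model/labels you downloaded
      modelFile: models/faster_rcnn_inception_v2_coco.pb
      labelFile: models/coco_labels.txt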

Sorry guys, what's the problem here?
This is with a model I made in TensorFlow in an Anaconda env.

2019-11-23T23:48:51.037+0300	INFO	server/server.go:137	HTTP Request	{"status": 200, "took": 0.234897053, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "d5f40609-doods/eKonWmskr6-000001", "remote": "192.168.1.29:51922"}
2019-11-23 23:49:16.207353: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 6220800 exceeds 10% of system memory.
panic: nil-Operation. If the Output was created with a Scope object, see Scope.Err() for details.

goroutine 58 [running]:
github.com/tensorflow/tensorflow/tensorflow/go.Output.c(...)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/operation.go:130
github.com/tensorflow/tensorflow/tensorflow/go.newCRunArgs(0x40057333d8, 0x4005733478, 0x4, 0x4, 0x0, 0x0, 0x0, 0x4005733670)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/session.go:369 +0x4d0
github.com/tensorflow/tensorflow/tensorflow/go.(*Session).Run(0x40001b4020, 0x40057333d8, 0x4005733478, 0x4, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/session.go:143 +0x1c4
github.com/snowzach/doods/detector/tensorflow.(*detector).Detect(0x40001f4500, 0xd102e0, 0x4005726d50, 0x400571a340, 0x0, 0x0, 0x0)
	/build/detector/tensorflow/tensorflow.go:187 +0x8b4
github.com/snowzach/doods/detector.(*Mux).Detect(0x40001b4880, 0xd102e0, 0x4005726d50, 0x400571a340, 0x40001b4880, 0xb3af20, 0x400571a301)
	/build/detector/detector.go:120 +0xb0
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler.func1(0xd102e0, 0x4005726d50, 0xbb0980, 0x400571a340, 0x13, 0xd102e0, 0x4005726d50, 0x0)
	/build/odrpc/rpc.pb.go:855 +0x7c
github.com/grpc-ecosystem/go-grpc-middleware/auth.UnaryServerInterceptor.func1(0xd102e0, 0x4005726d50, 0xbb0980, 0x400571a340, 0x400571e4c0, 0x400571e4e0, 0xb5c1a0, 0x400571e500, 0x4005657a00, 0x400571e4c0)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/auth/auth.go:47 +0xd8
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1(0xd102e0, 0x4005726d50, 0xbb0980, 0x400571a340, 0x40001be500, 0x0, 0x40056579d8, 0x9901c4)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x58
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1(0xd102e0, 0x4005726d50, 0xbb0980, 0x400571a340, 0x400571e4c0, 0x400571e4e0, 0xb72e40, 0x4005726d50, 0x60, 0xb8d5c0)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34 +0xbc
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler(0xb3af20, 0x40001b4880, 0xd102e0, 0x4005726d50, 0x400574e1e0, 0x4000125800, 0xd102e0, 0x4005726d50, 0x40062aa000, 0xf2c9)
	/build/odrpc/rpc.pb.go:857 +0x128
google.golang.org/grpc.(*Server).processUnaryRPC(0x4005644300, 0xd17600, 0x40001ee120, 0x40001be500, 0x40001ae780, 0x11ea4d8, 0x0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1007 +0x380
google.golang.org/grpc.(*Server).handleStream(0x4005644300, 0xd17600, 0x40001ee120, 0x40001be500, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1287 +0xb0c
google.golang.org/grpc.(*Server).serveStreams.func1.1(0x400012e280, 0x4005644300, 0xd17600, 0x40001ee120, 0x40001be500)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:722 +0x94
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/go/pkg/mod/google.golang.org/[email protected]/server.go:720 +0x8c

This appears to be a problem with your model. See here: go - Tensorflow on Golang Model sessionn run error : nil-Operation. If the Output was created with a Scope object, see Scope.Err() for details - Stack Overflow


I'm confused about how the Hassio component works and how to use it. I have my camera set up with an RTSP stream and have set up the DOODS config as below (after installing the Hassio component).

camera:
  - platform: generic
    name: 3D Printer
    still_image_url: https://assets.pcmag.com/media/images/501867-wyze-cam-v2.jpg
    stream_source: rtsp://user:[email protected]/live

image_processing:
  - platform: doods
    scan_interval: 1000
    url: "http://localhost:8080"
    detector: default
    file_out:
      - "/config/www/doods/{{ camera_entity.split('.')[1] }}_latest.jpg"
    source:
      - entity_id: camera.3d_printer
    confidence: 10
    labels:
      - name: person
        confidence: 10

Is the scan_interval in milliseconds? I imagine it checks the RTSP feed every 1 second, performs a scan on that image, and saves the output.

Is this how it works, or do you have to set up an automation and call the image_processing service?

Because I can’t seem to get any triggers on my state.

Cheers for the help.

I’m running Hassio on Docker.


1000 = 1000 seconds.

No need to call the service. It checks the image and stores it to the locations specified, based on scan_interval.
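
If you'd rather scan on demand (e.g. from a motion sensor) instead of polling, set scan_interval very high and call the service from an automation - a minimal sketch (the motion sensor here is hypothetical; the DOODS entity id follows the camera name from the config above):

- alias: "scan 3d printer cam on motion"
  trigger:
    platform: state
    entity_id: binary_sensor.printer_room_motion  # hypothetical sensor
    to: 'on'
  action:
    - service: image_processing.scan
      entity_id: image_processing.doods_3d_printer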

Hi all, this project is awesome, but I'm having trouble getting it working. Apologies if the solution is obvious, but I thought I would lay it out - I'm new to image recognition, and it sounds like I have a similar setup to @CyrisXD, so this might be helpful for people in the future.

Setup: using a Wyze Pan running the Dafang hack (RTSP); it streams perfectly on Hassio (0.102.2) and in Lovelace:

  - platform: generic
    name: Basement
    username: [xxx]
    password: [xxx]
    authentication: basic
    still_image_url: https://192.168.1.11/cgi-bin/currentpic.cgi
    stream_source: rtsp://192.168.1.11:8554/unicast
    verify_ssl: false
    scan_interval: 5

  1. Installed the DOODS Hassio add-on
  2. Copied the DOODS Hassio custom component into 'custom_components/doods'
  • Note: My Hassio installation is at 192.168.1.10

image_processing:
  - platform: doods
    scan_interval: 1000
    url: "http://192.168.1.10:8080"
    detector: default
    file_out:
      - "/config/www/img_proc/{{ camera_entity.split('.')[1] }}_latest.jpg"
    source:
      - entity_id: camera.basement
    confidence: 50
    labels:
      - name: person
        confidence: 40
      - car
      - truck

I see the DOODS entity in HA; however, it never shows any match or activity. No files have been created in the specified folder either.

  • I noticed in the configuration there are entries for model files in the ‘opt’ directory. Am I supposed to download and manually install these from somewhere?

Also, below are examples of the entries I am getting in the logs:

2019-11-28T10:30:59.706-0700 INFO tflite/detector.go:326 Detection Complete {"package": "detector.tflite", "name": "default", "id": "", "duration": 0.347518364, "detections": 0, "device": null}

2019-11-28T10:30:59.707-0700 INFO server/server.go:138 HTTP Request {"status": 200, "took": 0.386121869, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "d5f40609-doods/dUNpejHk30-000064", "remote": "192.168.1.10:44086"}

Any help greatly appreciated

@snowzach mine somehow keeps missing the subjects in the image…

example:
front_door_20191130_081210
front_door_20191129_153204

Current config:

 - platform: doods
   scan_interval: 10000
   url: "http://192.168.1.207:8100"
   detector: default
   file_out:
    - "/home/homeassistant/.homeassistant/www/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    - "/home/homeassistant/.homeassistant/www/tmp/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
   source:
    - entity_id: camera.front_door
   confidence: 50
   labels:
    - name: person
      confidence: 30
      area:
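        # region of the frame (fractions 0-1) a person match must fall within;
        # unset edges default to the frame edge (left: 0, bottom: 1)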
        top: 0.1
        right: 0.9
    - name: car
    - name: truck
    - name: bicycle
    - name: backpack
    - name: dog
    - name: cat

Any ideas?