New Custom Component - Image Processing - Object Detection - DOODS

DOODS - Dedicated Outside Object Detection Service

I’ve created a service called DOODS that allows you to upload images and do object detection on them. This was born out of my frustration in getting tensorflow running on my various devices along with Home Assistant.

You can stand up a DOODS instance using Docker and remotely send it images to process, and it will send you back the detected objects in the image. It currently supports tensorflow and tensorflow lite models, including support for the Coral EdgeTPU hardware accelerator.

I have also written a hass component for it that is configured much like the tensorflow component, and the output result is exactly the same.

Assuming people like it, I’d love for this to get included in the main distribution. This might be a solution for the hass.io people to pull the massive tensorflow component out of the docker image.

The docker hub for the server is here: https://hub.docker.com/r/snowzach/doods
The source for the server is here: https://github.com/snowzach/doods
The hass custom component is here: https://github.com/snowzach/hassdoods
NEW!: Hass.io Repo: https://github.com/snowzach/hassio-addons.git

You can start the server with:

docker run -it -p 8080:8080 snowzach/doods:latest

Right now it only supports 64-bit x86; an arm/arm64 image is in the works.

It includes the basic mobilenet model by default and will work out of the box. There is also a fetch_models.sh script (https://raw.githubusercontent.com/snowzach/doods/master/fetch_models.sh) in the source repo that will download a handful of models, including edgetpu support, and write out an example.yaml configuration file.

You can run that with:

docker run -it -v $PWD/models:/opt/doods/models -v $PWD/example.yaml:/opt/doods/config.yaml -p 8080:8080 snowzach/doods:latest
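If you want to test the server outside of Home Assistant, you can POST a base64-encoded image to its /detect endpoint. Here is a minimal Python sketch; the field names (detector_name, data, detect) are my assumptions from the DOODS README, so double-check them against the server source:

```python
import base64

def build_detect_request(image_bytes, detector="default", min_confidence=50):
    # NOTE: these field names are my reading of the DOODS README --
    # double-check them against the server source if a request fails.
    return {
        "detector_name": detector,
        "data": base64.b64encode(image_bytes).decode("ascii"),
        "detect": {"*": min_confidence},  # minimum confidence for any label
    }

# To actually send it (host/port are whatever you mapped with docker):
#   import json, urllib.request
#   body = json.dumps(build_detect_request(open("test.jpg", "rb").read())).encode()
#   req = urllib.request.Request("http://localhost:8080/detect", data=body,
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```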

You can then install the hass component in your config directory under custom_components/doods. An example config looks similar to the tensorflow component's:

image_processing:
  - platform: doods
    scan_interval: 1000
    url: "http://<my docker host>:8080"
    detector: tensorflow
    file_out:
      - "/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    source:
      - entity_id: camera.front_yard
    confidence: 50
    labels:
      - name: person
        confidence: 40
        area:
          # Exclude top 10% of image
          top: 0.1
          # Exclude right 15% of image
          right: 0.85
      - car
      - truck
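To make the labels filtering concrete, here is an illustrative Python sketch of the behavior described above (not the component's actual code): a detection is kept only when its confidence meets that label's threshold and its box sits entirely inside the allowed area. The rules dict mirrors the YAML example.

```python
def in_area(box, area):
    """box and area are (top, left, bottom, right) in 0-1 image coordinates.
    A detection counts only if it lies entirely inside the allowed area."""
    top, left, bottom, right = box
    a_top, a_left, a_bottom, a_right = area
    return top >= a_top and left >= a_left and bottom <= a_bottom and right <= a_right

def keep_detection(label, confidence, box, label_rules, default_confidence=50):
    rule = label_rules.get(label)
    if rule is None:
        return False  # label not tracked at all
    min_conf = rule.get("confidence", default_confidence)
    area = rule.get("area", (0.0, 0.0, 1.0, 1.0))  # default: whole frame
    return confidence >= min_conf and in_area(box, area)

# Rules matching the YAML above: person needs 40% confidence and must sit
# below the top 10% of the frame and left of the rightmost 15%.
rules = {
    "person": {"confidence": 40, "area": (0.1, 0.0, 1.0, 0.85)},
    "car": {},
    "truck": {},
}
```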

Update June 2020
You can now clone the doods server; there is a base Dockerfile in there that will optimize for whatever CPU you happen to be running on.
It also includes the inception model by default, via the detector named tensorflow.


Thanks, will give it a try

Cool!

What is the advantage of this compared to Deepstack component by @robmarkcole?

It looks pretty similar. The back end of DOODS supports tensorflow and tensorflow lite models, so it's a little more customizable, I think. I was also going to add darknet eventually. You could, in theory, train your own models. It should also be capable of running on a Pi once I sort out the arm docker build process for the back end.


Thanks for this, super easy, I love it. Quick question though: I get an error when using the example config about 'labels'. I've also tried using 'categories' like the tensorflow config, but that gives me an error as well. If I leave the labels config off it works, it's just calling out everything… umbrella, box, person. Below is the error and config.

2019-08-11 13:11:01 ERROR (MainThread) [homeassistant.config] Invalid config for [image_processing.doods]: [label] is an invalid option for [image_processing.doods]. Check: image_processing.doods->labels->0->label. (See /config/configuration.yaml, line 147). Please check the docs at https://home-assistant.io/components/image_processing.doods/

image_processing:
  - platform: doods
    scan_interval: 1000
    url: "http://127.0.0.1:8080"
    detector: default
    file_out:
      - "/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    source:
      - entity_id: camera.front_yard
    confidence: 50
    labels:
      - label: person
        confidence: 40
        area:
          # Exclude top 10% of image
          top: 0.1
          # Exclude right 15% of image
          right: 0.85
      - car
      - truck

this works fine but no filter:

image_processing:
  - platform: doods
    scan_interval: 1000
    url: "http://127.0.0.1:8080"
    detector: default
    file_out:
      - "/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    source:
      - entity_id: camera.driveway
    confidence: 30


I'm running Home Assistant on an Ubuntu VM… beefy server

Hi @Darbos, thanks for trying it out. The key under labels is name instead of label. Sorry about that; I updated the original post and the documentation on GitHub.

Ah, I thought I tried that too. Perfect man, thanks for the help, it's working!!!

I’ve got an NVIDIA Jetson TX2 that I wanted to put to some use around the house. I run HassIO on my server, so I was thinking of a similar solution. What would I need to do to adapt this project to the Jetson? Tensorflow is already available, and I’m installing that now. I’m assuming that I’d need to recompile the server code to run it on the bare hardware rather than in a docker container…

Anything else?

Hey @mbardeen, DOODS is really designed to run in a Docker container. You might be able to get it to run locally if you have the C Tensorflow library (look for a libtensorflow_c.so file). You would also need to install Tensorflow Lite, or disable it in DOODS by commenting it out in detector/detector.go. It might work if you do that.

The other angle you can take, albeit harder, is to install Docker on it and try adding the Jetson dependencies to the Dockerfile to see if it will build inside the container. I plan on starting an aarch64 build in the next day or so. That would be an even better starting point.

Good luck!

Any plan to add it as a hass.io integration or to HACS?

Hi,

I love this project! I've been looking for something like this to let me run the image processing locally but separately from my actual Home Assistant setup.

I installed everything but the entities created show 0 for everything even after scans.


  - platform: doods
    scan_interval: 10000
    url: "http://192.168.1.15:8080"
    detector: default
    file_out:
      - "/home/homeassistant/.homeassistant/www/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    source:
      - entity_id: camera.driveway
      - entity_id: camera.frontdoor
    confidence: 50
    labels:
      - name: person
        confidence: 40
      - car

I see the following logs in the docker container, so I think it is scanning correctly but not reflecting in HA:

2019-08-24T23:18:53.583Z INFO server/server.go:88 API Request {"status": 200, "took": 1.440237517, "remote": "192.168.1.183:49862", "request": "/detect", "method": "POST", "package": "server.request", "request-id": "0216c0d1396a/a4QOp6VVZb-007548"}
2019-08-24T23:18:53.680Z INFO tflite/detector.go:266 Detection Complete {"package": "detector.tflite", "id": "", "duration": 1.348711977, "detections": 1}
2019-08-24T23:18:53.681Z INFO server/server.go:88 API Request {"status": 200, "took": 1.447221188, "remote": "192.168.1.183:49864", "request": "/detect", "method": "POST", "package": "server.request", "request-id": "0216c0d1396a/a4QOp6VVZb-007549"}

I hope you can help me with fixing this issue.

Hey @andreasfelder, start the doods container with the environment variable LOGGER_LEVEL=debug and make sure it's detecting what you expect. It looks like it is detecting, but maybe the confidence is not high enough. I just checked mine and it had a detection, and the state was updated properly.
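For example, assuming the same run command as earlier in the thread, the environment variable can be passed with docker's -e flag:

```shell
docker run -it -e LOGGER_LEVEL=debug -p 8080:8080 snowzach/doods:latest
```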

I’d like to add it to Hass and/or HACS but I’ve not done it before. I was going to work on merging it to home assistant proper here in the next week or so.


@snowzach This looks great. I am assuming one of the benefits of using DOODS is being able to offload image processing to another (more powerful) device. Also, can you provide some instructions (or maybe even a service) to train our own models?


I hope to eventually when I get everything ironed out.


What's the file_out parameter for, exactly?

file_out writes image files with boxes around the detected objects. Same as the tensorflow component.

Do the numbers that appear in states have significance?

EDIT
Number of matches. Got it!!

@tmjpugh the top, bottom, left, right numbers indicate the position of the detection. Each is a number from 0-1 that can be multiplied by the image height or width to get the pixel position in the image. The confidence number is how sure (0-100) the detector is that the detection is accurate.
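For example, a quick sketch of turning those 0-1 numbers into pixel positions:

```python
def to_pixels(box, width, height):
    """Convert a 0-1 (top, left, bottom, right) box to pixel coordinates."""
    top, left, bottom, right = box
    return (round(top * height), round(left * width),
            round(bottom * height), round(right * width))

# A detection covering the lower-right quarter of a 1920x1080 frame:
print(to_pixels((0.5, 0.5, 1.0, 1.0), 1920, 1080))  # (540, 960, 1080, 1920)
```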

Got this working last night, very neat. Trying to figure out how to set some more specific entries in the logbook for what it detects.

If I have it set for cars and person, how can I set triggers and log entries to tell me specifically what it is detecting? The pieces are probably already there; I'm just a bit of an HA config noob.

Also, could I use more than one file_out here? I've got it saving with a timestamp in the file name now.

I'd like one with a timestamp in one folder and one with a _latest suffix in another.