DOODS - Dedicated Outside Object Detection Service
I’ve created a service called DOODS that lets you upload images and run object detection on them. It was born out of my frustration getting tensorflow running on my various devices alongside Home Assistant.
You can stand up a DOODS instance using Docker and remotely send it images to process; it will send back the objects detected in each image. It currently supports tensorflow and tensorflow lite models, including the Coral EdgeTPU hardware accelerator.
I have also written a hass component for it; its configuration closely mirrors the tensorflow component, and the output is exactly the same.
Assuming people like it, I’d love for this to get included in the main distribution. It might also be a solution for the hass.io people, letting the massive tensorflow component be pulled out of the docker image.
The docker hub for the server is here: https://hub.docker.com/r/snowzach/doods
The source for the server is here: https://github.com/snowzach/doods
The hass custom component is here: https://github.com/snowzach/hassdoods
NEW!: Hass.io Repo: https://github.com/snowzach/hassio-addons.git
You can start the server with:
docker run -it -p 8080:8080 snowzach/doods:latest
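Once the container is up, you send it images over HTTP. As a rough sketch of what a request body looks like (the field names "detector_name", "data", and "detect", and the /detect endpoint, are my understanding of the DOODS API — check the server README if a call fails):

```python
import base64
import json

def build_detect_request(image_bytes, detector="default", min_confidence=50):
    """Build the JSON body for a DOODS detect call.

    Field names follow the DOODS v1 API as I understand it; treat them
    as assumptions rather than a guaranteed contract.
    """
    return {
        "detector_name": detector,
        # Image bytes travel base64-encoded inside the JSON body.
        "data": base64.b64encode(image_bytes).decode("ascii"),
        # "*" applies the confidence threshold to every label.
        "detect": {"*": min_confidence},
    }

# Build a request for some image bytes; you would POST this as JSON to
# http://<docker host>:8080/detect (endpoint path assumed).
payload = build_detect_request(b"...image bytes here...", detector="default")
body = json.dumps(payload)
```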
Right now it only supports 64-bit x86; arm/arm64 images are in the works.
It includes the basic mobilenet model by default and will work out of the box. There is also a fetch_models.sh script (https://raw.githubusercontent.com/snowzach/doods/master/fetch_models.sh) in the source repo that will download a handful of models, including edgetpu support, and spit out an example.yaml configuration file.
You can run that with:
docker run -it -v $PWD/models:/opt/doods/models -v $PWD/example.yaml:/opt/doods/config.yaml -p 8080:8080 snowzach/doods:latest
You can then install the hass component in your config directory under custom_components/doods.
An example config file looks similar to the tensorflow component's:
image_processing:
  - platform: doods
    scan_interval: 1000
    url: "http://<my docker host>:8080"
    detector: tensorflow
    file_out:
      - "/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    source:
      - entity_id: camera.front_yard
    confidence: 50
    labels:
      - name: person
        confidence: 40
        area:
          # Exclude top 10% of image
          top: 0.1
          # Exclude right 15% of image
          right: 0.85
      - car
      - truck
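The area keys act as fractional bounds on the image: with top: 0.1 and right: 0.85 above, detections in the top 10% or right 15% of the frame are dropped. A small sketch of that logic (a hypothetical helper assuming a box must lie entirely inside the bounds, not the component's actual code):

```python
def box_in_area(box, top=0.0, left=0.0, bottom=1.0, right=1.0):
    """Return True if a detection box lies entirely inside the area bounds.

    `box` is (y_min, x_min, y_max, x_max), all as fractions of image size.
    Illustrative only; the real component's containment rule may differ.
    """
    y_min, x_min, y_max, x_max = box
    return top <= y_min and left <= x_min and y_max <= bottom and x_max <= right

# With top=0.1, right=0.85: a box hugging the top edge or spilling into
# the rightmost 15% of the image is excluded from the results.
```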
Update June 2020
You can now clone the doods server repo; it includes a base Dockerfile that will build an image optimized for whatever CPU you happen to be running on.
The default image now also includes the inception model, exposed through the detector named tensorflow.