New Custom Component - Image Processing - Object Detection - DOODS

Would it help if I added labels for the stuff I am falsely detecting?

This is my current config.

image_processing:
  - platform: doods
    scan_interval: 2
    url: "http://10.0.0.10:8080"
    detector: inception
    file_out:
      - "/config/www/doods/{{ camera_entity.split('.')[1] }}_latest.jpg"
      - "/config/www/doods/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
    source:
      - entity_id: camera.driveway
    confidence: 80
    labels:
      - name: person
        confidence: 88
        area:
          top: 0.45
          covers: false
      - name: car
        confidence: 70
        area:
          top: 0.42
          covers: false
      - name: truck
        confidence: 70
        area:
          top: 0.42
          covers: false

Now it’ll only report those things…

If your issue is something that isn’t a person being identified as a person, this won’t help you filter it out, because the model genuinely thinks those things are people.

The only thing you could do to try to avoid birds being detected as people is to try a different model.
I haven’t had good luck trying anything other than the default models myself, but maybe someone else here has.

Or build your own.

Awesome project! Quick question if I may. I’m planning on using this to turn on my security lights when a person is detected in the camera.

I have 2 methods and not sure which would be best.

  1. Set image detection to run every second and analyse the images from all my cameras
  2. Use the onboard motion sensor on the camera to trigger an automation which sends a snapshot to the image detector to analyse.

I believe the camera’s onboard motion sensor will save resources and increase efficiency. I have 4 cameras, and scanning all of them every second put a huge load on my tiny computer. So I moved to motion sensors (PIR and camera) and only then trigger the lights. Thanks
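For what it’s worth, option 2 can be wired up with a small automation: give the DOODS platform a long `scan_interval` so polling stays out of the way, then call the `image_processing.scan` service when motion fires. A minimal sketch — the entity names here are placeholders for your own motion sensor and DOODS entity:

```yaml
# Sketch only: binary_sensor.driveway_motion and
# image_processing.doods_driveway are placeholder entity names.
automation:
  - alias: "Scan camera on motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.driveway_motion
        to: "on"
    action:
      - service: image_processing.scan
        entity_id: image_processing.doods_driveway
```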

Hi @snowzach
Thank you for your work!
I’m having some problems making it work: the entity always shows:

matches: {}
summary: {}
total_matches: 0
process_time: 0
friendly_name: Doods camera_1

My setup is an Intel NUC with a Docker container running Home Assistant and another container running DOODS.
The config is the following:

stream:
camera:
  - platform: mjpeg
    name: Camera 1
    still_image_url: "..."
    mjpeg_url: "http://192.168....:8081/mjpeg"
#camera is taken from motion eye installed from Supervisor
image_processing:
  - platform: doods
    url: http://192.168....:8080
    detector: tensorflow
    source:
      - entity_id: camera.camera_1
    file_out: /config/www/camera/camera1/doods.jpg

The log from Home Assistant:

Log details (WARNING)
Logger: homeassistant.components.doods.image_processing
Source: components/doods/image_processing.py:286
Integration: doods (documentation, issues)
First occurred: 12:27:27 (1 occurrences)
Last logged: 12:27:27

Unable to process image, bad data

Logs from DOODS:

 sudo docker run -e LOGGER_LEVEL=debug -p 8080:8080 snowzach/doods:latest 
2020-11-03T11:25:42.180Z	DEBUG	detector/detector.go:61	Configuring detector	{"package": "detector", "config": {"name":"default","type":"tflite","model_file":"models/coco_ssd_mobilenet_v1_1.0_quant.tflite","label_file":"models/coco_labels0.txt","num_threads":4,"num_concurrent":4,"hw_accel":false,"timeout":120000000000}}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:157	Tensor Output	{"package": "detector.tflite", "name": "default", "n": 0, "name": "TFLite_Detection_PostProcess", "type": "Float32", "num_dims": 3, "byte_size": 160, "quant": {"Scale":0,"ZeroPoint":0}, "shape": [1, 10, 4]}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:160	Tensor Dim	{"package": "detector.tflite", "name": "default", "n": 0, "dim": 0, "dim_size": 1}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:160	Tensor Dim	{"package": "detector.tflite", "name": "default", "n": 0, "dim": 1, "dim_size": 10}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:160	Tensor Dim	{"package": "detector.tflite", "name": "default", "n": 0, "dim": 2, "dim_size": 4}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:157	Tensor Output	{"package": "detector.tflite", "name": "default", "n": 1, "name": "TFLite_Detection_PostProcess:1", "type": "Float32", "num_dims": 2, "byte_size": 40, "quant": {"Scale":0,"ZeroPoint":0}, "shape": [1, 10]}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:160	Tensor Dim	{"package": "detector.tflite", "name": "default", "n": 1, "dim": 0, "dim_size": 1}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:160	Tensor Dim	{"package": "detector.tflite", "name": "default", "n": 1, "dim": 1, "dim_size": 10}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:157	Tensor Output	{"package": "detector.tflite", "name": "default", "n": 2, "name": "TFLite_Detection_PostProcess:2", "type": "Float32", "num_dims": 2, "byte_size": 40, "quant": {"Scale":0,"ZeroPoint":0}, "shape": [1, 10]}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:160	Tensor Dim	{"package": "detector.tflite", "name": "default", "n": 2, "dim": 0, "dim_size": 1}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:160	Tensor Dim	{"package": "detector.tflite", "name": "default", "n": 2, "dim": 1, "dim_size": 10}
2020-11-03T11:25:42.183Z	DEBUG	tflite/detector.go:157	Tensor Output	{"package": "detector.tflite", "name": "default", "n": 3, "name": "TFLite_Detection_PostProcess:3", "type": "Float32", "num_dims": 1, "byte_size": 4, "quant": {"Scale":0,"ZeroPoint":0}, "shape": [1]}
2020-11-03T11:25:42.183Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "default", "type": "tflite", "model": "models/coco_ssd_mobilenet_v1_1.0_quant.tflite", "labels": 80, "width": 300, "height": 300}
2020-11-03T11:25:42.183Z	DEBUG	detector/detector.go:61	Configuring detector	{"package": "detector", "config": {"name":"tensorflow","type":"tensorflow","model_file":"models/faster_rcnn_inception_v2_coco_2018_01_28.pb","label_file":"models/coco_labels1.txt","num_threads":4,"num_concurrent":4,"hw_accel":false,"timeout":120000000000}}
2020-11-03 11:25:42.544411: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance-critical operations:  SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-03T11:25:42.554Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "tensorflow", "type": "tensorflow", "model": "models/faster_rcnn_inception_v2_coco_2018_01_28.pb", "labels": 65, "width": -1, "height": -1}
2020-11-03T11:25:42.555Z	INFO	server/server.go:284	API Listening	{"package": "server", "address": ":8080", "tls": false, "version": "v0.2.5-0-gbf6d7a1-dirty"}
#Here I restarted Home Assistant
2020-11-03T11:27:17.733Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 0.001240918, "request": "/detectors", "method": "GET", "package": "server.request", "request-id": "b675db114025/lfsycSGKr2-000001", "remote": "192.168.188.37:53708"}
2020-11-03T11:27:17.740Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 0.000559914, "request": "/detectors", "method": "GET", "package": "server.request", "request-id": "b675db114025/lfsycSGKr2-000002", "remote": "192.168.188.37:53710"}


How strange, I’ve just set this up and am getting the exact same error! My camera is also from a motioneye install. Hopefully someone can help! I’m also wondering if DOODS can process the image motioneye creates from a motion event directly? I’ve seen people mention ‘sending’ an image to DOODS for it to process, but I have no idea how to achieve this!
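One way to ‘send’ an image to DOODS from Home Assistant (a sketch — the file path, URL and entity names are assumptions for your own setup): point a `local_file` camera at the snapshot motioneye writes, give the DOODS platform a long `scan_interval` so it doesn’t poll, and trigger `image_processing.scan` from a webhook that motioneye’s motion notification calls:

```yaml
# Sketch: path, URL and entity names are placeholders.
camera:
  - platform: local_file
    name: motioneye_snapshot
    file_path: /config/www/motioneye/latest.jpg

image_processing:
  - platform: doods
    url: "http://192.168.1.10:8080"
    detector: default
    scan_interval: 10000   # effectively disable polling; we scan on demand
    source:
      - entity_id: camera.motioneye_snapshot

automation:
  - alias: "Scan motioneye snapshot on motion"
    trigger:
      - platform: webhook
        webhook_id: motioneye_motion   # have motioneye call this webhook
    action:
      - service: image_processing.scan
        entity_id: image_processing.doods_motioneye_snapshot
```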

Hi everyone, I’m having a bit of trouble and hoping someone could point me in the right direction.

I have a reolink camera integrated and streaming in HA. It streams fine.
I set up DOODS in a Docker container on Unraid, which is also running fine. I configured DOODS to look at the Reolink camera, which it is doing. The image processing entity is created and it is coming back with positive IDs for various things.

I am stuck on the image out part. I am getting the following error:

Logger: homeassistant.components.image_processing
Source: components/image_processing/__init__.py:128
Integration: Image processing ([documentation](https://www.home-assistant.io/integrations/image_processing), [issues](https://github.com/home-assistant/home-assistant/issues?q=is%3Aissue+is%3Aopen+label%3A%22integration%3A+image_processing%22))
First occurred: 7:25:07 AM (1785 occurrences)
Last logged: 5:18:10 PM

Error on receive image from entity: Unable to get image

Maybe I am missing something, but it’s unclear to me how DOODS can be processing the image from the camera and returning positive IDs, yet at the same time can’t get the image.

Hello everyone. I want to add more than one camera to DOODS, but when I do, both DOODS cameras read the captured image from only one camera. The other camera never detects me at all; it is an ONVIF camera.

# Example configuration.yaml entry for doods
image_processing:
  - platform: doods
    url: "http://192.168.1.110:8080"
    detector: default
    scan_interval: 2
    confidence: 60
    labels:
      - name: person
        confidence: 70
      - name: laptop
        confidence: 65
      - name: cup
        confidence: 65
      - name: cell phone
        confidence: 65
    source:
      - entity_id: camera.entrancemjpeg
      - entity_id: camera.living
    file_out:
      - /config/www/doods/entrance.jpg
      - /config/www/doods/living.jpg

For the output file try:

file_out:
  - "/config/www/doods/{{ camera_entity.split('.')[1] }}.jpg"

Then the output file is named per entity.

Where can I find a list of object types that can be detected? I am curious whether I can detect deer, cows and hogs.


Can someone help me understand this data that flows into HA. For matches from DOODS I get this:

matches:
  toilet:
    - score: 88.75058
      box:
        - 0.49192888
        - 0.32270554
        - 0.871392
        - 0.4992687

So there is a box with four numbers; what do those numbers mean? I assume it’s some combination of top, bottom, left and right, but which is which, and in what order? If whatever number is the top is 1.0, does that mean the top of the detected object is far away from the top of the image, or that the detected object is touching the top of the image?

It means the box around the detection is there. The axes of the image run from 0.0 to 1.0, so 0.49 is roughly in the middle.

Doods draws the box on the image it returns, so you can use that to understand the values.

So which is the first number, though? Top, bottom, left or right?

From memory it’s measured from the top left. Check one of your images to confirm.

Hi, DOODS only works for me every other day, and the still image lags several days behind. Can you tell me why?

Hi there, thanks for this great component. I’ve gotten it running in Docker, with successful integration of a Coral Edge TPU USB device and an Edge TPU model. I love the speed, hate the object detection quality. Has anyone experimented with converting other TensorFlow models to TensorFlow Lite and compiling them for the Edge TPU? I’m thinking of trying it but would like to hear if there are any success stories.

Thanks again.

I am having issues with DOODS (custom component) on Home Assistant. DOODS is not processing my local_file snapshots, and I don’t understand why. The image_processing entity is not detecting anything. When I check the log output from DOODS, I see successful requests.

Test image

Configuration.yaml

camera:
  - platform: generic
    name: 'Front camera'
    still_image_url: http://192.168.178.2:8765/picture/1/current/?_username=admin&_signature=62cae218d87fda544fb7df51c0366818906bfa8d
    stream_source: rtsp://192.168.178.58/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
    username: admin
    password: password
    authentication: basic
  - platform: local_file
    name: "Front camera from Doods"
    file_path: "/config/cameras/front_camera_snapshot_latest.jpg"
  - platform: local_file
    name: "Front camera snapshot"
    file_path: "/config/cameras/front_camera_snapshot.jpg"
  - platform: mjpeg
    name: 'Front camera motion'
    still_image_url: http://192.168.178.2:8765/picture/1/current/?_username=admin&_signature=62cae218d87fda544fb7df51c0366818906bfa8d
    mjpeg_url: http://192.168.178.2:8081


image_processing:
  platform: doods
  scan_interval: 86400
  url: "http://192.168.178.4:8080"
  timeout: 30
  detector: default
  confidence: 70
  source:
    - entity_id: camera.front_camera
    - entity_id: camera.front_camera_snapshot
  file_out:
    - "/config/cameras/{{ camera_entity.split('.')[1] }}_latest.jpg"
    - "/config/cameras/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
  labels:
    - person
    - car
    - truck
    - cat
    - bird
    - dog
    - motorcycle
    - bicycle

Doods component output log:

2020-11-15T12:15:04.265+0100	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "default", "type": "tflite", "model": "/opt/doods/models/coco_ssd_mobilenet_v1_1.0_quant.tflite", "labels": 80, "width": 300, "height": 300}
2020-11-15T12:15:09.341+0100	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "inception", "type": "tensorflow", "model": "/share/doods/faster_rcnn_inception_v2_coco_2018_01_28.pb", "labels": 65, "width": -1, "height": -1}
2020-11-15T12:15:09.814+0100	INFO	server/server.go:284	API Listening	{"package": "server", "address": ":8080", "tls": false, "version": "v0.2.5-0-gbf6d7a1-dirty"}
2020-11-15T12:16:11.541+0100	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 0.120239777, "request": "/detectors", "method": "GET", "package": "server.request", "request-id": "d5f40609-doods/LEoi7kxojx-000001", "remote": "192.168.178.4:46398"}
2020-11-15T12:16:11.585+0100	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 0.010758314, "request": "/detectors", "method": "GET", "package": "server.request", "request-id": "d5f40609-doods/LEoi7kxojx-000002", "remote": "192.168.178.4:46412"}
2020-11-15T12:18:48.712+0100	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.485099608, "detections": 10, "device": null}
2020-11-15T12:18:48.741+0100	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 1.111441653, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "d5f40609-doods/LEoi7kxojx-000003", "remote": "192.168.178.4:46588"}
2020-11-15T12:19:11.938+0100	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.107333965, "detections": 10, "device": null}
2020-11-15T12:19:11.940+0100	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 0.500216167, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "d5f40609-doods/LEoi7kxojx-000004", "remote": "192.168.178.4:46608"}
2020-11-15T12:19:23.126+0100	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.187490978, "detections": 10, "device": null}

DOODS component configuration:

server:
  port: '8080'
auth_key: ''
doods.detectors:
  - name: default
    type: tflite
    modelFile: /opt/doods/models/coco_ssd_mobilenet_v1_1.0_quant.tflite
    labelFile: /opt/doods/models/coco_labels0.txt
    numThreads: 4
    numConcurrent: 2
    hwAccel: false
  - name: inception
    type: tensorflow
    modelFile: /share/doods/faster_rcnn_inception_v2_coco_2018_01_28.pb
    labelFile: /share/doods/coco_labels1.txt
    numThreads: 4
    numConcurrent: 2
    hwAccel: false

The detection regions go Top, Left, Bottom, Right.
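Putting that together with the normalized 0.0–1.0 axes mentioned above, converting a box to pixel coordinates is just a multiplication. A small sketch (the 1920×1080 frame size is an assumed example):

```python
# Sketch: convert a normalized DOODS box [top, left, bottom, right]
# (values 0.0-1.0, measured from the top-left corner of the image)
# into pixel coordinates.

def box_to_pixels(box, width, height):
    """Return (x_min, y_min, x_max, y_max) in pixels."""
    top, left, bottom, right = box
    return (
        round(left * width),    # x of left edge
        round(top * height),    # y of top edge
        round(right * width),   # x of right edge
        round(bottom * height), # y of bottom edge
    )

# The "toilet" match from the post above, on an assumed 1920x1080 frame:
box = [0.49192888, 0.32270554, 0.871392, 0.4992687]
print(box_to_pixels(box, 1920, 1080))  # (620, 531, 959, 941)
```

So a top value near 0.0 means the box starts near the top of the image, and 1.0 means it reaches the bottom edge.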