New Custom Component - Image Processing - Object Detection - DOODS

Hi Everyone,

Trying to give this a go with my GPU and using the CUDA version, but running into this error. Has anyone seen something similar?

2020-08-17 22:02:11.452087: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties:
pciBusID: 0000:03:00.0 name: GeForce GTX 1660 computeCapability: 7.5
coreClock: 1.83GHz coreCount: 22 deviceMemorySize: 5.80GiB deviceMemoryBandwidth: 178.86GiB/s
2020-08-17 22:02:11.452128: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
2020-08-17 22:02:11.452136: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-08-17 22:02:11.452142: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-08-17 22:02:11.452147: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-08-17 22:02:11.452153: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-08-17 22:02:11.452158: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-08-17 22:02:11.452163: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-08-17 22:02:11.452626: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0
2020-08-17 22:02:11.452669: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1085] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-08-17 22:02:11.452687: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1091] 0
2020-08-17 22:02:11.452691: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1104] 0: N
2020-08-17 22:02:11.453208: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1230] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5384 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1660, pci bus id: 0000:03:00.0, compute capability: 7.5)
2020-08-17 22:02:11.472870: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-08-17 22:02:11.476525: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-08-17T22:02:11.477Z INFO server/server.go:138 HTTP Request {"status": 500, "took": 0.028243471, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "c3ee4697cb59/WiBylo08OP-003932", "remote": "192.168.1.4:50960"}

Here is my DOODS yaml:

doods:
  detectors:
    - name: tensorflow
      type: tensorflow
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels1.txt
      width: 1920
      height: 1080
      numThreads: 4
      numConcurrent: 4 
      hwAccel: false
      timeout: 1m

And here is my Home Assistant config:

image_processing:
  - platform: doods
    scan_interval: 100000
    url: "http://192.168.1.4:7788"
    detector: tensorflow
    confidence: 30
    file_out: 
      - "/config/www/video/{{ camera_entity.split('.')[1] }}_doods_latest.jpg"
    labels:
      - name: person
      - name: car
      - name: dog
      - name: truck
    source:
      - entity_id: camera.porch
      - entity_id: camera.driveway

Going on memory here, but I want to say your detector should be inception and not tensorflow in your HA config.
Double check the config example.

Hm, the example shows a similar config to what I have. This is from the Github page:

    - name: tensorflow
      type: tensorflow
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels1.txt
      numThreads: 4
      numConcurrent: 4
      hwAccel: false
      timeout: 2m

You can see the processing time in the logs. You do not want to send images faster than the time it takes for DOODS to receive them, process them, and return the processed images.

I send images to DOODS every second for three cameras and every 3-5 seconds for five cameras. Full processing and return for one camera is about 0.05 seconds or less (I forget, since I have not checked in a while). In the end I have no issues, but on the occasions when it exceeds the time, an error occurs and it stops processing that image (you miss the object).

Previously I had a slower server and a slower Coral, and many times DOODS would time out and not return an image. When that occurred I increased the time between images to resolve it. Basically, the timing you can use depends on your hardware, but you cannot send faster than DOODS can complete processing.
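For reference, one way to control how often images are sent, instead of relying on scan_interval, is an automation that calls the image_processing.scan service. Roughly like this (the entity name is only an example, adjust it to whatever DOODS creates for your camera):

automation:
  - alias: Scan the porch camera every 5 seconds
    trigger:
      - platform: time_pattern
        seconds: "/5"
    action:
      - service: image_processing.scan
        entity_id: image_processing.doods_porch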

That’s fair enough, I was just making an assumption there and clearly I was wrong! You know what they say…To assume makes an ass out of u and me (or in this case, just me :man_facepalming:)

Thanks for the explanation!

Okay, my mistake…ignore what I said.
I went back and looked, and it was because I had named the detector inception that I needed to call it that in my configuration.yaml.
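In other words, the detector: value in the Home Assistant config just has to match whatever name: you gave the detector in the DOODS config. Using the values from this thread, roughly:

doods:
  detectors:
    - name: inception            # whatever name you pick here...
      type: tensorflow
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels1.txt

image_processing:
  - platform: doods
    url: "http://192.168.1.4:7788"
    detector: inception          # ...is what you reference here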

Let us know if you get it going with the GPU.
Not sure I can until Zach releases an add-on version of it though. I’m using supervised and his add-on.
Don’t really want to add a bunch of containers on the host.

No problem. Thanks for the thoughts. I have a lead from this Github thread: https://github.com/tensorflow/tensorflow/issues/24496

Now I just need to figure out where in the code to insert:

# TF1-style session config: let the GPU allocator grow memory on demand
# instead of reserving it all at startup (a common workaround for the
# CUDNN_STATUS_INTERNAL_ERROR shown above).
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
sess.as_default()
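For reference, if the TensorFlow build in use is 2.x and running eagerly rather than through a v1 session, I believe the equivalent memory-growth setting looks roughly like this (just a sketch, not tested against DOODS itself, which is written in Go):

import tensorflow as tf

# Ask TensorFlow to grow GPU memory on demand instead of
# reserving it all at startup.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)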

Has anyone ever gotten the Coral Accelerator to work with DOODS while running Home Assistant on MacOS?

After fighting for a couple of days to figure this out enough to get things up and running, I thought I had things perfect with the TensorFlow model, but all through the early AM it was picking up the side of my car as a person, unfortunately with higher confidence than it picks up a person at night. Has anyone come up with a clever solution to this? (Confidence in the config is set to 50, and it is only set to detect person.)

For anyone else new who is looking for configs, here is mine (Home Assistant on Docker and DOODS on Docker):

image_processing:
  - platform: doods
    scan_interval: 10000
    url: "http://192.168.1.179:8080"
    detector: default
    file_out:
      - "/config/www/tmp/latest.jpg"
      - "/config/www/tmp/{{camera_entity.split('.')[1]}}_{{now().strftime('%Y%m%d_%H%M%S')}}.jpg"
    source:
      - entity_id: camera.frontdoor
    labels:
      - name: person
        confidence: 75

You could exclude that area of the camera view if you don’t want to scan it.
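If I remember right, the integration can limit detection to part of the frame per label, something roughly like this (the area values are fractions of the image and only examples; double-check the exact keys in the integration docs):

labels:
  - name: person
    confidence: 50
    area:
      # only report persons whose box falls inside this region
      top: 0.1
      left: 0.3
      bottom: 0.9
      right: 0.9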

Hi all, thank you for the DOODS integration and for the discussion on this thread. I was trying to implement DOODS with this model: https://tfhub.dev/google/aiy/vision/classifier/birds_V1/1
I have a .pb model file and a .csv labels file (each line formatted as id, description). I changed the extension of the labels file to .txt.
The config.yaml file is:

doods:
  detectors:
    - name: bird_classifier_v1_1
      type: tflite
      modelFile: models/birds_classifier_v1_1.pb
      labelFile: models/birds_classifier_v1_1.txt
      numThreads: 1
      numConcurrent: 1
      hwAccel: false
      timeout: 2m

The models directory contains:
models/birds_classifier_v1_1.pb
models/birds_classifier_v1_1.txt

I started the Docker image with:
docker run -it -v /Projects/AI/bird-feeder-classifier/models:/opt/doods/models -v /Projects/AI/bird-feeder-classifier/config.yaml:/opt/doods/config.yaml -p 8080:8080 snowzach/doods:latest

But I get the following error:

ERROR   detector/detector.go:73 Could not initialize detector bird_classifier_v1_1: could not load model models/birds_classifier_v1_1.pb        {"package": "detector"}
FATAL   detector/detector.go:83 No detectors configured {"package": "detector"}

If I use tensorflow instead of tflite as the type in config.yaml, I get:

ERROR   detector/detector.go:73 Could not initialize detector bird_classifier_v1_1: Could not import model: Invalid GraphDef    {"package": "detector"}
FATAL   detector/detector.go:83 No detectors configured {"package": "detector"}

What could be the problem? Is it possible for DOODS to use this kind of pre-trained model?
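In case it matters, I suspect my .pb is actually a TF2 SavedModel rather than a frozen TF1 graph or a TFLite flatbuffer, so it may need converting to a real .tflite first. Roughly something like this, I think (untested, the directory and file names are just mine):

import tensorflow as tf

# Point the converter at the SavedModel directory downloaded from TF Hub
# (not just the .pb file inside it) and write out a real .tflite model.
converter = tf.lite.TFLiteConverter.from_saved_model('birds_V1_1_saved_model')
tflite_model = converter.convert()

with open('models/birds_classifier_v1_1.tflite', 'wb') as f:
    f.write(tflite_model)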

Thank you!

I am also interested to know this. I have a custom model trained on TensorFlow 2.2. Whenever I use this model, it throws an error.

ERROR	detector/detector.go:73	Could not initialize detector tflite2: unsupported tensor input type: Float32	{"package": "detector"}
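If I read that error right, the tflite detector only accepts uint8 input tensors, so my guess is a float32 model needs full-integer quantization when converting. Something roughly like this, I believe (the representative dataset and paths are only placeholders):

import numpy as np
import tensorflow as tf

def representative_images():
    # Placeholder: should yield a few hundred real images,
    # shaped and scaled the way the model expects.
    for _ in range(100):
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('my_saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_images
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8    # uint8 input, which DOODS expects
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()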

Appreciate any help. Thanks

I also seem to suffer from the detector timeouts with the coral USB TPU.
I run it on USB 3.2, so power should not be an issue. I do forward the USB device as USB3 via proxmox to a VM.

Works fine for a few pictures, but then I end up with:

2020-10-09T00:39:00.379Z INFO server/server.go:139 HTTP Request {"status": 200, "took": 0.132931232, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "3dad0ee1e2cf/7pBhfbLvK8-000003", "remote": "192.168.1.26:44040"}

2020-10-09T00:39:38.459Z ERROR tflite/detector.go:301 Detector timeout {"package": "detector.tflite", "name": "edgetpu", "device": {"Type":1,"Path":"/sys/bus/usb/devices/3-1"}}

How often are you sending images?
How many cameras?

At the moment every 15 seconds and 4 cameras.

Given that the TPU does the detection in 0.1 seconds unless it gets stuck, that seems like it should be an easy job.

This isn’t exactly true, as I have run across this exact issue. With Windows Server 2019 you cannot utilize a GPU through Docker, as it runs through a Hyper-V VM which cannot access the GPU hardware directly. The only way I can see to utilize the GPU for TensorFlow of any kind on a Windows Server 2019 system is to run the program on bare metal. I would love to know how to do this with DOODS, though at first glance it appears to rely heavily on Docker and this probably isn’t going to be easy.

Hey, thanks for this awesome component.

It works great apart from detecting birds and cats as 90%+ persons sometimes. :slight_smile:

Is the hassio addon still updated? Am I missing out on any improvements by using it?

As explained elsewhere in the thread, change the model :wink: The tflite models are fast but low quality.

I’m already on:

modelFile: /share/doods/faster_rcnn_inception_v2_coco_2018_01_28.pb
labelFile: /share/doods/coco_labels1.txt

Is there a better option if I am detecting cats?

That’s what I use. Can’t speak for cats, but it’s generally accurate for people, bicycles, cars, motorbikes, dogs, and birds…
