New Custom Component - Image Processing - Object Detection - DOODS

Hi, I have some problem with this, or maybe I just don’t understand it…

I do get images processed; see the log:

2020-01-24T07:36:58.113Z	DEBUG	tflite/detector.go:249	Got Image	{"package": "detector.tflite", "name": "default", "id": "", "format": "jpeg", "width": 640, "height": 360}
2020-01-24T07:36:58.113Z	DEBUG	tflite/detector.go:251	Resizing Image	{"package": "detector.tflite", "name": "default", "id": "", "format": "jpeg", "width": 300, "height": 300}
2020-01-24T07:36:58.919Z	DEBUG	tflite/detector.go:386	Detection	{"package": "detector.tflite", "name": "default", "id": "", "label": "car", "confidence": 57.421875, "location": "0.280306,0.000000,0.990142,0.304017"}
2020-01-24T07:36:58.919Z	DEBUG	tflite/detector.go:386	Detection	{"package": "detector.tflite", "name": "default", "id": "", "label": "car", "confidence": 50, "location": "0.325107,0.480231,0.371157,0.505891"}
2020-01-24T07:36:58.919Z	DEBUG	tflite/detector.go:386	Detection	{"package": "detector.tflite", "name": "default", "id": "", "label": "person", "confidence": 43.75, "location": "0.362314,0.168602,0.403541,0.188846"}
2020-01-24T07:36:58.919Z	DEBUG	tflite/detector.go:386	Detection	{"package": "detector.tflite", "name": "default", "id": "", "label": "car", "confidence": 41.40625, "location": "0.330578,0.435106,0.371805,0.466125"}
2020-01-24T07:36:58.919Z	DEBUG	tflite/detector.go:386	Detection	{"package": "detector.tflite", "name": "default", "id": "", "label": "person", "confidence": 40.234375, "location": "0.296463,0.118346,0.384501,0.153546"}
2020-01-24T07:36:58.919Z	DEBUG	tflite/detector.go:386	Detection	{"package": "detector.tflite", "name": "default", "id": "", "label": "person", "confidence": 39.0625, "location": "0.311188,0.158719,0.377427,0.186490"}
2020-01-24T07:36:58.919Z	DEBUG	tflite/detector.go:386	Detection	{"package": "detector.tflite", "name": "default", "id": "", "label": "car", "confidence": 39.0625, "location": "0.304540,0.291635,0.371835,0.363321"}
2020-01-24T07:36:58.919Z	DEBUG	tflite/detector.go:386	Detection	{"package": "detector.tflite", "name": "default", "id": "", "label": "person", "confidence": 37.890625, "location": "0.365978,0.216293,0.418236,0.252054"}
2020-01-24T07:36:58.919Z	DEBUG	tflite/detector.go:386	Detection	{"package": "detector.tflite", "name": "default", "id": "", "label": "car", "confidence": 37.890625, "location": "0.317566,0.363860,0.348099,0.395875"}
2020-01-24T07:36:58.919Z	DEBUG	tflite/detector.go:386	Detection	{"package": "detector.tflite", "name": "default", "id": "", "label": "person", "confidence": 35.546875, "location": "0.323988,0.214074,0.372275,0.245093"}
2020-01-24T07:36:58.919Z	INFO	tflite/detector.go:431	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.766470154, "detections": 10, "device": null}
2020-01-24T07:36:58.920Z	INFO	server/server.go:138	HTTP Request	{"status": 200, "took": 0.826928864, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "e86e6e6391fa/i41MoA2oOZ-004124", "remote": "192.168.200.30:57816"}

I added this to the config file:

  whitelist_external_dirs:
  - /config/tmp

and the config for doods:

image_processing:
  - platform: doods
    scan_interval: 5
    url: "http://192.168.200.30:6915"
    detector: default
    file_out:
      - "/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
      - "/tmp/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
    source:
#      - entity_id: camera.baksidan
      - entity_id: camera.groventre
#      - entity_id: camera.stora_entren
    confidence: 20
    labels:
      - name: person
      - name: car
      - name: truck 
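
One thing worth double-checking in the two snippets above: file_out writes to /tmp/…, but the whitelisted directory is /config/tmp. Home Assistant will only write file_out images into directories listed under whitelist_external_dirs, so the two paths need to line up. A sketch of a consistent pairing (assuming /config/tmp is where the files should go):

```yaml
homeassistant:
  whitelist_external_dirs:
    - /config/tmp

image_processing:
  - platform: doods
    url: "http://192.168.200.30:6915"
    detector: default
    file_out:
      # Same directory as the whitelist entry above
      - "/config/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    source:
      - entity_id: camera.groventre
```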

But when looking where I think the image files should be, I only have one file, from 01:59:

groventre_latest

So I initially got it up and running in hassio. Since then, I’ve switched to actually running a dedicated container and performing all API calls to it via Node-RED (not utilizing hassio at all). I like this approach because I have more flexibility with what I do with the JSON results; however, I don’t see anything in the documentation about how to output the pictures/files like what is built into the hassio configuration. Anyone have any ideas?

@iRanduMi there are a few options.
I just recently set up a push camera for this… you do that in HA to create a webhook, and then in Node-RED use an HTTP request node to push the images to it. Basically, set it up so that each time it scans and saves an image, that image is pulled and posted to the webhook.
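
For reference, a minimal sketch of that push-camera setup, assuming the Push Camera integration and placeholder names (Node-RED would then POST each annotated image to the webhook URL as multipart/form-data):

```yaml
camera:
  - platform: push
    name: doods_annotated            # hypothetical name
    webhook_id: doods_annotated_cam  # any unguessable id you choose

# Node-RED then POSTs each saved image to:
#   http://<ha-host>:8123/api/webhook/doods_annotated_cam
```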

So far it has worked decently for me.

Cheers!
DeadEnd

Hmmmm, I think I know what you’re trying to suggest: that I basically just have Node-RED call HA to get the current image? If that is what you’re suggesting, that’s not an issue. I can get a screenshot directly from Blue Iris of what it is scanning; what I can’t get is the image that is output by DOODS/TensorFlow with the objects identified (example: Car - 25%).

The only way I seem to be able to get that image is if I have hassio perform the image_processing.scan. And my goal is to not have HA do any work at all: it’s strictly Node-RED, DOODS, and Blue Iris communicating with one another.

Has anyone tried this with the Inception v4 model available on the Coral?

I tried at one point and couldn’t get it to work. It’s like the inputs/outputs were different from the other inception models.

Loving this component. I have moved to using the Inception model on my Raspberry Pi 4 for more accurate results. The image processing times are all under 10 seconds after the initial scan.

What settings can I tweak to speed it up a little more? I’m currently using numThreads: 1 and numConcurrent: 1, with hwAccel: false.

I’m using this in conjunction with motion sensors, so only triggering scans when motion is detected
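
That motion-triggered setup could be sketched as an automation along these lines (entity IDs here are placeholders; scan_interval stays very high so scans only happen on demand):

```yaml
automation:
  - alias: "DOODS scan on motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.driveway_motion    # hypothetical sensor
        to: "on"
    action:
      - service: image_processing.scan
        entity_id: image_processing.doods_driveway  # hypothetical entity
```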

Can someone help? I can hardly get it to recognize people. Is it because the picture isn’t very clear? Is there something I can do to increase accuracy? Use a different model or something?

- platform: doods
  scan_interval: 100000 #seconds
  url: "http://10.0.1.34:8080"
  detector: default
  file_out:
    - "/config/www/captures/{{ camera_entity.split('.')[1] }}_latest.jpg"
  source:
    - entity_id: camera.channel_1
    - entity_id: camera.amcrest_ip_camera
  confidence: 10
  labels:
    - name: person
      confidence: 10
    - name: car
      confidence: 10
  area:
    covers: true
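
As a side note on the area: block in that config, DOODS also accepts fractional coordinates there, which lets you restrict detections to the part of the frame you care about; a sketch (values are made up):

```yaml
  area:
    # Fractions of the image: 0.0 is top/left, 1.0 is bottom/right
    top: 0.2
    left: 0.1
    bottom: 0.9
    right: 0.9
    covers: false  # detection only needs to overlap the area,
                   # not be entirely contained in it
```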

You could try the Inception model. Or remove the car label section to see if it will then only detect people.

Edit: in that picture you are very small. The image resolution is decreased to speed up the processing in the default model.

I’ve installed this to run TFLite on a Coral attached to an RPi. The accuracy of the TFLite models is not great and I had too many false positives, so I ended up uninstalling DOODS.
I now use Sighthound Video on a separate machine to detect cars and people - it serves the purpose and takes a lot fewer resources than Inception.

Hey all, is there a way to set the minimum detection size? Doods works great for me but I still get birds & cats being detected with high confidence.

I’ve simply defined the things I’m interested in detecting. That means I no longer get notifications for any passing dogs, cats, birds, pizzas, or teddy bears:

- platform: doods
  ...
  labels:
  - person
  - car
  - truck
  - motorcycle
  - bicycle

The issue I have is that it picks up the birds and cats as people. I am using the Inception detector; I found the default one picked up random things as people. Inception works a lot better, but it does still alert me to birds and cats.

Yeah, I moved to the full TensorFlow models (as discussed way up in this thread) and found those to be much more accurate.

Before that, it would detect rocks or shopping bags as people; now it doesn’t.

@Tinkerer
Which models are you using now? Could you give more detailed information? I have the same problem: rocks are detected as birds and persons.

It’s faster_rcnn_inception_v2_coco_2018_01_28 - things are (mostly) working as expected


How might I use HA to detect an object that has not moved, using DOODS?
Is this possible?

My goal is to detect the mail car stopping at the box.

@tmjpugh my approach would be as follows:
define an image_processing instance just for the mailbox, “crop” the detection area with the area: configuration so it’s big enough to fit the mailman/mail car, set the scan_interval to some small amount of time, and prepare an automation based on timers:

  • check if there’s a man or car at the box
  • if so, trigger timers
  • after the time has passed, check if the man/car is still there

of course the timers should also run for a short time, probably not more than a few seconds [it depends on how long the mailman/mail car stops - you’ve got to figure out the average].

but I’ve only been playing with DOODS for a couple of days now, so I can’t guarantee that my idea is perfect.
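
Under those assumptions, the timer part could be sketched with a numeric_state trigger and a for: duration, since the image_processing entity’s state is the number of matched objects (entity and service names are placeholders):

```yaml
automation:
  - alias: "Mail car stopped at box"
    trigger:
      - platform: numeric_state
        entity_id: image_processing.doods_mailbox  # hypothetical entity
        above: 0
        for: "00:00:30"  # object must persist across several scans
    action:
      - service: notify.notify
        data:
          message: "Something has been at the mailbox for 30+ seconds"
```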


@snowzach is there any option to turn the service on/off on the fly? That is, without changes in YAML and without rebooting, I’d have a way to switch certain image_processing instances off and on.

A real-life example:
I’ve got a few image_processing instances configured, one for each of the cameras. Most of the cameras can detect motion by themselves, so I’ve set their scan_interval to 86400 [a full 24 h], because I trigger image recognition from automations after a camera says “there’s movement”. That way, I’ve got switches to turn those automations on and off if I want to [let’s say I’m washing my car on the driveway, and I don’t want my system constantly analyzing me running around with water and a sponge ;)].
One of the cameras - in the garden - unfortunately can’t give any sign of movement, so I configured its scan_interval to about 5 seconds. But I noticed that when there’s a lot of movement in that spot [I’m mowing the lawn & the dog chases me, etc.] my network looks clogged with the constant video stream read by image_processing, and some services struggle to work. It shouldn’t be an issue considering my bandwidth, but it is [after turning off the constant processing, everything goes back to normal]. It would be cool to be able to have a switch in the dashboard to turn the service on/off when a “known movement scenario” is incoming - like the aforementioned grass cutting, or a party in the garden, etc.

I THINK that the possibility to change scan_interval on the fly would be enough if there’s no other way, but I don’t know how feasible that solution is, so I’m hoping you can write something more about whether it’s doable :slight_smile:

Unfortunately the scan_interval isn’t something that can change on the fly. You’d need to use a time trigger automation to manage that.
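
One way to approximate that on/off switch: keep scan_interval very high and drive the scans from a time_pattern automation gated by an input_boolean, which then serves as the dashboard toggle (all names are placeholders):

```yaml
input_boolean:
  garden_detection:
    name: Garden detection

automation:
  - alias: "Garden DOODS scan loop"
    trigger:
      - platform: time_pattern
        seconds: "/5"  # roughly the old scan_interval
    condition:
      - condition: state
        entity_id: input_boolean.garden_detection
        state: "on"
    action:
      - service: image_processing.scan
        entity_id: image_processing.doods_garden   # hypothetical entity
```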