New Custom Component - Image Processing - Object Detection - DOODS

Yes, I did get it working, but it sent me random notifications because the object detection noticed differences in the number of parked cars during the night, for example, even though there was no movement going on. So I disabled it. It was also causing heavy CPU load with everything enabled.

I realise several months have passed; however, I am posting my solution in case others find their way here looking for an answer like I did.
If you use a numeric state trigger in the automation, you can have it fire only when cars are detected for over an allotted time, say 15 seconds; then they are definitely parked.
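
A minimal sketch of that idea, assuming a DOODS image_processing entity whose state is the number of matches (the entity and notifier names are placeholders):

automation:
  - alias: "Car present for at least 15 seconds"
    trigger:
      # Fires only if the match count stays above 0 for the full 15 seconds
      - platform: numeric_state
        entity_id: image_processing.doods_driveway
        above: 0
        for: "00:00:15"
    action:
      - service: notify.mobile_app_my_phone
        data:
          message: "A car has been detected for over 15 seconds"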

Here are some ideas that may inspire others in their search for the perfect motion - person detection - notification setup. Mine is far from perfect, but works decently well for me. Thanks @snowzach for doods; pretty cool piece of code.

My hardware consists of

  • 8 IP cameras, all connected by ethernet and PoE, mostly Hikvision and one Foscam
  • An older TS-453A QNAP NAS with 4x 1TB disks in it

The software is

  • QVR Pro for initial motion detection
  • DOODS for object (person) detection, as docker
  • Home Assistant as docker
  • mariadb, mosquitto mqtt, zigbee2mqtt as dockers
  • Node RED as docker

TLDR setup is as follows:

  1. QVR Pro detects motion on camera
  2. QVR Pro notifies Node RED via Action URL call (defined inside motion rules) and sends camera id
  3. Node RED receives GET URL call and extracts camera id from request
  4. Node RED checks presence (no need to do person detection if I’m home)
  5. Node RED triggers DOODS scan and parses scan results
  6. If a person is detected, Node RED sends a notification to my phone (including camera id, time and photo)

To expand a bit on the steps above:

  1. QVR Pro is software that runs on a QNAP NAS and lets you manage the video footage recorded by your cameras. In my case, I let the Hikvision cameras handle motion (or intrusion) detection and let QVR Pro handle motion for the Foscam camera (because I couldn’t get Foscam’s internal motion detection working properly). QVR Pro’s “client” software works nicely for playback and review of motion events, but reviewing them is tedious, so I wanted person detection as well

  2. QVR Pro lets you set up Action URLs within your events; for instance, when motion is detected on a camera, call a specific URL. That URL points to my Node RED installation (e.g. http://192.168.0.100:1880/qvr/motion) and Node RED can listen for it

  3. Node RED’s “http in” node can receive the calls above on “/qvr/motion” and can extract the camera id (or name) from the request data (I manually send a key/value pair of “camera”/"<camera_id>" in the “Action URL” action parameters, and Node RED can retrieve that data in the “http in” node under “req.headers.camera”)

  4. Presence detection (for me and my wife) is performed based on our phones and Life 360 data

  5. If no one is home, Node RED performs a “scan” service call to the “image_processing” domain (which has been set up to use the doods platform inside home assistant) and looks at the result; if something is there, I extract process_time, total_matches and matches.person["0"].score from the “attributes” area in a function node; this is the info I want to use in my notification (see the service-call sketch after this list)

  6. Node RED sends a notification along the lines of
    Person detected on <camera_name_or_id> at <date_and_time>
    <score>% chance it’s human (<process_time>s / <total_matches> matches)
    which looks something like
    Person detected on Front gate cam at 28-02-2022 18:05
    88.3% chance it’s human (0.8s / 2 matches)
    I also include an image like <ha_external_url>/local/persons/qvr_front_gate_latest.jpg
    and an Android notification action like “/lovelace/cameras”
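
For reference, this is roughly what those two calls look like when written as Home Assistant service calls (Node RED sends the equivalent payloads from its call-service nodes); the entity and notifier names are placeholders, and the notification content is just the example above:

# Trigger a DOODS scan (placeholder entity name)
service: image_processing.scan
target:
  entity_id: image_processing.doods_front_gate

# Send the notification to the companion app (placeholder notifier name)
service: notify.mobile_app_my_phone
data:
  title: "Person detected on Front gate cam at 28-02-2022 18:05"
  message: "88.3% chance it's human (0.8s / 2 matches)"
  data:
    # Snapshot shown in the notification, and the view opened when tapped
    image: "<ha_external_url>/local/persons/qvr_front_gate_latest.jpg"
    clickAction: "/lovelace/cameras"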

I’ve also set up “local_file” platform virtual cameras that point to images like /config/www/persons/qvr_front_gate_latest.jpg so that I can see the latest person detections at a glance inside home assistant.
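Such a virtual camera is only a few lines of config; a minimal sketch (the name is a placeholder, the path is the one mentioned above):

camera:
  - platform: local_file
    # Shows the most recently saved person snapshot as a camera entity
    name: front_gate_last_person
    file_path: /config/www/persons/qvr_front_gate_latest.jpg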
I also set up MQTT binary sensors inside home assistant that track when both motion AND person detection occurred; Node RED sets the sensor’s state to ON if a person is detected, waits for X seconds and then sets it to OFF; that way I can easily look at the sensor’s history inside a home assistant history card.
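A minimal sketch of such a binary sensor, assuming Node RED publishes ON/OFF to an MQTT topic of your choosing (the topic and name are placeholders):

mqtt:
  binary_sensor:
    - name: "Front gate person detected"
      # Node RED publishes ON when a person is detected, then OFF after X seconds
      state_topic: "doods/front_gate/person"
      payload_on: "ON"
      payload_off: "OFF"
      device_class: motion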

This has worked fairly well for me for about a year now.

Some trouble: the one thing I’m struggling with is processing power. I haven’t done much testing, but my average doods detection time is around 20-25 s, often going up to 40-50 s. The NAS has a Celeron N3150 1.6 GHz quad-core CPU that goes up to 100% when doods is working.
My doods docker log looks like
tensorflow/tensorflow.go:228 Detection Complete {"package": "detector.tensorflow", "name": "tensorflow", "id": "", "duration": 38.262846488, "detections": 8}
I’m wondering if “detections: 8” is relevant, because I’m only looking to detect persons, nothing else.

Anyway, hope this inspires other people; personally I would’ve loved more ha/node-red/doods info a year ago when I was setting this up :slight_smile:

Please advise how to tune object detection with DOODS so that I can avoid continuous alerts for static objects.
E.g. a car that drives through should be detected and a .jpg saved. Once the car is stationary, I do not need to receive notifications (binary sensor) and the JPG file should not be overwritten.
Thanks!

This depends on how you trigger the picture creation. I have a built-in delay to actually avoid (!) saving drive-by cars. Mine is triggered by motion detection only, and it only fires “when motion is detected”, so it would never trigger on static objects.
You could also take a snapshot every minute and only keep it when the detections change, for which you can use the deepstack attributes.
Or, if it is just about reducing notifications, maybe that trigger is set too sensitively.
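
As a rough illustration of the motion-gated approach (the entity names are placeholders and the 5-second hold time is arbitrary):

automation:
  - alias: "Scan only on sustained motion"
    trigger:
      # Fires only after motion has stayed on for 5 seconds,
      # so a car just driving past usually won't trigger a scan
      - platform: state
        entity_id: binary_sensor.driveway_motion
        to: "on"
        for: "00:00:05"
    action:
      - service: image_processing.scan
        target:
          entity_id: image_processing.doods_driveway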

Hello

I’m trying to add my EdgeTPU with doods2. I have it running as an add-on in Home Assistant, so I don’t really know how to add /dev/bus/usb to it…

Thanks in advance

What’s the difference between specifying area under the main platform section and then under the labels?

Let’s say I want to exclude the top 38% of the image and only use the bottom 62%, counting from the bottom up. How do I specify that?

Example from the github page

image_processing:
  - platform: doods
    scan_interval: 1000
    url: "http://<my docker host>:8080"
    detector: default
    file_out:
      - "/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    source:
      - entity_id: camera.front_yard
    confidence: 50
    area:
      # All detections must be inside this area to trigger
      top: 0.1
      # If true the entire detection must be inside the box
      # If false if any part of the detection is in the box it will trigger
      # defaults to true for legacy compatibility
      contains: true
    labels:
      - name: person
        confidence: 40
        area:
          # Exclude top 10% of image
          top: 0.1
          # Exclude right 15% of image
          right: 0.85
          # If true the entire detection must be inside the area
          # If false, if any part of the detection is in the box it will trigger
          # defaults to true for legacy compatibility
          contains: false
      - car
      - truck

I believe it automatically adds the USB device if there is one. Is it not working?

There’s a global area and labels, and then there are per-label area settings. Say you only want to trigger when a cat enters the bottom right of the screen, but you want a car detected anywhere.

For your case, I would set the global top: 0.38 and it should only consider the bottom 62% of the image.
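
In config that would look roughly like this (reusing the camera entity and URL from the example above):

image_processing:
  - platform: doods
    url: "http://<my docker host>:8080"
    detector: default
    source:
      - entity_id: camera.front_yard
    area:
      # Detections must fall within the bottom 62% of the image
      top: 0.38
    labels:
      - person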

Hey There,
First, Thanks for this amazing tool.
I’ve been using doods reliably for a while and I love it.

Is it possible to disable the scan interval completely?
I currently have it set to a high value like 10000, but I’d like it to never scan unless I tell it to.

Thanks again

This is a global home assistant setting. You’d have to ask there. Doods doesn’t have any control.

hello there,

I’ve got doods2 working, but the object recognition seems slow. I’m not sure what normal times are supposed to be… what should one expect? I’m seeing 15-20 seconds.
Also, I was expecting my Pi4 to cop it on the CPU, but I haven’t seen a marked increase for the 5 cameras I’m monitoring with it.

You can see roughly a 10% CPU increase yesterday when I got it going… seems low?

cameras in use

example of my study doods monitoring

I’ve noticed in the developer tools states section it flips quite a bit from matching to not matching… whereas I’m never really out of the picture…

Is this related to my scanning interval maybe being too fast?
I really want to fine-tune it to get the best bang for buck, so to speak… can I load the CPU some more?

next will be learning all about node red it seems for automations…

cheers!! :slight_smile:

Is there any issue with the add-on?
I can’t get it to work on any platform (RPi32 and 64).
It does not show up in the app store under supervisor

You install it outside of HA; it is a Docker install. I followed this YT video… the guy is super clear and makes it a really easy process to follow: Object detection with ANY camera in Home Assistant - Tensorflow and DOODS - YouTube


Hi,
I was able to use this integration as well with good success. I’ve got to say it is maybe the easiest way to get started with image ML in Home Assistant that I’ve found.
I have two issues though:

  1. The DOODS counter sensor for some reason stores the value as text, so when checking the history it shows not as a line chart but as colored historical-text bars (screenshot below). Any way to output and store it as a number? (Maybe even leveraging the ‘new’ statistical sensor recording?)
  2. Are there any tried and tested repositories of known well-working ML models to apply? I tried using Google’s but found it a bit hard to identify the possible labels to use, or how to configure them.

Thanks!

I’ll see what I can do about the returned value. The models that are included are really some of the best available (at least as far as I know)


Can someone please walk me through the dumbass version of how to get this running in Docker but with an edited config file? I don’t know why I’m such an idiot when it comes to understanding Docker images. I’d preferably like to just have a docker-compose file that starts the image and runs it in the background, where I can find the volume easily to swap out the tflite models with my own (I made my own on Google Vertex AI).

Make a directory, put a config.yaml and a models directory with your models in it.
Edit the config.yaml and list the model files as models/model_filename.whatever.
Create a docker-compose.yaml file with this:

version: '3.2'
services:
  doods:
    image: snowzach/doods2:latest
    volumes:
      - ./config.yaml:/opt/doods/config.yaml
      - ./models:/opt/doods/models
    ports:
      - "8080:8080"

From that directory with the compose file, config.yaml and the models directory, run docker-compose up -d (or whatever variation of docker compose you have; sometimes it’s docker compose up -d).
Connect to http://<your_server_or_localhost>:8080/ and test it out.
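
For completeness, the config.yaml you mount mainly declares the detectors and points them at the model files in the mounted models directory. The snippet below is only a rough sketch under that assumption (the detector type, model file and label file names are placeholders); check the example config in the doods2 repository for the exact keys:

server:
  host: 0.0.0.0
  port: 8080
doods:
  detectors:
    - name: default
      # Placeholder detector definition pointing at your own model
      type: tflite
      modelFile: models/my_custom_model.tflite
      labelFile: models/my_custom_labels.txt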


Thanks for that! I’ll give it a whirl!

Hi,
I just ran into another issue. For some reason, when I set up a generic camera based on a still picture whose source is a .gif, the DOODS interface (both via the HA add-on method and directly in the DOODS web UI) says Internal Server Error (error logs pasted below). However, if I directly paste the original .gif source URL into the DOODS web UI, it works fine.
Based on the first line of the logs, it seems like GIF is not supported, yet I can feed the DOODS web UI the original .gif URL and it works just fine?

  • Any ideas or explanation why this might be happening?

Thanks.

PS: This scenario can be tested against this public feed: http://www.bcn.cat/transit/imatges/DiagonalMCristina.gif

2022-11-21 13:55:57,787 - uvicorn.access - INFO - 192.168.10.26:52811 - "POST /image HTTP/1.1" 500
[mpjpeg @ 0x2fe07c50] Unexpected Content-Type : image/gif
[mpjpeg @ 0x2fe07c50] Expected boundary '--' not found, instead found a line of 21 bytes
[mpjpeg @ 0x2fe07c50] Expected boundary '--' not found, instead found a line of 9 bytes
2022-11-21 13:56:52,842 - uvicorn.error - ERROR - Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/fastapi/applications.py", line 208, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/applications.py", line 112, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/base.py", line 57, in __call__
    task_group.cancel_scope.cancel()
  File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 572, in __aexit__
    raise ExceptionGroup(exceptions)
anyio._backends._asyncio.ExceptionGroup: 2 exceptions were raised in the task group:
----------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/base.py", line 55, in __call__
    response = await self.dispatch_func(request, call_next)
  File "/usr/local/lib/python3.8/dist-packages/prometheus_fastapi_instrumentator/instrumentation.py", line 172, in dispatch_middleware
    raise e from None
  File "/usr/local/lib/python3.8/dist-packages/prometheus_fastapi_instrumentator/instrumentation.py", line 169, in dispatch_middleware
    response = await call_next(request)
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/base.py", line 37, in call_next
    raise RuntimeError("No response returned.")
RuntimeError: No response returned.
----------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/base.py", line 30, in coro
    await self.app(scope, request.receive, send_stream.send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/exceptions.py", line 82, in __call__
    raise exc
  File "/usr/local/lib/python3.8/dist-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 656, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 259, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 61, in app
    response = await func(request)
  File "/usr/local/lib/python3.8/dist-packages/fastapi/routing.py", line 226, in app
    raw_response = await run_endpoint_function(
  File "/usr/local/lib/python3.8/dist-packages/fastapi/routing.py", line 159, in run_endpoint_function
    return await dependant.call(**values)
  File "/opt/doods/api.py", line 113, in image
    detect_response = self.doods.detect(detect_request)
  File "/opt/doods/doods.py", line 135, in detect
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(4.5.5) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
2022-11-21 13:56:52,900 - uvicorn.access - INFO - 192.168.10.26:52817 - "POST /image HTTP/1.1" 500