New Custom Component - Image Processing - Object Detection - DOODS


I thought it would be convenient to create automations based on event triggers (e.g. person detected, cat detected, etc.), but it's no big deal. I can achieve this in other ways. Thanks.

Sorry for the inconvenience; in the end it turns out the problem wasn't caused by DOODS after all.

Thanks again for the add-on.

@snowzach

Hi, great integration!

The docs suggest that you set the scan_interval to 10000 and run an automation when a binary_sensor or camera detects motion. For how long will DOODS analyse the camera when the automation triggers?

If it is only for a short time, then I'm wondering how to alter my automation to keep repeating for a set time.

I'm wondering because if it only analyses the camera for, say, 10 seconds, but there is a 20-second lag between what I see on my camera and whatever triggered the binary sensor, then it might not produce an actual result? Also, I'm running this on a Pi 4, so waiting for the result also comes into the timing of it all.

PS. My exact times might not make sense, but I hope you understand what I am trying to say.

Thanks

Hi @Juggler

I am about to implement this into my setup and just wondered if you had made any changes to your code since your initial configuration? I would also be interested to hear whether you found a suitable Lovelace view to manage the recordings?

Once. The image_processing.scan service tells DOODS to scan the image/feed once and report its findings.

So potentially if my camera feed has a lag then DOODS could miss it and remain 0? Is there a workaround to stop this?

Sure, build logic into your automation :wink:

My approach is:

  1. Snapshot the camera in question
  2. Run DOODS on the image, not the camera stream
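As a sketch, those two steps might look like this in configuration: a local_file camera exposes the saved still so DOODS can scan it. The entity names and paths here are my assumptions based on the configs later in this thread, not a tested setup.

```yaml
# Expose the saved snapshot as its own camera entity
camera:
  - platform: local_file
    name: front_door_still
    file_path: /config/www/snapshot/front_door_latest.jpg

automation:
  - alias: "snapshot then scan"
    trigger:
      platform: state
      entity_id: binary_sensor.frontdoor_motion
      to: "on"
    action:
      # 1. Snapshot the camera in question
      - service: camera.snapshot
        data:
          entity_id: camera.front_door
          filename: /config/www/snapshot/front_door_latest.jpg
      # 2. Run DOODS on the image, not the camera stream
      - service: image_processing.scan
        entity_id: image_processing.doods_front_door_still
```

This way DOODS always analyses the frame captured at the moment of the trigger, regardless of any stream lag.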

No further changes (yet). Still working on implementing a Lovelace view.

Ok thanks!

One final thing you might be able to help me with, please. I am stealing your setup, but because I'm on Hass.io I'm confused about what to do with the shell script you wrote. I have placed it in /config/automation/motion_video.sh but I'm not sure what the script's contents should be; any help would be appreciated.

I have created a package with all of your code and pasted it below.

#!/bin/bash

if [ "$1" = 'mv' ]; then
  mv "/config/tmp/$2_temp.mp4" "/config/tmp/$2_$3.mp4"
elif [ "$1" = 'rm' ]; then
  rm "/config/tmp/$2_temp.mp4"
fi
shell_command: 
  motion_video: "/config/automation/motion_video.sh {{action}} {{camera}} {{time}}"

image_processing:
- platform: doods
  scan_interval: 10000
  url: "http://192.168.0.200:8080"
  timeout: 60
  detector: inception
  source:
    - entity_id: camera.front_door
  file_out:
    - "/config/www/snapshot/{{ camera_entity.split('.')[1] }}_latest.jpg"
    - "/config/tmp/{{ camera_entity.split('.')[1] }}_{{ state_attr('input_datetime.lastmotion_'~camera_entity.split('.')[1], 'timestamp') | timestamp_custom('%Y%m%d_%H%M') }}.jpg" 
  confidence: 70
  labels:
    - name: person
    - name: car
    - name: truck



automation:
- alias: "camera motion on front door"
  trigger:
    platform: state
    entity_id: binary_sensor.frontdoor_motion
    to: 'on'
  # condition:
  #   - condition: template
  #     value_template: "{{ as_timestamp(now()) - as_timestamp(states.automation.camera_motion_on_front_door.attributes.last_triggered) | int > 60 }}"
  action:
    - service: input_datetime.set_datetime
      entity_id: input_datetime.lastmotion_front_door
      data_template:
        datetime: "{{ now().strftime('%Y-%m-%d %H:%M:%S') }}"
    - service: camera.record
      data:
        entity_id: camera.front_door
        filename: "/config/tmp/front_door_temp.mp4"
        duration: 20
    - service: image_processing.scan
      entity_id: image_processing.doods_front_door

- alias: "doods scan front_door"
  trigger:
    platform: state
    entity_id: image_processing.doods_front_door
  condition:
    condition: or
    conditions:
      - condition: template
        value_template: "{{ 'car' in state_attr('image_processing.doods_front_door', 'summary') }}"
      - condition: template
        value_template: "{{ 'truck' in state_attr('image_processing.doods_front_door', 'summary') }}"
      - condition: template
        value_template: "{{ 'person' in state_attr('image_processing.doods_front_door', 'summary') }}"
  action:
    - service: notify.mobile_app_stephen_s20
      data:
        message: "Motion Detected"
        data:
          image: "https://#########################.ui.nabu.casa/local/snapshot/front_door_latest.jpg"
    - delay: "00:00:25"
    - service: shell_command.motion_video
      data_template:
        action: "mv"
        camera: "front_door"
        time: "{{ state_attr('input_datetime.lastmotion_front_door', 'timestamp') | timestamp_custom('%Y%m%d_%H%M') }}"

Sorry… I have not used Hass.io. I'm running my HA in a Docker container on Ubuntu.

No worries! I think it's all working as intended. I get the following error, but the video clip seems to get renamed anyway.

2020-06-03 18:15:44 ERROR (MainThread) [homeassistant.components.shell_command] Error running command: `/config/automation/motion_video.sh {{action}} {{camera}} {{time}}`, return code: 1
NoneType: None
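A return code of 1 only tells you that some command inside the script failed, for example because the temp file had already been moved or the service was fired with empty arguments. Below is a defensive sketch of the same script that checks its inputs and prints a useful message instead; the argument order (action, camera, timestamp) and the /config/tmp paths are assumptions taken from the package posted earlier in the thread.

```shell
#!/bin/bash
# Hypothetical hardened variant of motion_video.sh.
# Argument order (action, camera, timestamp) and the /config/tmp
# directory are assumed from the package posted above.
TMP_DIR="${TMP_DIR:-/config/tmp}"

motion_video() {
  local action="${1:-}" camera="${2:-}" stamp="${3:-}"
  local temp="${TMP_DIR}/${camera}_temp.mp4"

  case "$action" in
    mv)
      # Refuse to move a recording that was never created
      [ -f "$temp" ] || { echo "no temp file for '${camera}'" >&2; return 1; }
      mv "$temp" "${TMP_DIR}/${camera}_${stamp}.mp4"
      ;;
    rm)
      # rm -f makes cleanup a no-op when the file is already gone
      rm -f "$temp"
      ;;
    *)
      echo "usage: motion_video.sh mv|rm <camera> [timestamp]" >&2
      return 1
      ;;
  esac
}

# Example: clean up a stale temp file (harmless if it does not exist)
motion_video rm front_door
```

With this variant a failure shows up in the Home Assistant log as a readable message rather than a bare `return code: 1`.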

Hey @snowzach,

Is it possible to have two areas defined or an excluded area?

As you can see from my driveway, one straight-line box is not ideal for every situation. But if I could have two areas, or exclude my neighbour's driveway, that would be perfect!

By default, if the detection is anywhere in the box, it will trigger… Lower your straight-line box. Or you can define a box for a specific label (car) and lower it just for that. See the docs in the first post…
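If I read the integration docs correctly, the per-label box mentioned here is the `area` option, which can be set globally or under an individual label; coordinates are fractions of the image, so a sketch like this would only report cars in the lower half of the frame (the numbers are placeholders to adapt to your driveway):

```yaml
labels:
  - name: car
    confidence: 60
    # Relative coordinates: top/left/bottom/right between 0 and 1
    area:
      top: 0.5
      left: 0
      bottom: 1
      right: 1
```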

Sorry for being dense but why would running DOODS on a snapshot be better than running it on the stream - are they both not the same image coming from the same device?

Timing…

So potentially if my camera feed has a lag then DOODS could miss it and remain 0? Is there a workaround to stop this?

If you snapshot when the event of interest happens, then you’ll have an image of whatever it is. You process that image and you get Doods looking at the thing you want.

If there are issues with the feed, then by the time DOODS gets to it, maybe whatever you're interested in has moved out of frame…

Thanks for explaining @Tinkerer.

Is there a way for HA to retain the DOODS state between restarts? I have two cars in my driveway, and if I restart, the counter goes to 0; then the next time there is motion, DOODS scans and identifies the two cars all over again.

This is a great add-on. Thanks to everyone for the detailed explanations of how to get things up and running. I’ve been somewhat successful.

I have a question I was hoping I could get some help with: how do I break a video down into still images and then analyze those? In my setup, a Ring doorbell triggers a motion automation that sends an image from the camera to DOODS, but that image is from right at the start of the recording, and usually nothing interesting has happened yet.

Can I use a configuration like the one below to break a video down into still images?

Shell command:
  ffmpeg -i input.flv -vf fps=1 out%d.png
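For what it's worth, wiring that ffmpeg call into Home Assistant as a shell_command could look like the fragment below. The `extract_frames` name, the `{{ video }}` variable, and the /config/tmp paths are my own placeholders, not something from this thread:

```yaml
shell_command:
  # {{ video }} is passed in the service-call data, e.g. "front_door.mp4"
  extract_frames: "ffmpeg -i /config/tmp/{{ video }} -vf fps=1 /config/tmp/frames/{{ video }}_%d.png"
```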

I was going to use a python script to get the videos. Something like:

# obtain ring doorbell camera object
# replace camera.front_door with your camera entity
ring_cam = hass.states.get("camera.front_door")

subdir_name = f"ring_{ring_cam.attributes.get('friendly_name')}"

# get video URL
data = {
    "url": ring_cam.attributes.get("video_url"),
    "subdir": subdir_name,
    "filename": ring_cam.attributes.get("friendly_name"),
}

# call downloader integration to save the video
hass.services.call("downloader", "download_file", data)

Is there a way to add the shell command to the python script?

How do I send the frames to DOODS?

Hey @snowzach,

I'm currently using the inception model with the Hass.io add-on on an RPi 4 4GB and it's going really well. Most scan times are 4 seconds, with the odd one at 16 seconds. I just bought a Coral USB and I'm wondering how to integrate it into DOODS?

My current config is:

server:
  port: '8080'
auth_key: ''
doods.detectors:
  - name: inception
    type: tensorflow
    modelFile: /share/doods/faster_rcnn_inception_v2_coco_2018_01_28.pb
    labelFile: /share/doods/coco_labels1.txt
    numThreads: 1
    numConcurrent: 1
    hwAccel: false

Any assistance would be appreciated

Thanks

@Eeeeeediot I can help with that. Set hwAccel to true. Download Edge TPU models and labels from here: https://coral.ai/models/
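Concretely, a Coral detector entry might look something like the sketch below: switch the detector type to tflite, point it at an Edge TPU-compiled model from coral.ai, and set hwAccel to true. The model and label file names are the ones coral.ai ships and are assumptions; adjust them to whatever you actually downloaded into /share/doods:

```yaml
doods.detectors:
  - name: edgetpu
    type: tflite
    modelFile: /share/doods/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
    labelFile: /share/doods/coco_labels.txt
    numThreads: 1
    numConcurrent: 1
    hwAccel: true   # hand inference off to the Coral Edge TPU
```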

I had to restart HA a few times before the USB was recognized.