Face and person detection with Deepstack - local and free!

Of course. I go out in the driveway and stand there and wave my arms, etc. The automation is triggered. I’m certainly a person. Since implementing this, lots of people have walked across the area. The automation is always triggered, but no .jpg is ever written. Where do I go next? I’m wondering if my camera feed (Ubiquiti/UniFi) is correct for the service.

Is it that you don’t get any jpg generated or that you don’t get the latest jpg generated?
I get the latest only if I comment out the lines with the ROI from the config. It doesn’t look like you included it so that likely isn’t your issue. Just curious if we are having similar problems here.
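
For reference, the ROI (region of interest) lines being discussed look something like this in the deepstack_object config. The option names here are taken from the custom integration’s documentation as I remember it, so treat the exact keys and values as an assumption if your version differs:

```yaml
image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.251   # your deepstack host
    port: 5000
    # Region of interest: only detections whose box falls inside this
    # region (values are fractions of the frame, 0.0 to 1.0) count.
    roi_x_min: 0.2
    roi_x_max: 0.8
    roi_y_min: 0.3
    roi_y_max: 0.9
    source:
      - entity_id: camera.frontgate_detector
```

Commenting these four `roi_*` lines out makes the integration consider the whole frame again, which matches the behavior described above.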

I don’t get any jpg generated, ever. So, I’m still trying to figure out if the deepstack service is even being called/used or if I have some other setup/configuration problem.

You can view the deepstack logs to check the requests made.
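
If you want to confirm the service is being called independently of Home Assistant, you can tail the container logs while triggering a scan, or POST a test image straight to the detection endpoint with curl. The container name `deepstack` and the sample image path are assumptions; adjust to your setup:

```shell
# Watch deepstack's logs while you call image_processing.scan from HA;
# each request from the integration should appear here
docker logs -f deepstack

# Or bypass HA entirely: a working server responds with JSON
# containing a "predictions" array for the detected objects
curl -X POST -F image=@test.jpg http://192.168.1.251:5000/v1/vision/detection
```

If the curl call returns predictions but nothing shows up when HA scans, the problem is on the Home Assistant side (camera entity or config) rather than deepstack itself.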

Is anyone having issues w/ the deepstack_object not being reset after a successful detection?

It is intermittent and sometimes resets after a minute or so, other times it never resets and requires a restart. I test the value of this object in various automations to know when a car or truck is identified or if I’ve already started another automation. This is the syntax I’ve been using. If there is a better way or a way to reset the object to 0 once it’s been triggered, that would work as well.

I’m using a Google Coral on a 2nd Pi, not that it should make any difference…


image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.251
    port: 5000
    confidence: 80
    save_timestamped_file: true
    save_file_folder: /config/www/snapshots
    targets:
      - car
      - truck
    source:
      - entity_id: camera.frontgate_detector
        name: frontgate_object_detection

And here is my automation, which won’t trigger when there’s motion because the object count is still at 1 from a detection 2 hours ago:

- id: '1589324257510'
  alias: Front Gate Motion Detection Automation
  trigger:
  - entity_id: binary_sensor.frontgate_detector_motion_detected
    from: 'off'
    platform: state
    to: 'on'
  condition:
  - after: sunrise
    before: sunset
    condition: sun
  - condition: not
    conditions:
    - condition: template
      value_template: '{{  is_state("image_processing.frontgate_object_detection",
        "1") }}'
  action:
  - data: {}
    entity_id: image_processing.frontgate_object_detection
    service: image_processing.scan

When it gets stuck, the value template always evaluates to 1. It should reset to 0.

This iteration worked, but just barely.

ANY advice appreciated.

The state is 1 until a new state of 0 is recorded. Sensors do not automatically reset to any value; no sensor does. A better approach is to use the events fired by the integration.

Ok, you’ve got my attention. What event is fired when an object is detected? And is there an example of how I can use it?

Edit: I found references to events. This is my test automation. It NEVER fires, even though Deepstack is detecting objects (me = person) at 98.5%, which is over the 85% threshold I set, AND it is even saving images with bounding boxes.

  alias: test event
  description: ''
  trigger:
  - event_data:
      object: person
    event_type: image_processing.object_detected
    platform: event
  - event_data:
      object: truck
    event_type: image_processing.object_detected
    platform: event
  - event_data:
      object: car
    event_type: image_processing.object_detected
    platform: event
  condition: []
  action:
  - delay: 00:00:01

Questions:

  • why does the automation not trigger? These should be OR statements, not AND, right?
  • what does ROI stand for? is that the count of objects detected over the threshold?
  • is there a log (not listener) anywhere that would show actual events being fired?

lots to unpack here.

EDIT (hopefully final edit): I set up listening to all (*) events and I found this:
Event 42 fired 7:52 AM:

{
    "event_type": "deepstack.object_detected",
    "data": {
        "bounding_box": {
            "height": 0.701,
            "width": 0.14,
            "y_min": 0.27,
            "x_min": 0.411,
            "y_max": 0.971,
            "x_max": 0.551
        },
        "box_area": 0.098,
        "centroid": {
            "x": 0.481,
            "y": 0.621
        },
        "name": "person",
        "confidence": 91.016,
        "entity_id": "image_processing.frontgate_object_detection"
    },
    "origin": "LOCAL",
    "time_fired": "2020-06-28T14:52:51.158678+00:00",
    "context": {
        "id": "c206aa41a18a44a7a2fd69568ba8b0bc",
        "parent_id": null,
        "user_id": null
    }
}

so it would appear the correct event to trigger on is deepstack.object_detected, NOT image_processing.object_detected
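
With the correct event name, the earlier test automation can be sketched like this. Note that the captured event’s data carries the object under a name key rather than object, so the filter below uses name; treat that key as an assumption that may vary between integration versions:

```yaml
- alias: deepstack detection event test
  trigger:
    # One trigger per target; any single match fires the automation (OR)
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        name: person
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        name: car
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        name: truck
  action:
    - delay: '00:00:01'
```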

YUP. that did it!

Jeff

1 Like

@Craigs and @Danbutter
I was where you guys are a month ago. I ended up with this card, which shows literally everything that changes as deepstack goes to work, to help me learn and watch as things move in front of my cams.

The “Tensor” is the deepstack entity and the tap action of the object in Lovelace is “more info”. They all change in real time so you can see what happens. Tap tensor and you get a summary of everything that was detected (or not).

cards:
  - cards:
      - entity: binary_sensor.is_gate_open
        hold_action:
          action: more-info
        icon_height: 67px
        name: Trigger Gate to Open
        show_icon: true
        show_name: true
        tap_action:
          action: call-service
          service: script.1586228141488
        type: button
      - entity: counter.detection_counter
        name: Detections
        type: entity
    type: horizontal-stack
  - aspect_ratio: 66%
    camera_view: live
    entity: camera.frontgate_detector
    hold_action:
      action: none
    image: ''
    tap_action:
      action: more-info
    type: picture-entity
  - cards:
      - cards:
          - entity: binary_sensor.gate_open_sw
            hold_action:
              action: more-info
            icon_height: 25px
            name: Gate Reed SW
            show_icon: true
            show_name: true
            tap_action:
              action: more-info
            type: button
          - entity: binary_sensor.frontgate_detector_motion_detected
            name: Motion
            type: entity
          - entity: input_boolean.gate_open
            hold_action:
              action: more-info
            icon: 'mdi:door-closed'
            icon_height: 20px
            name: Opening?
            show_icon: true
            show_name: true
            tap_action:
              action: more-info
            type: button
          - entity: image_processing.frontgate_object_detection
            hold_action:
              action: more-info
            icon_height: 27px
            name: Tensor
            show_icon: true
            show_name: true
            tap_action:
              action: more-info
            type: button
        type: horizontal-stack
    type: horizontal-stack
type: vertical-stack

3 Likes

I’m having trouble seeing any object-detected events even though the state of the detector shows the objects I’ve targeted. Also, I’m running multiple cameras, so I’m trying to build a push notification triggered by each one. This is the trigger I’m using:

- alias: Detect person at front door
  trigger:
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        entity_id: front_door_object_detector
        object: person

I couldn’t find any examples using entity_id. I have the rest working, as in I can trigger the image scan and see the processing and results correctly on front_door_object_detector, but no event fires.
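
One thing worth checking: in the event captured earlier in the thread, entity_id appears fully qualified ("image_processing.frontgate_object_detection"), so an event trigger filter likely needs the domain prefix too, and the object comes through under a name key. A sketch under those assumptions:

```yaml
- alias: Detect person at front door
  trigger:
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        # the event data carries the full entity id, domain included
        entity_id: image_processing.front_door_object_detector
        # captured events in this thread use "name", not "object"
        name: person
```

If it still doesn’t fire, dropping the filters one at a time (as suggested below the original post) shows which key is failing to match.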

Hi @robmarkcole - am I right in thinking that the tflite system you developed for the Pi only does face detection not recognition? Would you recommend deepstack for recognition?

Thanks.

@lockytaylor I recommend deepstack full stop. tflite is just a side project.

@dusing try dropping the entity_id filter and see what comes through

Thanks @robmarkcole. Just to confirm, Deepstack is compatible with both the NCS and NCS2 stick? Is the NCS2 better or just a new version?

Sorry, all good. I read a bit more and it seems it is fine with the NCS2, so I’ll get that. Thanks for your help @robmarkcole

Happy to announce there is a new release of deepstack with improved object detection, see the full post here. Alternatively just pull one of the following:

deepquestai/deepstack:gpu-x3-beta

deepquestai/deepstack:cpu-x3-beta
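
Switching to one of these is a pull plus a fresh container, not an in-place update. Something like the following, with the port mapping and the VISION-DETECTION flag per the usual deepstack run instructions (swap in the gpu-x3-beta tag if you have a GPU; the container name is an arbitrary choice):

```shell
docker pull deepquestai/deepstack:cpu-x3-beta

# Stop and remove the old container first, then run the new image with
# object detection enabled on the same port HA already points at
docker run -d --name deepstack \
  -e VISION-DETECTION=True \
  -p 5000:5000 deepquestai/deepstack:cpu-x3-beta
```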

@lockytaylor get the NCS2

2 Likes

@robmarkcole Do you see it possible to get a version for raspberry pi with docker?

I think that is on the roadmap; once deepstack is open source, I imagine this will happen quite quickly.

1 Like

We’ll keep an eye out. Thank you @robmarkcole for your great work.

1 Like

Is there a trick to updating the noavx version to the new beta? I pulled the docker image and set it up properly (or at least I think I did). I can access it from the URL at ip:5000, but I get a 401 error when using the Deepstack UI container.

edit: Scratch that. Detection in HA seems to be working, might just be an issue with the UI container.

I ran the UI with it fine. You are not ‘updating the noavx version’; you are pulling a completely new image.

Thanks @robmarkcole - that’s exactly what I did! Looking forward to getting it!