Face and person detection with Deepstack - local and free!

My GPU is finally here and I will be doing some installation and setup to switch from dlib to deepstack. Looking forward to testing this! I just wish it was not a container installation… as I think it will make simultaneous GPU passthrough to both HA and deepstack impossible.

When deepstack is open-sourced you will be able to run it however you like.
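
In the meantime, GPU passthrough to the deepstack container itself is already possible, and more than one container can share the same card via the NVIDIA runtime. A minimal sketch for the deepstack side, assuming an NVIDIA GPU with the nvidia-container-toolkit set up on the host (untested here):

deepstack:
  container_name: deepstack
  image: deepquestai/deepstack:gpu   # the GPU build of the image
  runtime: nvidia                    # requires the NVIDIA container runtime on the host
  restart: unless-stopped
  ports:
    - 5000:5000
  environment:
    - VISION-DETECTION=True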

1 Like

May I ask what else you would use the GPU for in HA?
I’m planning on using the GPU for deepstack someday…just wondering what else it could be used for.

Not specifically for HA, but I am, for example, looking at the camera components and having them send a livestream instead of snapshots. The main application remains object and facial recognition, but I am looking to do so on multiple streams simultaneously.
Further down the road, I may be looking at deep learning for home automation.
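
For the multi-stream part, the deepstack_object platform takes a list of cameras under source, so a sketch along these lines should work (the camera entity names here are placeholders):

image_processing:
  - platform: deepstack_object
    ip_address: x.x.x.x
    port: 5000
    targets:
      - person
    source:
      - entity_id: camera.front_yard   # placeholder camera entities
      - entity_id: camera.back_yard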

Is there some sort of compression happening after adding the bounding boxes? Very difficult to read the percentages… any way to make this more legible?

It turns out to be difficult to create a function which will correctly annotate with text of an appropriate size, given the wide variety of shapes and sizes that images come in. I think I will remove that feature. Suggest you use the deepstack-ui to check what that thing is.

Ok, thanks for the quick response

@robmarkcole I’ve been working on this for a couple of days and seem to have hit a wall. The basic problem I am currently trying to solve: no .jpg ever gets written to the directory by deepstack.

  • Running deepstack in a Docker container on Ubuntu 20.04.
  • Running deepstack-ui in a Docker container.
  • HASS also runs in a Docker container.
  • Installed HASS-Deepstack-object.
  • Used your sample test curl command; Deepstack returns the proper information.
  • Passed a .jpg with the deepstack-ui; works fine.

docker-compose.yaml:
deepstack:
  container_name: deepstack
  restart: unless-stopped
  image: deepquestai/deepstack:noavx
  ports:
    - 5000:5000
  environment:
    - VISION-DETECTION=True
    - VISION-FACE=True
    - API-KEY=sampleapikey
  volumes:
    - /srv/docker/deepstack:/datastore

deepstack_ui:
  container_name: deepstack_ui
  restart: unless-stopped
  image: robmarkcole/deepstack-ui:latest
  environment:
    - DEEPSTACK_IP=x.x.x.x
    - DEEPSTACK_PORT=5000
    - DEEPSTACK_API_KEY=sampleapikey
    - DEEPSTACK_TIMEOUT=20
  ports:
    - 8501:8501

HASS Configurations:
whitelist_external_dirs:
  - /config/www

Within a ‘deepstack.yaml’ file:
image_processing:
  - platform: deepstack_object
    ip_address: x.x.x.x
    port: 5000
    api_key: sampleapikey
    save_file_folder: /config/www/deepstack_person_images/frontyard/
    save_timestamped_file: True
    scan_interval: 10
    confidence: 50
    targets:
      - person
    source:
      - entity_id: camera.front_yard
        name: person_detector_front_yard

I have an automation:

- alias: image processing
  description: ''
  trigger:
  - entity_id: binary_sensor.motion_front_yard
    from: 'off'
    platform: state
    to: 'on'
  condition: []
  action:
  - data: {}
    entity_id: image_processing.person_detector_front_yard
    service: image_processing.scan

The automation triggers fine, and I would then expect a .jpg to be added to the folder, but no image ever gets written.

What else am I missing?

An image is only saved if there is a valid detection.

Of course. I go out in the driveway and stand there and wave my arms, etc. The automation is triggered. I’m certainly a person. Since implementing this, lots of people have walked across the area. The automation is always triggered, but no .jpg is ever written. Where do I go next? I’m wondering if my camera feed (Ubiquiti/UniFi) is correct for the service.

Is it that you don’t get any jpg generated, or that you don’t get the latest jpg generated?
I get the latest only if I comment out the ROI lines in the config. It doesn’t look like you included them, so that likely isn’t your issue. Just curious if we are having similar problems here.
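
(For context: ROI stands for region of interest. In this integration the ROI options are roi_x_min, roi_y_min, roi_x_max and roi_y_max, given as fractions of the frame. A sketch of the lines I mean, with placeholder values:)

image_processing:
  - platform: deepstack_object
    ip_address: x.x.x.x
    port: 5000
    roi_x_min: 0.35   # region of interest, as fractions of the frame
    roi_y_min: 0.30
    roi_x_max: 0.80
    roi_y_max: 0.80
    source:
      - entity_id: camera.front_yard   # placeholder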

I don’t get any jpg generated, ever. So, I’m still trying to figure out if the deepstack service is even being called/used or if I have some other setup/configuration problem.

You can view the deepstack logs to check the requests made.

Is anyone having issues with deepstack_object not being reset after a successful detection?

It is intermittent: sometimes it resets after a minute or so, other times it never resets and requires a restart. I test the value of this object in various automations to know when a car or truck is identified, or if I’ve already started another automation. This is the syntax I’ve been using. If there is a better way, or a way to reset the object to 0 once it’s been triggered, that would work as well.

I’m using a Google Coral on a 2nd Pi, not that it should make any difference…


image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.251
    port: 5000
    confidence: 80
    save_timestamped_file: true
    save_file_folder: /config/www/snapshots
    targets:
      - car
      - truck
    source:
      - entity_id: camera.frontgate_detector
        name: frontgate_object_detection

And my automation, which won’t trigger when there’s motion because the object is still at 1 from a detection 2 hours ago:

- id: '1589324257510'
  alias: Front Gate Motion Detection Automation
  trigger:
  - entity_id: binary_sensor.frontgate_detector_motion_detected
    from: 'off'
    platform: state
    to: 'on'
  condition:
  - after: sunrise
    before: sunset
    condition: sun
  - condition: not
    conditions:
    - condition: template
      value_template: '{{  is_state("image_processing.frontgate_object_detection",
        "1") }}'
  action:
  - data: {}
    entity_id: image_processing.frontgate_object_detection
    service: image_processing.scan

When it gets stuck, the value template always evaluates to 1. It should reset to 0.

This iteration worked, but just barely:

ANY advice appreciated.

The state is 1 until a new state of 0 is recorded. Sensors do not automatically reset to any value; no sensor does this. A better approach is to use the events fired by the integration.

Ok, you’ve got my attention. What event is fired when an object is detected? And is there an example of how I can use it?

Edit: I found references to events. This is my test automation. It NEVER fires, even though Deepstack is detecting objects (me = person) at 98.5%, which is over the 85% threshold I set, AND it is even saving images with bounding boxes.

  alias: test event
  description: ''
  trigger:
  - event_data:
      object: person
    event_type: image_processing.object_detected
    platform: event
  - event_data:
      object: truck
    event_type: image_processing.object_detected
    platform: event
  - event_data:
      object: car
    event_type: image_processing.object_detected
    platform: event
  condition: []
  action:
  - delay: 00:00:01

Questions:

  • Why does the automation not trigger? These should be OR statements, not AND, right?
  • What does ROI stand for? Is that the count of objects detected over the threshold?
  • Is there a log (not a listener) anywhere that would show the actual events being fired?

Lots to unpack here.

EDIT (hopefully final edit): I set up listening to all (*) events and I found this:
Event 42 fired 7:52 AM:

    "event_type": "deepstack.object_detected",
    "data": {
        "bounding_box": {
            "height": 0.701,
            "width": 0.14,
            "y_min": 0.27,
            "x_min": 0.411,
            "y_max": 0.971,
            "x_max": 0.551
        },
        "box_area": 0.098,
        "centroid": {
            "x": 0.481,
            "y": 0.621
        },
        "name": "person",
        "confidence": 91.016,
        "entity_id": "image_processing.frontgate_object_detection"
    },
    "origin": "LOCAL",
    "time_fired": "2020-06-28T14:52:51.158678+00:00",
    "context": {
        "id": "c206aa41a18a44a7a2fd69568ba8b0bc",
        "parent_id": null,
        "user_id": null
    }
}

So it would appear the correct event to trigger on is deepstack.object_detected, NOT image_processing.object_detected.
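
For reference, here is the earlier test automation with the corrected event type. Note from the payload above that the event data key is name rather than object, so this sketch filters on name:

  alias: test event
  description: ''
  trigger:
  - event_data:
      name: person
    event_type: deepstack.object_detected
    platform: event
  - event_data:
      name: truck
    event_type: deepstack.object_detected
    platform: event
  - event_data:
      name: car
    event_type: deepstack.object_detected
    platform: event
  condition: []
  action:
  - delay: 00:00:01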

YUP. that did it!

Jeff

1 Like

@Craigs and @Danbutter
I was where you guys are a month ago. I ended up with this card, which shows literally everything that changes as deepstack goes to work, to help me learn and watch as things move in front of my cams.

The “Tensor” is the deepstack entity and the tap action of the object in Lovelace is “more info”. They all change in real time so you can see what happens. Tap tensor and you get a summary of everything that was detected (or not).

cards:
  - cards:
      - entity: binary_sensor.is_gate_open
        hold_action:
          action: more-info
        icon_height: 67px
        name: Trigger Gate to Open
        show_icon: true
        show_name: true
        tap_action:
          action: call-service
          service: script.1586228141488
        type: button
      - entity: counter.detection_counter
        name: Detections
        type: entity
    type: horizontal-stack
  - aspect_ratio: 66%
    camera_view: live
    entity: camera.frontgate_detector
    hold_action:
      action: none
    image: ''
    tap_action:
      action: more-info
    type: picture-entity
  - cards:
      - cards:
          - entity: binary_sensor.gate_open_sw
            hold_action:
              action: more-info
            icon_height: 25px
            name: Gate Reed SW
            show_icon: true
            show_name: true
            tap_action:
              action: more-info
            type: button
          - entity: binary_sensor.frontgate_detector_motion_detected
            name: Motion
            type: entity
          - entity: input_boolean.gate_open
            hold_action:
              action: more-info
            icon: 'mdi:door-closed'
            icon_height: 20px
            name: Opening?
            show_icon: true
            show_name: true
            tap_action:
              action: more-info
            type: button
          - entity: image_processing.frontgate_object_detection
            hold_action:
              action: more-info
            icon_height: 27px
            name: Tensor
            show_icon: true
            show_name: true
            tap_action:
              action: more-info
            type: button
        type: horizontal-stack
    type: horizontal-stack
type: vertical-stack

3 Likes

I’m having trouble seeing any object-detected events, even though the state of the detector shows the objects I’ve targeted. Also, I’m running multiple cameras, so I’m trying to build a push notification triggered by each one. This is the trigger I’m using:

- alias: Detect person at front door
  trigger:
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        entity_id: front_door_object_detector
        object: person

I couldn’t find any examples using entity_id. I have the rest working, as in I can trigger the image scan, see the processing, and see the results correctly on front_door_object_detector, but no event fires.

Hi @robmarkcole - am I right in thinking that the tflite system you developed for the Pi only does face detection, not recognition? Would you recommend deepstack for recognition?

Thanks.

@lockytaylor I recommend deepstack, full stop. tflite is just a side project.
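
For recognition there is a companion custom component, HASS-Deepstack-face, which talks to the same Deepstack server with VISION-FACE=True enabled (as in the docker-compose file earlier in the thread). A minimal config sketch, with a placeholder camera entity:

image_processing:
  - platform: deepstack_face
    ip_address: x.x.x.x
    port: 5000
    source:
      - entity_id: camera.front_door   # placeholder camera entity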

@dusing try dropping the entity_id filter and see what comes through.
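
If you do keep the filter, note that in the event payload earlier in the thread the entity_id is fully qualified and the detected label lives under name, so under those assumptions the trigger would look like:

- alias: Detect person at front door
  trigger:
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        entity_id: image_processing.front_door_object_detector
        name: person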