Face and person detection with Deepstack - local and free!

Thanks, we have this in our plans. Apart from the Raspberry Pi Docker version coming in November, I can’t give a timeline for the NCS and Google Coral support.

We plan to release a version for the NVIDIA Jetson before the year runs out.

Note that we updated the DeepStack image again today to fix a bug that occurred when no detections are found.

Run

deepquestai/deepstack:latest

or

deepquestai/deepstack:cpu-x5-beta
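
For reference, here is a minimal docker-compose sketch of how one of those images might be run. The port mapping, volume, and the VISION-DETECTION / VISION-FACE environment variables are assumptions based on my reading of the DeepStack docs rather than anything stated in this thread, so adjust them to your setup:

# Minimal docker-compose sketch (port, volume and environment variable names assumed):
version: "3"
services:
  deepstack:
    image: deepquestai/deepstack:latest   # or deepquestai/deepstack:cpu-x5-beta
    restart: unless-stopped
    ports:
      - "81:5000"                  # DeepStack listens on port 5000 inside the container
    environment:
      - VISION-DETECTION=True      # enable the object detection endpoint
      - VISION-FACE=True           # enable the face detection/recognition endpoints
    volumes:
      - ./deepstack-data:/datastore   # persist registered faces between restarts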

Adding my other replies here as the forum prevents me from making further replies in the meantime.

Hi @alpat59, thanks for your interest in the RPi version. The images we just announced are desktop-based. The RPi version is coming in November and will have both NCS and non-NCS versions.

I am using DOODS right now with quite good success on detecting people from the security cameras in my garden.

However, I would like to identify individual faces so I can rule out family members from the detection results. How good are the results people are getting with face identification from security cameras using this setup?

@johnolafenwa great news! I’ve been waiting a long time for a CPU-only RPi4 Docker version (without NCS or similar accelerators). Are the images you announced exactly what I’m looking for?
Thank you

I’ve tried running the latest version of Deepstack that was released in the past few days, but I’m getting the following:

Logger: custom_components.deepstack_object.image_processing
Source: custom_components/deepstack_object/image_processing.py:254
Integration: deepstack_object ([documentation](https://github.com/robmarkcole/HASS-Deepstack-object))
First occurred: 6:10:43 PM (4 occurrences)
Last logged: 6:31:05 PM

Deepstack error : Error from request, status code: 401

At one point I ran a comparison of this running on a Pi 4 with and without the Coral stick. Inference times were significantly longer without it: 1-2 seconds with the Coral and 8-10 seconds without. My use case is a gate with cars, people and trucks, so I ended up ditching the Pi for an i5 NUC and using Rob’s deepstack REST server on the NUC to do the processing. Works like a charm. If inference times aren’t that important then by all means do it without the Coral; I just didn’t want cars/trucks waiting at my gate for 20 seconds before it opens. My gate now opens in about 3-4 seconds after processing all my automation steps.

Jeff

I was faced with the same problem. I bit the bullet and powered through to this solution:

It was replicated by another user, so it’s solid.
Jeff

NM, I resolved my issue. I had to remove the API key environment variable.
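
For anyone else hitting the 401: the key has to match on both sides, or be absent on both sides. A minimal sketch of the Home Assistant side, assuming the option names from the component config quoted further down in this thread (the container-side variable is API-KEY in the DeepStack docs, as far as I recall):

# Home Assistant side (deepstack_object platform); the container must be started
# with a matching API-KEY environment variable, or with no key at all:
image_processing:
  - platform: deepstack_object
    ip_address: 192.168.0.21     # assumed host, same as the example further down
    port: 81
    api_key: mysecretkey         # remove this line if the container has no API-KEY set
    source:
      - entity_id: camera.blueiris_front_door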

I’m using this automation, but it won’t work unless I trigger it manually:

-  alias: person detected
   trigger:
   - platform: event
     event_type: deepstack.object_detected
     event_data:
       object: person
   action:
     service: notify.mobile_app_iphone
     data:
       message: "Person"
       data:
         attachment:
           content-type: jpeg
         push:
           category: camera
         entity_id: camera.deepstack_camera

I’d appreciate any help

Try

event_data:
  name: person

instead. Something changed around 0.116 to do with matching fields in event triggers.
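
In context, a minimal sketch of the corrected trigger (the event type and value come from the automation above; only the key name changes):

# Corrected event trigger for post-0.116 field matching:
trigger:
  - platform: event
    event_type: deepstack.object_detected
    event_data:
      name: person          # was "object: person" in the original automation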

It works, thank you so much!

I’m trying this without luck; maybe it’s because I’m using service: notify.mobile_app instead of html5. Do you know if your automation works with iOS?

Does the notify work when you trigger it manually? I do not have iOS so I cannot verify, but the trigger should be the same; the only difference is the notify part.

I have updated my trigger and notify a bit, so I can look into it and share the updated version later today.

Hello again @badstraw, just looked over my current config.

This is my alerting automation, now using Hangouts. You should be able to adapt it to any notify platform, like html5, Pushbullet, or something else. According to https://www.home-assistant.io/integrations/html5/, html5 does not work on iOS, so consider switching to another platform. See some information here: https://companion.home-assistant.io/docs/notifications/actionable-notifications/

Hangouts should work on iOS, I think, but I’m not sure. I use Hangouts for many notifications because it works very reliably, it’s free, and I get a history of all notifications in the “chat history”.

Here is the Hangouts notification automation:

- id: '1601385011168'
  alias: Person Detected
  description: ''
  trigger:
  - platform: event
    event_type: deepstack.object_detected
  action:
  - service: rest_command.blue_iris_flag
    data:
      camera: '{{ camera }}'
      memo: '{{ trigger.event.data.name }} {{ percent }}% confidence'
  - delay:
      seconds: 2
  - service: notify.hangouts_owners
    data:
      title: A {{ trigger.event.data.name }} detected!
      message: 'Confidence {{ percent }}%. Check the NVR: https://nvr.barmen.nu
        for more details and recordings.'
      data:
        image_file: '{{ url }}'
  variables:
    timestamp: '{{ as_timestamp(trigger.event.time_fired) | timestamp_custom("%Y-%m-%d_%H-%M-%S")
      }}'
    entity_id: '{{ trigger.event.data.entity_id | regex_findall_index("\.(.*)") }}'
    folder: /config/www/snapshots/
    url: '{{ folder }}{{ entity_id }}_{{ timestamp }}.jpg'
    camera: '{{ trigger.event.data.entity_id | regex_findall_index("deepstack_object_blueiris_(.*)")  }}'
    percent: '{{ trigger.event.data.confidence | int }}'
  mode: single

That automation is triggered by this one, which does the object detection:

- id: '1601371973424'
  alias: Object Detection Front Door
  description: Check if there is a person at the front door
  trigger:
  - platform: state
    entity_id: binary_sensor.blueiris_front_door_motion
    to: 'on'
  condition:
  - condition: state
    entity_id: input_boolean.camera_deepstack
    state: 'on'
  action:
  - service: image_processing.scan
    data: {}
    entity_id: image_processing.deepstack_object_blueiris_front_door
  mode: single

I have one such automation per camera.

This is the rest command for Blue Iris:

rest_command:
  blue_iris_flag:
    url: http://192.168.0.22/admin?trigger&camera={{ camera }}&user=puturusernamehere&pw=passwordgoeshere&flagalert=1&trigger&memo={{ memo }}
    method: GET

Replace these with your own IP address, username, and password.

This setup does the following: when there is movement on one of the cameras, a snapshot of the movement is sent to DeepStack for analysis, and if an object is detected, a notification is sent (using Hangouts in this case). Since I had some issues finding the current file name for the saved image, I used the new variables feature in Home Assistant: using the timestamp I can build the current filename and attach it to the notification.

On a positive person detection I also use a rest_command to flag the video recording in Blue Iris. It triggers the current camera and flags the video. This way I can filter the recordings in Blue Iris by flagged videos to show only the ones with persons (or cars) detected.

This is the deepstack_object configuration I use:

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.0.21
    port: 81
    #api_key: mysecretkey
    save_file_folder: /config/www/snapshots/
    save_timestamped_file: True
    # roi_x_min: 0.35
    # roi_x_max: 0.8
    # roi_y_min: 0.4
    # roi_y_max: 0.8
    targets:
      - person
      - car
    source:
      - entity_id: camera.blueiris_front_door

Again, one per camera.

Wow, a huge improvement running latest compared to a version of a few months ago! Processing of images has gone from 600ms (+/- 10ms) down to 210ms (+/- 30ms) using Docker on a Windows machine, with no other changes.

What’s the difference between the latest and cpu-x5-beta builds?

I notify mobile app on both iOS and Android (separate data: stanzas under the automation action). It works on both, even on paired wearables (Galaxy Watch/Apple Watch S5):

  - data:
      data:
        attachment:
          content-type: jpeg
        push:
          category: camera
          thread-id: '{{ trigger.event.data.entity_id.replace(''image_processing.deepstack_object_'','''').replace(''_'','''')
            }} detection'
        entity_id: camera.{{ trigger.event.data.entity_id.replace('image_processing.deepstack_object_','')
          }}
      message: '{{ trigger.event.data.name }} with confidence {{ trigger.event.data.confidence
        }}%'
      title: New object detection - {{ trigger.event.data.entity_id.replace('image_processing.deepstack_object_','').replace('_','')
        }}
    service: notify.mobile_app_iphone

Hello @mynameisdaniel. Thanks a lot for the feedback. The latest tag always refers to the most recent update, while a specific tag like cpu-x5-beta points to what is currently the latest release and will never be updated; any new update will be published as a new tag.

So, if you are not sure which specific tag to run, you can always simply pull the latest tag to get the most recent version.

Hello David, the face recognition API is quite accurate and you should give it a try. Check https://python.deepstack.cc/face-recognition

For improved accuracy, register about 3-4 pictures per person you want to filter out.
I’d love to know how this works for you.
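
If it helps, here is a minimal sketch of how this could be wired into Home Assistant with robmarkcole's companion deepstack_face platform, mirroring the deepstack_object config quoted later in this thread. The option names and the camera entity are assumptions on my part, so check that component's README before copying:

image_processing:
  - platform: deepstack_face
    ip_address: 192.168.0.21      # assumed DeepStack host, same as the object config
    port: 81
    detect_only: False            # False = recognise registered faces, not just detect them
    source:
      - entity_id: camera.blueiris_front_door   # assumed camera entity
        name: face_front_door

Faces are then registered with the suggested 3-4 pictures per person, either via the component's face-teaching service (check its README for the exact service name) or directly against the API described on the linked page.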

Hello @fuzzymistborn, thanks for reporting this. Can you share the logs from DeepStack from when this occurred?

I fixed it by removing the API key from my environment variables.

Hello everyone! This is my first post in the forum. I’m new to Home Assistant and Deepstack. The reason I’m posting in this thread is that I’d really like to get Deepstack running on my QNAP TS 470. Currently, I’m running Deepstack on my Blue Iris Windows 10 PC to manage and analyze the 11 security cameras I have for my property.

My apologies for not reading through this massive thread, but I have two asks that I’d really appreciate your help with. I’m hoping others in this thread have a similar setup.

  1. I need help getting Deepstack up and running on my QNAP using Container Station.
  2. I need help getting Home Assistant also running on my QNAP using Container Station.

Thanks!