Face and person detection with Deepstack - local and free!

Never mind, I resolved my issue. I had to remove the API key as an environment variable.


I’m using this automation, but it won’t fire on its own; it only works when I trigger it manually:

-  alias: person detected
   trigger:
   - platform: event
     event_type: deepstack.object_detected
     event_data:
       object: person
   action:
     service: notify.mobile_app_iphone
     data:
       message: "Person"
       data:
         attachment:
           content-type: jpeg
         push:
           category: camera
         entity_id: camera.deepstack_camera

I’d appreciate any help

Try

event_data:
  name: person

instead. Something changed around Home Assistant 0.116 in how fields are matched in event triggers.
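For reference, the full automation from the post above with the corrected trigger would look something like this (same entity names as in the original post):

```yaml
- alias: person detected
  trigger:
  - platform: event
    event_type: deepstack.object_detected
    event_data:
      name: person        # was "object: person" before ~0.116
  action:
    service: notify.mobile_app_iphone
    data:
      message: "Person"
      data:
        attachment:
          content-type: jpeg
        push:
          category: camera
        entity_id: camera.deepstack_camera
```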


it works, thank you so much!


I’m trying this without luck; maybe it’s because I’m using service: notify.mobile_app instead of html5. Do you know if your automation works with iOS?

Does the notify work when you trigger it manually? I don’t have iOS so I can’t verify, but the trigger should be the same; the only difference is the notify part.

I have updated my trigger and notify a bit so I can look into it and share updated version later today.

Hello again @badstraw, just looked over my current config.

This is my alerting automation, now using Hangouts. You should be able to adapt this to any notify platform, such as html5 or Pushbullet. According to https://www.home-assistant.io/integrations/html5/, the html5 platform does not work on iOS, so consider switching to another platform. There is some useful information here: https://companion.home-assistant.io/docs/notifications/actionable-notifications/

Hangouts should work on iOS, I think, but I’m not sure. I use Hangouts for many notifications because it works very reliably, it’s free, and I get a history of all notifications in the “chat history”.

Here is the hangouts notification:

- id: '1601385011168'
  alias: Person Detected
  description: ''
  trigger:
  - platform: event
    event_type: deepstack.object_detected
  action:
  - service: rest_command.blue_iris_flag
    data:
      camera: '{{ camera }}'
      memo: '{{ trigger.event.data.name }} {{ percent }}% confidence'
  - delay:
      seconds: 2
  - service: notify.hangouts_owners
    data:
      title: A {{ trigger.event.data.name }} detected!
      message: 'Confidence {{ percent }}%. Check the NVR: https://nvr.barmen.nu
        for more details and recordings.'
      data:
        image_file: '{{ url }}'
  variables:
    timestamp: '{{ as_timestamp(trigger.event.time_fired) | timestamp_custom("%Y-%m-%d_%H-%M-%S")
      }}'
    entity_id: '{{ trigger.event.data.entity_id | regex_findall_index("\.(.*)") }}'
    folder: /config/www/snapshots/
    url: '{{ folder }}{{ entity_id }}_{{ timestamp }}.jpg'
    camera: '{{ trigger.event.data.entity_id | regex_findall_index("deepstack_object_blueiris_(.*)")  }}'
    percent: '{{ trigger.event.data.confidence | int }}'
  mode: single

This automation is triggered with this automation to do the object detection:

- id: '1601371973424'
  alias: Object Detection Front Door
  description: Check if there is a person at the front door
  trigger:
  - platform: state
    entity_id: binary_sensor.blueiris_front_door_motion
    to: 'on'
  condition:
  - condition: state
    entity_id: input_boolean.camera_deepstack
    state: 'on'
  action:
  - service: image_processing.scan
    data: {}
    entity_id: image_processing.deepstack_object_blueiris_front_door
  mode: single

I have one such automation per camera.

This is the rest command for Blue Iris:

rest_command:
  blue_iris_flag:
    url: http://192.168.0.22/admin?trigger&camera={{ camera }}&user=putyourusernamehere&pw=passwordgoeshere&flagalert=1&trigger&memo={{ memo }}
    method: GET

Replace these with your own IP address, username, and password.

This setup does the following: when there is movement on one of the cameras, a picture of the movement is sent to DeepStack for analysis. If an object is detected, a notification is sent (using Hangouts in this case). Since I had some issues finding the current file name for the image, I used the new variables feature in Home Assistant. Using the timestamp I can reconstruct the current filename and attach it to the notification.

On a positive person detection I also call a rest_command to flag the video recording in Blue Iris. It triggers the current camera and flags the video. This way I can filter recordings in Blue Iris by flagged videos to show only the ones with persons (or cars) detected.

This is the deepstack_object configuration I use:

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.0.21
    port: 81
    #api_key: mysecretkey
    save_file_folder: /config/www/snapshots/
    save_timestamped_file: True
    # roi_x_min: 0.35
    # roi_x_max: 0.8
    # roi_y_min: 0.4
    # roi_y_max: 0.8
    targets:
      - person
      - car
    source:
      - entity_id: camera.blueiris_front_door

Again, I have one of these per camera.
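As a side note, the `source` option accepts a list, so if you prefer a single `deepstack_object` entry instead of one per camera, something like this should work (a sketch; the second camera name is a made-up example):

```yaml
image_processing:
  - platform: deepstack_object
    ip_address: 192.168.0.21
    port: 81
    save_file_folder: /config/www/snapshots/
    save_timestamped_file: True
    targets:
      - person
      - car
    source:
      # all cameras share the same settings in this variant
      - entity_id: camera.blueiris_front_door
      - entity_id: camera.blueiris_back_door   # hypothetical second camera
```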


Wow, a huge improvement running latest compared to a version of a few months ago! Processing of images has gone from 600ms (+/- 10ms), down to 210ms (+/- 30ms) using Docker on a Windows machine with no other changes.

What’s the difference between the latest and cpu-x5-beta builds?

I notify the mobile app on both iOS and Android (separate data: stanzas under the automation action); it works on both, even on paired wearables (Galaxy Watch / Apple Watch S5):

  - data:
      data:
        attachment:
          content-type: jpeg
        push:
          category: camera
          thread-id: '{{ trigger.event.data.entity_id.replace(''image_processing.deepstack_object_'','''').replace(''_'','''')
            }} detection'
        entity_id: camera.{{ trigger.event.data.entity_id.replace('image_processing.deepstack_object_','')
          }}
      message: '{{ trigger.event.data.name }} with confidence {{ trigger.event.data.confidence
        }}%'
      title: New object detection - {{ trigger.event.data.entity_id.replace('image_processing.deepstack_object_','').replace('_','')
        }}
    service: notify.mobile_app_iphone

Hello @mynameisdaniel. Thanks a lot for the feedback. The latest tag always refers to the most recent release, while a specific tag like cpu-x5-beta points to a fixed build and will never be updated; any new update is published under a new tag.

So, to get the latest version, you can always simply pull the latest tag if you are unsure which specific tag to run.
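In docker-compose terms, the difference is just which tag goes in the image line (a sketch using the two tags mentioned above; ports and names are assumptions):

```yaml
services:
  deepstack:
    container_name: deepstack
    # rolling tag: re-pulling this gets you the newest release
    image: deepquestai/deepstack:latest
    # fixed tag: pin this instead if you want a build that never changes
    # image: deepquestai/deepstack:cpu-x5-beta
    ports:
      - 5000:5000
```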

Hello David, the face recognition API is quite accurate, and you should give it a try. Check https://python.deepstack.cc/face-recognition

For improved accuracy, register about 3 to 4 pictures per person you want to recognize.
I would love to know how this works for you.
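If you want to drive face recognition from Home Assistant rather than the Python client, robmarkcole also publishes a deepstack_face platform; a minimal sketch (the IP, port, and camera name here are assumptions, not from the post above):

```yaml
image_processing:
  - platform: deepstack_face
    ip_address: 192.168.0.21   # hypothetical DeepStack host
    port: 5000
    detect_only: False         # False = recognize registered faces, not just detect
    source:
      - entity_id: camera.blueiris_front_door
```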


Hello @fuzzymistborn, thanks for reporting this. Can you share the logs from DeepStack from when this occurred?

I fixed it by removing the API key from my environment variables.


Hello everyone! This is my first post in the forum. I’m new to Home Assistant and Deepstack. The reason I’m posting in this thread is that I’d really like to get Deepstack running on my QNAP TS 470. Currently, I’m running Deepstack on my Blue Iris Windows 10 PC to manage and analyze the 11 security cameras I have for my property.

My apologies for not reading through this massive thread, but I have two asks that I’d really appreciate your help with. I’m hoping others in this thread have a similar setup.

  1. I need help getting Deepstack up and running on my QNAP using Container Station.
  2. I need help getting Home Assistant also running on my QNAP using Container Station.

Thanks!

Can I install VISION-FACE in the same container as the VISION-DETECTION environment? I have VISION-DETECTION processing Blue Iris snapshots on a Home Assistant VirtualBox VM. I can increase the CPU cores and RAM on that VM. I am running VirtualBox on a dual Intel Xeon E5-2680 v3 with 32 GB RAM for now.

Yes, I have 1 container that handles both.


How can I add an additional environment variable to the current container? Should I just add VISION-FACE=True, or are there additional steps I need to take?

Just add both environment variables. This is my docker-compose:


  deepstack:
    container_name: deepstack
    restart: always
    image: deepquestai/deepstack:latest
    ports:
    - 5000:5000
    environment:
    - VISION-DETECTION=True
    - VISION-FACE=True
    volumes:
    - ./deepstack:/datastore

I have five cameras integrated with DeepStack. The detection works quite well so far. However, if several cameras call the image_processing.scan service at the same time, an error message is logged. I think I’d have to add a condition to the automation to prevent that. The condition should check whether an image_processing.scan call is already running. What could that look like?


Logger: homeassistant.helpers.entity
Source: helpers/entity.py:477 
First occurred: 12. November 2020, 22:00:12 (14 occurrences) 
Last logged: 7:02:37

Update of image_processing.deepstack_object_diskstation_terrasse is taking over 10 seconds
Update of image_processing.deepstack_object_diskstation_eingang is taking over 10 seconds
Update of image_processing.deepstack_object_diskstation_carport is taking over 10 seconds
Update of image_processing.deepstack_object_diskstation_garten is taking over 10 seconds
Update of image_processing.deepstack_object_diskstation_flur is taking over 10 seconds
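I can't say for certain what causes the 10-second warnings, but one way to keep the scans from running in parallel is to merge the per-camera automations into a single one with mode: queued, so each triggered scan waits for the previous one to finish. A sketch, assuming motion sensors named after the cameras in the log above (the binary_sensor names and the input_boolean are hypothetical):

```yaml
- alias: Object Detection (all cameras, serialized)
  mode: queued   # each run waits for the previous scan to finish
  max: 10        # how many pending runs to keep queued
  trigger:
  - platform: state
    entity_id:
    - binary_sensor.diskstation_terrasse_motion   # hypothetical sensor names
    - binary_sensor.diskstation_eingang_motion
    to: 'on'
  action:
  - service: image_processing.scan
    data:
      # derive the image_processing entity from the sensor that fired
      entity_id: >-
        image_processing.deepstack_object_{{
          trigger.entity_id
          | replace('binary_sensor.', '')
          | replace('_motion', '') }}
```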

@robmarkcole Thanks a lot for the awesome work.
It works like a charm!