Face and person detection with Deepstack - local and free!

I have set up a sensor that shows me the time of the last scan. But unfortunately it updates on every scan; I only want to see scans where a person was detected. What should the code look like? Can someone help me here?

- platform: template
  sensors:
    floor_last_person:
      friendly_name: "Floor last Person"
      value_template: >-
        {{ as_timestamp(states.image_processing.deepstack_object_diskstation_floor.last_changed) | timestamp_custom("%d.%m.%Y %H:%M") }}
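One way to record only person detections (a sketch, not tested against your setup; the `input_datetime.floor_last_person` helper name is illustrative and the helper must be created first) is to listen for the `deepstack_object_detected` event instead of the entity's `last_changed`, filter on `name: person`, and store the event time:

```yaml
automation:
  - alias: "Record last person detection"
    trigger:
      - platform: event
        event_type: deepstack_object_detected
        # Only fire when the detected object is a person
        event_data:
          name: person
    condition:
      # Restrict to the floor camera's image_processing entity
      - condition: template
        value_template: "{{ trigger.event.data.entity_id == 'image_processing.deepstack_object_diskstation_floor' }}"
    action:
      - service: input_datetime.set_datetime
        data:
          entity_id: input_datetime.floor_last_person
          datetime: "{{ now().strftime('%Y-%m-%d %H:%M:%S') }}"
```

The template sensor can then display `states('input_datetime.floor_last_person')` instead of the entity's `last_changed`.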

Yes, I managed to make it work with Shinobi.

Many thanks for this code. With this code I receive the live cam via push message. Do you also have an example of how to receive the latest snapshot of the corresponding cam instead of the live cam?

Yes, you add a url to the attachment instead of the entity_id. This also assumes you output the image to a folder called "deepstack" and save files with timestamps. The deepstack_object_detected event doesn't provide the output file name, so I have had to use the time of the event to derive the filename. It works 99% of the time - I get the occasional push without an image.

  action:
  - data:
      data:
        attachment:
          content-type: jpeg
          url: https://my.homeassistant.com/local/deepstack/{{ trigger.event.data.entity_id.replace('image_processing.','')
          }}_{{ as_timestamp(trigger.event.time_fired) | int | timestamp_custom("%Y-%m-%d_%H-%M-%S")
          }}.jpg
        push:
          category: camera
          thread-id: '{{ trigger.event.data.entity_id.replace(''image_processing.deepstack_object_'','''').replace(''_'','''')
            }} detection'
      message: '{{ trigger.event.data.name }} with confidence {{ trigger.event.data.confidence
        }}%'
      title: New object detection - {{ trigger.event.data.entity_id.replace('image_processing.deepstack_object_','').replace('_','')
        }}
    service: notify.mobile_app_iphone

Is there anyone here who has tested this and other apps like Frigate? I would like to know if this object (person) detection performs better at night than Frigate. I am a Frigate user and it's pretty blind at night, so I need to think about alternatives, as the thieves here do their work at night only.

It's ok but definitely not perfect. During the day I'd say it's 95% accurate. At night… maybe 60%?


Hi,

I am having a problem getting the Home Assistant integration to work. DeepStack works fine, and when I do a curl from HA it returns exactly what's expected.

But when I run the integration in Home Assistant I get no detection.

I can't understand what I am doing wrong :frowning:

My configuration.yaml

camera:
  - platform: local_file
    file_path: /config/www/image.jpg
        

image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 5000
    api_key: mysecretkey
    save_file_folder: /config/snapshots/
    save_timestamped_file: True
    # roi_x_min: 0.35
    roi_x_max: 0.8
    #roi_y_min: 0.4
    roi_y_max: 0.8
    scan_interval: 5
    targets:
      - person
      - car
    source:
      - entity_id: camera.local_file
        name: Deepstack_object

DeepStack on Jetson!

Hello everyone. We are excited to share the release of the DeepStack GPU version for the Nvidia Jetson, with support for the full range of Jetson devices, from the 2GB Nano edition to the higher-end Jetson boards.

This supports the full spectrum of DeepStack features.
You can run DeepStack on the Jetson with the command below.

sudo docker run --runtime nvidia -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack-x1-beta

To run with the face apis, simply use -e VISION-FACE=True instead, for scene, use -e VISION-SCENE=True.

We are super excited to finally bring a stable version of DeepStack that runs on ARM64. We strongly recommend using this over the Raspberry + NCS version, as it is faster, more stable and the Jetson Nano is also less costly than the RPI + NCS combination.

We are working towards a full open source release before December, with support for custom detection models and the Windows Native Edition all scheduled for this week.

Thanks for all your feedback; we are excited to build the future AI platform with you all.


@fhedstrom in your config you have included api_key, but in your curl I can see you do not have it set.

Yes. Somehow it works without the API key when doing curl, but the integration doesn't.
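For reference, a sketch of a detection request that passes the key as form data (assuming the container was started with `-e API-KEY=mysecretkey`; the image path is illustrative):

    curl -X POST \
      -F image=@/config/www/image.jpg \
      -F api_key=mysecretkey \
      http://localhost:5000/v1/vision/detection

If curl succeeds without the key, the container may not actually have the key activated, in which case removing `api_key` from the integration config is worth trying.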

Here is a screenshot from the container config…

Hi Rob, what's the best way to update a DeepStack container?

hi! is there anything new regarding custom models? thanks!

Wowsers super cool :grinning::grinning:
Is face detection only possible on humans? I have at least 8 squirrels visiting my garden and I would love to be able to identify them


Hi, need some help please…
I'm running Hass.io.
Done with installing all the components (HACS) / container (Portainer) / config lines (configuration.yaml)…

When I run the "image_processing.scan" service, I get a result in the state of "image_processing.mydeekstack_detection"…

But I don't get any files saved to the folder.
And I was wondering if I need to somehow create an automation that will call the service every while to see if anything was detected?

Plus, when I try the curl command, I'm getting an "api error"…?

Please help! :slight_smile:
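If the built-in `scan_interval` isn't doing what you expect, an automation that calls the scan service on a timer is another option (a sketch; the entity_id and 30-second interval are illustrative):

```yaml
automation:
  - alias: "Periodic DeepStack scan"
    trigger:
      # Fire every 30 seconds
      - platform: time_pattern
        seconds: "/30"
    action:
      - service: image_processing.scan
        entity_id: image_processing.mydeepstack_detection
```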

While looking for a binary_sensor for the DeepStack image, I noticed your post. I've adapted the code accordingly, but unfortunately it doesn't work as expected. The binary sensor is triggered, but it only turns off very late. This is probably because the image_processing state for this channel is still above 0. How could I set up a binary sensor that works well here?

binary_sensor:
  - platform: template
    sensors:
      deepstack_flur_sensor:
        friendly_name: "Deepstack Flur Sensor"
        device_class: motion
        value_template: "{{ states('image_processing.deepstack_object_diskstation_flur')|float > 0 }}"
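One hedged approach: a template binary sensor with `delay_off` turns off a fixed time after the detected-object count returns to 0, which can smooth out the late drop-out (a sketch using the same entity; the 30-second delay is illustrative):

```yaml
binary_sensor:
  - platform: template
    sensors:
      deepstack_flur_sensor_delayed:
        friendly_name: "Deepstack Flur Sensor (delayed off)"
        device_class: motion
        # Turn off 30 seconds after the count drops back to 0
        delay_off:
          seconds: 30
        value_template: "{{ states('image_processing.deepstack_object_diskstation_flur')|float > 0 }}"
```

Note that the sensor can only drop out once a new scan actually reports 0 objects, so the scan interval also bounds how quickly it turns off.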

long time lurker, first time poster

I have tried my best to read through this and can't find the answer to my question.

I want to know if/how to get this to learn people as they come and go, and then create an object for said person so I can set triggers based on whether that person is seen. Is this a thing? Did I miss it as I was scanning through the comments?

The ultimate goal is to greet people and auto unlock the door based on their face and to have a custom greeting per person.

Any help pointing me in the right direction would be greatly appreciated.

The face teaching part is here https://github.com/robmarkcole/HASS-Deepstack-face#service-deepstack_teach_face .

I don't use face recognition, so I don't know if you can use the whole video image for teaching or if you need to manually crop the images first. But I would run the recognition for a while. Then I would have a bunch of images which I would then teach to the AI. But like I said, I'm not sure if that's the correct procedure.
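As a sketch of what the teaching call looks like (based on the linked README; the person name and file path are illustrative):

```yaml
# Call from Developer Tools > Services
service: image_processing.deepstack_teach_face
data:
  name: superman
  file_path: /config/www/superman_1.jpeg
```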


Thx I will definitely take a look at this tonight.

Hi,
I'm trying to create a script that will remove old DeepStack images to stop them taking up too much space. I've got the below added in my configuration.yaml, which works, but I'd like to exclude the deepstack_camname_latest.jpg file, as I use these files as cameras to see when the last person was detected. Can anyone help with this?

shell_command:
  delete_camera_images: find /config/www/deepstack/ -type f -mtime +2 -exec rm "{}" \;

Replying to myself, as someone on Facebook has kindly helped confirm the command. This will delete all files older than the `-mtime` threshold (here one day) except the deepstack_camname_latest.jpg files:

shell_command:
  delete_camera_images_new: find /config/www/deepstack/ -type f -mtime +1 | grep -v '_latest\.jpg' | xargs -i rm "{}"
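An alternative that avoids the grep/xargs pipeline (and is safe with odd filenames) is find's own `! -name` filter. A quick demo in a throwaway directory (the `-mtime +1` age test is left out here so it works on freshly created files; add it back for real use):

```shell
# Set up a hypothetical snapshot folder
mkdir -p /tmp/deepstack_demo
touch /tmp/deepstack_demo/deepstack_cam1_2021-01-01_12-00-00.jpg
touch /tmp/deepstack_demo/deepstack_cam1_latest.jpg

# Delete everything except the *_latest.jpg snapshots
find /tmp/deepstack_demo -type f ! -name '*_latest.jpg' -delete

ls /tmp/deepstack_demo
```

After running this, only deepstack_cam1_latest.jpg remains.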