Face and person detection with Deepstack - local and free!

Okay, I will see if I can figure out why.
I just don't get it, as I have my laptop here with me, watching the folder where DeepStack saves the latest_car or latest_person file, and every time image_processing is called the file updates fine and I see the latest_xxx get refreshed.

The automation only points at that file, so I just can't figure out why or how the camera has any link to that file after it has been delivered to DeepStack for processing and saved to the folder :(.

Maybe I should try dlib or another integration to see if there is any difference.

It sounds like a race condition somewhere; I suggest putting a sleep of 1 second in the automation.
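For illustration, a minimal sketch of that workaround (the entity names here are placeholders, and the 1-second delay is a guess to tune):

```yaml
action:
  - service: image_processing.scan
    entity_id: image_processing.front_door_person_detector
  # Give DeepStack time to write the latest_xxx file to disk before
  # anything downstream (e.g. a notification) tries to attach it
  - delay:
      seconds: 1
  - service: notify.mobile_app
    data:
      message: "Person detected"
```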

@robmarkcole again thank you very much for this component. Small question: are you planning to integrate them (deepstack_face and deepstack_object) into HA natively?
I don't claim to understand what's required, but I gathered it's quite an "endeavour", so I completely understand if there's no plan for it.
Thank you again

That is the plan. I have another PR open at the moment; one at a time is enough!

Just got my Coral USB stick and very excited to set this up, thanks @robmarkcole!

I’ll be setting up on a Pi 3 for now, but wondered if you've heard anything more about USB 3 support on the Pi 4, as I’m hoping that’ll improve the speeds long term.

For people using the coral USB stick, what kind of performance are you getting?

Is it still best to trigger a scan from an outside source like camera motion detection, or is there a better way? Here is my automation that I haven’t messed with since the beginning:

- alias: Detect person at front door
  trigger:
    - platform: mqtt
      topic: 'zoneminder/2'
  condition:
    - condition: template
      value_template: "{{ 'alarm' in trigger.payload }}"
    - condition: template
      value_template: "{{ (as_timestamp(now()) - as_timestamp(state_attr('automation.detect_person_at_front_door', 'last_triggered')) | int(0)) > 300 }}"
  action:
    - delay:
        seconds: 1
    - service: image_processing.scan
      entity_id: image_processing.front_door_person_detector
    - delay:
        seconds: 1
    - service_template: '{% if (states.image_processing.front_door_person_detector.state | int) > 0 %} notify.all_ios_devices {% endif %}'
      data_template:
        message: "Person detected at the front door"
        data:
          attachment:
            content-type: jpeg
          push:
            category: camera
          entity_id: camera.front_door

I see the new-ish image_processing.object_detected event. Is it recommended to be scanning images all the time now? How often?

It only makes sense to process when a new image is available. If your camera is continually capturing images, then setting a short scan_interval is a good idea. Obviously you need the hardware to cope with a high rate of requests.
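As a sketch, that could look like the following (the entity names and the 10-second interval are placeholders; pick an interval your hardware can sustain):

```yaml
image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 5000
    scan_interval: 10  # seconds between automatic scans
    source:
      - entity_id: camera.front_door
```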

Trying to install the object detection alongside the face detection. When I do that I get an error that the port cannot bind because it is already in use. Do I just need to use a different port than 5000?

I have two questions. First, I do not get the snapshot to be saved as a file. I use Hass.io in Docker on a Debian server.

  whitelist_external_dirs:
    - /tmp/
    - /config/www/
image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 5000
    api_key: "LeSecret"
    save_file_folder: '/config/www/'
    save_timestamped_file: True
    source:
      - entity_id: camera.ovan
        name: deepstack_person_detector
- id: '1574369774498'
  alias: Detect
  description: ''
  trigger:
  - device_id: 4920a532e6274dce9283974ea8eebb8a
    domain: binary_sensor
    entity_id: binary_sensor.grinden
    platform: device
    type: motion
  condition: []
  action:
  - service: image_processing.scan
    entity_id: image_processing.deepstack_person_detector

Any ideas? I can see how it finds persons (not perfect though).

I have also tried my Coral USB stick using this Docker image:

but it sends empty replies over REST. Any idea how to debug?

@Dayve67 run both models in the same docker command
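From memory, that looks something like the snippet below; the image name and activation flags are as I recall them, so check the DeepStack docs for the exact syntax:

```shell
# One DeepStack container serving both models on the same port,
# so the second instance no longer fights over port 5000
docker run -d \
  -e VISION-DETECTION=True \
  -e VISION-FACE=True \
  -v localstorage:/datastore \
  -p 5000:5000 \
  deepquestai/deepstack
```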

@Atma-n it is not necessary to wrap the path location in quotation marks, try:

 save_file_folder: /config/www/

Hi @robmarkcole,

I’ve been away for a while and had other things to deal with. I’ve picked up Rel 2.4 and note you’ve changed the file naming convention from one file name regardless of source to individual names related to the source of the captured image.
Previously I had one automation using image_processing.object_detected which would push an image to my phone regardless of which camera detected/created the image. The disadvantage of this method was that I would get multiple notifications depending on the number of persons detected in an image.

How do you suggest a Notification Automation for multiple camera sources should be configured for the 2.4 release?

Many thanks,

Nigel

I am sure it is straightforward with templating since the file names have a consistent pattern
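Something along these lines might work as a starting point. This is a hypothetical sketch only: the `<entity name>_latest.jpg` file-name pattern, the notify service name, and the event data fields are all assumptions, so check the 2.4 release notes for the real convention.

```yaml
# One notification automation covering every camera, deriving the saved
# file name from the entity that fired the event
- alias: Notify on any person detection
  trigger:
    - platform: event
      event_type: image_processing.object_detected
  action:
    - service: notify.all_ios_devices
      data_template:
        message: "Person detected by {{ trigger.event.data.entity_id }}"
        data:
          attachment:
            content-type: jpeg
            # Assumed file-name pattern; adjust to the actual 2.4 convention
            url: "/local/{{ trigger.event.data.entity_id.split('.')[1] }}_latest.jpg"
```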

As a NUC owner with no mini PCIe slot available, is it a good idea to buy a Coral mini PCIe plus a USB 3 to mini PCIe adapter, or is it better to go for the USB Coral, which is double the price?
Any thoughts on that?

Personally I suggest trying the mini PCIe; it's half the price, so worth a shot.

Hi, could you fix this issue?

Tbh I am prioritising other work using TensorFlow Lite, so I don't have the time to maintain this.

Hey Rob,
So will this integration not be updated anymore?
If so, are you recommending another approach instead?

Well, it's open source, so anyone can make a PR or fork it. I personally am using this

Looks promising.
Can you explain why you’ve chosen this one over deepstack?

Currently there are no tflite server images for Docker, or have I missed something?

Please help me locate the right thread, if this can’t be discussed here.