Face and person detection with Deepstack - local and free!

I am currently trying to build a Docker container on a Raspberry Pi 3B but have not been successful. Would you mind sharing your Dockerfile?

Maybe this is a dumb question, but what if I want to use both Deepstack-object and Deepstack-face?

I have tried this, so I hope it is correct:
docker run -e VISION-DETECTION=True -e VISION-FACE=True -v localstorage:/datastore -p 5000:5000 deepquestai/deepstack

So finally I tried teaching the Deepstack engine with a face: I copied a photo to the Ubuntu server running Docker and Deepstack, then went back to Home Assistant and used the teach face service. When I click Call Service, nothing happens.

What should I expect? How can I check whether it actually worked?

Are you seeing anything in the docker logs? For example:

[GIN] 2019/08/03 - 13:15:46 | 200 | 1.349301452s | 172.17.0.1 | POST /v1/vision/detection
[GIN] 2019/08/03 - 13:15:46 | 200 | 1.888751449s | 172.17.0.1 | POST /v1/vision/detection
[GIN] 2019/08/03 - 13:15:56 | 200 | 756.543856ms | 172.17.0.1 | POST /v1/vision/detection
[GIN] 2019/08/03 - 13:15:57 | 200 | 1.381159845s | 172.17.0.1 | POST /v1/vision/detection
[GIN] 2019/08/03 - 13:15:57 | 200 | 1.87328408s | 172.17.0.1 | POST /v1/vision/detection

How about the Home Assistant logs? Both should show activity if things are working properly.
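
If the Deepstack container's logs stay silent, you can take Home Assistant out of the equation and hit the API directly with curl. A minimal check, where the host, port and image path are placeholders for your own setup:

curl -X POST -F image=@test.jpg http://localhost:5000/v1/vision/detection
curl -X POST -F image=@test.jpg http://localhost:5000/v1/vision/face

A healthy container answers with a JSON body containing success: true, and the request shows up as a POST line in the docker logs like the ones above.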

@robmarkcole Great work on this! It was very straightforward to get object detection running.

I'm having issues getting the state as sensor data, mainly due to my lack of skill with templating.

I've tried the following, none of which return the count of person(s) detected:

    person_count_driveway:
      friendly_name: 'Number of people in Driveway'
      value_template: '{{ states.driveway_person_detector.state | int}}'
      unit_of_measurement: 'bodycount'

    person_count_driveway:
      friendly_name: 'Number of people in Driveway'
      value_template: '{{ states.driveway_person_detector.state }}'
      unit_of_measurement: 'bodycount'

Can anyone shed some light on how to achieve this?

I'd also love to get sensors for the other detected objects as well, but I'm not sure how to pull the JSON array out properly.

I don't see anything in the log that I can see on the Home Assistant web interface.
I do get a nice count of faces, and with object detection I also get persons and all the other "known" objects.
So I know for sure that there is a connection; I just don't know why it doesn't take my picture, because it doesn't recognize me after I run the teach command as described.

Is it possible to teach faces directly on the Ubuntu machine where I run Deepstack within Docker?
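
For what it's worth, Deepstack itself exposes a REST endpoint for registering faces, so teaching can also be done straight from the Docker host without going through Home Assistant at all. A minimal sketch, where face.jpg and the userid value are placeholders for your own photo and name:

curl -X POST -F image=@face.jpg -F userid=Yoinkz \
  http://localhost:5000/v1/vision/face/register

A response of success: true means the face was stored in the datastore volume.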

@bline79
This should work; note the template needs the full entity id, including the image_processing. domain:

person_count_driveway:
  friendly_name: 'Number of people in Driveway'
  value_template: '{{ states.image_processing.driveway_person_detector.state }}'
  unit_of_measurement: 'bodycount'

It is a quirk of Home Assistant that a unit_of_measurement is required for a sensor to be plottable as a graph.

@Yoinkz it doesn't matter which machine you train from. If you are training via the teach service, then the most likely problem is that Home Assistant cannot access the image files; check that they are in a directory you have whitelisted.
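
The whitelist lives under the homeassistant: block of configuration.yaml; the directory below is only an example path, so point it at wherever you actually keep the training photos:

homeassistant:
  whitelist_external_dirs:
    - /config/faces

Restart Home Assistant after changing it.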

I'm not having much success here trying to get this as a usable sensor value.

[screenshot: Capture]

[screenshot: Capture2]

Is it possible to somehow use the object detected as the unit_of_measurement? i.e. person, dog, car, etc.? In addition to not being able to pull in the state info for the target, I can't seem to create a sensor for the "all predictions" objects. It would be great to keep metrics of the objects detected for future automations.

You could break out each of the metrics using a template sensor, with the caveat that only state changes are ever recorded by Home Assistant. Really, this is not a fully fledged camera monitoring solution.
It's strange that you don't have any sensor data for your template sensor; what does the main image_processing.deepstack sensor show? The actual units of unit_of_measurement don't have any meaning to HA.
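
As a sketch of breaking one metric out, assuming the image_processing entity exposes its per-object counts in a summary attribute (that attribute name is an assumption; check your own entity under Developer Tools > States before copying this):

sensor:
  - platform: template
    sensors:
      driveway_dog_count:
        friendly_name: 'Dogs in Driveway'
        unit_of_measurement: 'dogs'
        # 'summary' is assumed to be a dict of object name -> count;
        # fall back to 0 when the attribute is missing or the key is absent.
        value_template: >-
          {{ (state_attr('image_processing.driveway_person_detector', 'summary') or {}).get('dog', 0) }}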

@robmarkcole Hey Rob, that could be the case. I'm just not sure I understand what exactly I should whitelist.
My Home Assistant runs on a NUC and my Docker runs on another machine running Ubuntu.
When I want to teach the Deepstack AI with a photo, should I place the photo on the Hassio machine or the Deepstack machine, and which dir should I then whitelist :blush:?

EDIT: I whitelisted the directory on my Hass installation where I had saved the photo, restarted HA and then tried the teach command again.
I still didn't get any response and couldn't find anything in the log, but when I tested it by processing the picture, it immediately recognized the photo and provided me with the correct name!

Thanks! This is just awesome!!!

Here's the full state data:

I'm okay with it just recording state changes; that information would be great to see over time. I'm just struggling with how to get this data into usable sensor form, and I'm not sure where I'm going wrong.

This is a great component @robmarkcole. Thanks for your work. I am seeing a strange issue. I have done the following troubleshooting:

  1. Set up coral REST server on my rpi4 with coral USB accelerator.
  2. Checked the server locally by running curl with a sample image. Processing successful.
  3. Tested from my home assistant server (non-localhost). Processing successful.
  4. Added RTSP generic camera like so:

camera:
  - platform: generic
    name: "driveway"
    still_image_url: http://192.168.1.252/snap.jpeg

  5. Added image processing:

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.131
    port: 5000
    scan_interval: 20000
    save_file_folder: /config/www/deepstack_person_images
    target: person
    confidence: 50
    source:
      - entity_id: camera.driveway
        name: driveway_detector

  6. Restarted HA.
  7. I can now see my snapshots in HA, at about 1 fps.
  8. Searched the thread for a similar issue (couldn't find any).

However, even though everything appears to be working the driveway sensor remains in a detection = unknown state and no images are hitting the REST server as far as I can tell from debug. Not sure where else to look to see why the image processor is not sending the snap.jpeg images from my camera to the REST server.

Any pointers would be greatly appreciated.

This addon is not set up to analyse RTSP (streaming) camera data. You need to analyse snapshots.

@ender7 select a suitable scan_interval or use an automation to call the scan service
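
Note that scan_interval is specified in seconds. If you would rather trigger scans on demand, a minimal automation sketch along these lines should work (the motion sensor entity id is a hypothetical placeholder; the image_processing entity matches the config above):

automation:
  - alias: 'Scan driveway on motion'
    trigger:
      platform: state
      entity_id: binary_sensor.driveway_motion   # hypothetical motion sensor
      to: 'on'
    action:
      - service: image_processing.scan
        entity_id: image_processing.driveway_detector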

Thanks, that worked! Didn't know the interval was in seconds. Assumed milliseconds.

I was even able to get rid of my RTSP snapshot streams. The component took input from my 'platform: uvc' Unifi cameras and analysed it no problem. Turns out I didn't need to save RTSP snapshots.

Could anyone perhaps share some of their automations when it comes to notifying on either iOS or Android?

Currently I am trying to create an automation where, when a motion detector is triggered, the image processing fires for both face and object detection:

- alias: '[Backyard] Android Notification - Motion w. picture'
  trigger:
    platform: state
    entity_id: binary_sensor.motion_sensor_xxxxxxxxxxxxxxxxxxxxxxxxx
    to: 'on'
  condition:
    - condition: state
      entity_id: alarm_control_panel.house
      state: 'armed_away'
  action:
    - delay: 1
    - service: image_processing.scan
      entity_id: image_processing.backyard_face
    - service: image_processing.scan
      entity_id: image_processing.backyard_object
    - delay: 2
    - condition: template
      value_template: "{{ states('image_processing.backyard_object') | int > 0 }}"
    - service: notify.android
      data_template:
        message: "(Known / Unknown) person detected at the front door"
        image: 'http://xxx.xxx.xxx.xxx:8123/local/deepstack_person_images/backyard/deepstack_latest_person.jpg?{{now().second}}'

But, as you can see, I call both deepstack_face and deepstack_object. My idea was to send the notification if deepstack_object recognizes a person in the picture, but if the face is a known one I would like that to be part of the message.

Does someone have an idea of how to accomplish that?
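
One hedged sketch of combining the two: gate the notification on the object count, then build the message from the face entity's attributes. The matched_faces attribute name is an assumption and varies by component version, so inspect image_processing.backyard_face under Developer Tools > States first:

    - condition: template
      value_template: "{{ states('image_processing.backyard_object') | int > 0 }}"
    - service: notify.android
      data_template:
        # matched_faces is assumed to hold the recognised names; adjust to
        # whatever attribute your face entity actually exposes.
        message: >-
          {% set faces = state_attr('image_processing.backyard_face', 'matched_faces') %}
          {% if faces %}Known person ({{ faces | join(', ') }}) detected in the backyard
          {% else %}Unknown person detected in the backyard{% endif %}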

My own is:

sensor:
  - platform: template
    sensors:
      room_persons:
        friendly_name: "People in room"
        unit_of_measurement: 'Persons'
        value_template: "{{states.image_processing.room_main.state}}"

Very exciting announcement on the Deepstack forum by @OlafenwaMoses. This is really a game changer.

@robmarkcole Great work on this and your other custom components! I got the email from Deepstack this morning announcing the unlimited instances, and it reminded me to finally get around to asking: could this component detect which car(s) are in a driveway? I have a black sedan and my wife has a red SUV. Could it be used with the camera feed above my garage to detect which of the two vehicles is in the driveway? If not this component, one of your other ones? Thanks!

@cmille34 you can detect the presence of a car with the standard object detection model. To identify whether it's a red or a black car, you would need to create a custom model; please see:

Thanks! Would you recommend the Deepstack or Machinebox component for this type of use?
