Face and person detection with Deepstack - local and free!

Thanks for the heads up. I have Reolink cameras and I have been looking at creating privacy masks in those as well.

I've spent several days trying to get this working… I'm thinking I must be missing something simple or just have something not set up right.

I have my Hass.io / Home Assistant set up as a Docker installation running on Ubuntu 18.04 on a Dell 3010 with a Core i5-3470, 16GB RAM, and 14TB of storage (256GB SSD boot drive).

I have Deepstack installed on the same Docker host as a separate container, running with the following startup command:

sudo docker run -e VISION-FACE=True -v /opt/docker_folders/deepstack:/datastore -p 8087:5000 --name deepstack deepquestai/deepstack

I'm using port 8087 rather than the default 5000 as it conflicts with another resource on my server, but I'm able to access the Deepstack page with no problem.

I created two test Python scripts based on the example code from Deepstack to exercise face/image registration and face recognition, and they run perfectly when executed from the Ubuntu SSH shell.
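
Roughly, the scripts boil down to something like this (a trimmed-down sketch; the file names and the "userid" label are just placeholders, and the endpoints are the standard Deepstack face API, pointed at the remapped port 8087):

import requests

DEEPSTACK = "http://localhost:8087"   # Deepstack mapped to 8087 as in the docker run above

# Register a face: POST an image plus a userid label
with open("np.jpg", "rb") as f:
    r = requests.post(
        f"{DEEPSTACK}/v1/vision/face/register",
        files={"image": f},
        data={"userid": "New Person"},
    )
print("register:", r.json())

# Recognize faces in a second image and print who was matched
with open("test.jpg", "rb") as f:
    r = requests.post(f"{DEEPSTACK}/v1/vision/face/recognize", files={"image": f})
for pred in r.json().get("predictions", []):
    print(pred.get("userid"), pred.get("confidence"))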

I've configured the Deepstack integration in my configuration.yaml file with the following:

# Image Processing / recognition
image_processing:
  - platform: deepstack_face
    ip_address: 192.168.1.xxx  # my server IP
    port: 8087 
    timeout: 5
    detect_only: False
    scan_interval: 20
    source:
      - entity_id: camera.front_door_cam
        name: face_recog

My front_door_cam is a MotionEye camera on a Raspberry Pi that works reliably and has integrated into HA without issues, so I don't see that as the problem.

My problems are:
1 - The "image_processing.face_recog" entity never shows any status other than "unknown". I have tried triggering the "deepstack_teach_face" service from the Developer Tools services tab using the following, but it doesn't seem to do anything:

image_processing.deepstack_teach_face
{
  "name": "New Person",
  "file_path": "/config/www/np.jpg"
}

The np.jpg, for instance, was an image of my wife. Using this method did nothing (no returned state, nothing in the logs).

When I do this from the Ubuntu shell using my Python script, calling the REST API directly, it works just fine.

2 - The interface for "image_processing.face_recog" doesn't show anything meaningful, and I have no idea where to find any detailed/advanced settings or output. I see a lot of posts showing images and face-recognition bounding boxes, but mine has nothing like that.

I suspect that my HA is not properly communicating with the deepstack instance running in the docker container. Can anyone point out where my settings may have a problem?

I have the custom_components folder copied over for the deepstack_face integration, so HA is able to load it. I'm not using an API key. Is the location for the "localstorage" (the /datastore volume) required to be in a special place? Nothing I read says so, and I have it in a custom location that I use for most of my Docker containers.

I spent some time looking through the "image_processing.py" file in HA and I get most of it, but I'm confused as to where the "process_image" function is tied into HA. Is it automatically called by "image_processing.scan"?
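
If I'm reading image_processing.py right, process_image only runs when a scan does: either when the image_processing.scan service is called or when the scan_interval timer fires, at which point the component grabs a snapshot from the configured camera and hands it to the platform. For what it's worth, here is roughly how I've been poking at that path from outside HA via its REST API (the host, long-lived token, and entity name are placeholders):

import requests

HA_URL = "http://localhost:8123"       # HA instance (placeholder)
TOKEN = "LONG_LIVED_ACCESS_TOKEN"      # placeholder long-lived access token
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Fire the scan service for the face_recog entity...
requests.post(
    f"{HA_URL}/api/services/image_processing/scan",
    headers=HEADERS,
    json={"entity_id": "image_processing.face_recog"},
)

# ...then read back its state and attributes
state = requests.get(
    f"{HA_URL}/api/states/image_processing.face_recog", headers=HEADERS
).json()
print(state["state"], state["attributes"])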

Thanks for any insight. I can add more info if needed. I have attached a snapshot of what my HA face_recog entity looks like; nothing much there.

Hi everyone. Thank you for the great suggestions you are giving.
I'm running Deepstack (Docker version) on a MacBook. It's an acceptable solution (2-3 seconds average per call). I would like to try it on a Raspberry Pi (or Pine64) board, but I don't have an NCS2 or similar. Is it possible to deploy a Docker version anyway? I tried some ARM-tagged Docker images available on Docker Hub, but without success. Or, alternatively, is there a Frigate Docker image (similar functionality and the same HA integration) that runs on those boards?
Thank you very much in advance for your support.

@glilly you need to call the scan service

@alpat59 there will be a docker version for RPI soon

Great, Robin… thank you

Hi @robmarkcole,

I'm writing this post as I have an issue with the Deepstack integration but don't have any clue why.
I had this working fine for months and it suddenly stopped; I can't pinpoint when or why.

So I'm running your Coral REST server on a Pi, and firing:

curl -X POST -F image=@people_car.jpg 'http://XXX.XXX.X.X:5000/v1/vision/detection'

I get the expected output, so on the Pi side everything should be working fine.
Port 5000 is reachable from outside, so there are no connection issues.

Nevertheless, I get "unknown" as the sensor value for all my cameras.
This is my config:

image_processing:
  - platform: deepstack_object
    ip_address: XXX.XXX.X.X
    port: 5000
    scan_interval: 5
    # save_file_folder: /config/www/deepstack_person_images
    # save_timestamped_file: True
    targets:
      - person
      # - car
      # - truck
    confidence: 80
    source:
      - entity_id: camera.netatmo_corte
        name: google_coral_corte
      - entity_id: camera.camera_ingresso
        name: google_coral_ingresso
      - entity_id: camera.porta_ingresso_live
        name: google_porta_ingresso
      - entity_id: camera.camera_studio_rose
        name: google_camera_studio_rose
      - entity_id: camera.camera_studio_barba
        name: google_camera_studio_barba

and I can't find anything in the logs (maybe I don't know where to look!).

Any ideas?

EDIT: I also tried from another PC connected to the network and I get proper output from the REST server… it is really HA that can't. Running

curl, etc...

from the HA machine, I get the proper result as well.
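
For completeness, this is roughly the Python equivalent of that curl call, which can also be run from inside the HA container itself to rule out container-level networking (the host placeholder and image name mirror the curl example above):

import requests

# Same check as the curl above, but runnable from inside the HA container
with open("people_car.jpg", "rb") as f:
    r = requests.post(
        "http://XXX.XXX.X.X:5000/v1/vision/detection",
        files={"image": f},
        timeout=10,
    )
for pred in r.json().get("predictions", []):
    print(pred["label"], pred["confidence"])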

Hi all
I am personally getting multiple daily requests, on threads, in private messages, and in GitHub issues, to help people debug connection/config/Docker issues. I do not have the bandwidth to help everyone, and it has become a real drain on my motivation. Please do not direct such requests to me personally; just post them here for everyone in the community to assist with. However, before doing that, please search the threads, as the same problems and their solutions come up repeatedly.
Many thanks

Hi, sorry, it wasn't my intention to bother or annoy.

You are doing a great job!

Have you tried splitting up your cameras so you have a single camera entity per image_processing entity?

I have tested Deepstack running on:

  • RPi 3B+ running Buster Desktop with the NCS 2: detections take roughly 1 second

  • QNAP NAS TS-251+ (Celeron with 8GB of RAM, using the noavx Docker image): detections were taking about 6-8 seconds

  • Finally, Deepstack running on an i5 MacBook Air through Docker with 8GB RAM was taking roughly 2.5 seconds

I have also tried the Google Coral with TensorFlow, which was returning predictions in under 1 second; however, the accuracy was nowhere near as good as Deepstack's.

Thanks, Robin. I actually was doing manual scans but wasn't seeing the results.
I found the issue… as I suspected, it was a problem between the chair and the keyboard :roll_eyes: :grin:
It basically was the way I was testing.

Now that it's working… I've been able to fine-tune the facial detection and it works great. A face scan runs in about 2.5 seconds without any acceleration, just CPU. I plan to try the Google Coral USB at a later time.

It did take me a while to figure out how to break out / expose the returned properties of the detection object in HASS. I wound up using a Node-RED flow that splits the "matched_faces" object so I can get the key values, which are the names of the person(s) that were recognized. I wish there were an easier way to get the "who" from the recognition properties. I then re-combine the names into a single string (if more than one face was recognized) and fire all of this off in MQTT topics, so I can create multiple automations and events that subscribe to the topics without having to modify my Node-RED flow.

It seems like there should be an easier way to get that info out of the results. If anyone knows an easier way, I'd be happy to hear about it.
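
For anyone wanting to do the same without Node-RED, the splitting and recombining boils down to something like the sketch below. The matched_faces shape shown is just an example based on what I see from the integration, and the MQTT broker and topic names are placeholders:

import paho.mqtt.publish as publish

# "matched_faces" comes back as a dict keyed by the recognised person's name
matched_faces = {"wife": 78.2, "me": 91.5}    # example shape only

who = ", ".join(sorted(matched_faces))        # the "who", e.g. "me, wife"
total = len(matched_faces)                    # the "how many"

# Publish to MQTT so automations can subscribe (broker/topics are placeholders)
publish.single("home/face_recog/who", who, hostname="192.168.1.xxx")
publish.single("home/face_recog/total", str(total), hostname="192.168.1.xxx")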

Robin, thanks for making this available in HASS. It works really well once you figure out all the moving pieces.

Hi,

Good suggestion; I just tried it, but no luck :frowning:
I also tried downgrading to v3.0, but nothing changed.

That's about what I'm getting too, on an Ubuntu i5 setup.
Maybe it would be helpful to share the response times we are all seeing on specific hardware.

I'm running the following setup:
OS - Ubuntu 18.04
PC - Core i5-3470 processor
RAM - 16GB
Storage - 256GB SSD / 14TB spinning
Acceleration - none
HASS and Deepstack running in Docker along with about 7 other applications on a headless server (no GUI)
Camera(s) - 1 MotionEye / Raspberry Pi camera (640x480 resolution)
Image processing (facial recognition) response time - ~2.5 seconds

I suspect the biggest factors in the response time are the image resolution and the presence or lack of graphics acceleration.

@glilly the events should be used, rather than crawling attributes

Thanks. I think I am firing off of the event when total_faces is greater than zero.
But are there events that expose "who" was recognized and how many were recognized, if any? I'm using that info for logic in another part of HA to do certain things depending on who the facial recognition detected.

It's a WIP, so I'm just working on some concepts right now. I'm just happy this is finally working :slightly_smiling_face:

Interesting… How are you calling the image_processing.scan service?
Through an automation or through Developer Tools?

I have a Node-RED flow that calls the service when motion is detected by my MotionEye camera setup. The flow then waits for "total_faces" to be greater than zero (or times out after 15 seconds).

@glilly I was wrong, I didn't add that event yet. So much to do, so little time.

@robmarkcole I know how that is :sweat_smile: I've been doing this type of stuff for 35 years… still fun :wink:
Try not to let it burn you out too much. Take a break when needed. Your work is greatly appreciated! :+1:
