Face and person detection with Deepstack - local and free!

I've only been using this Deepstack face recognition for a few days and it's working well, so great work Rob, as this was fairly easy to set up.

Is the correct way to teach multiple images for a single face simply to call the image_processing.deepstack_teach_face service multiple times with different images but the same name? I only seem to get a face match with the last image trained.

I.e. I train with multiple front profile images and it recognises my face, but if I then call the image_processing.deepstack_teach_face service again using a side profile image, it will only recognise faces with that same side profile and not the front profile I previously trained. Any ideas, or is there a different way of training with multiple images?
This is the service data I'm using, but with a different filename specified each time.

{
  "name": "Jamos",
  "file_path": "/config/www/tmp/face/facejamos10.jpg"
}
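In full, I'm effectively doing something like this from a script (the second filename below is just an example):

script:
  teach_jamos:
    sequence:
      # one call to the teach service per training image, always with the same name
      - service: image_processing.deepstack_teach_face
        data:
          name: Jamos
          file_path: /config/www/tmp/face/facejamos10.jpg
      - service: image_processing.deepstack_teach_face
        data:
          name: Jamos
          # further training image, same name (filename is an example)
          file_path: /config/www/tmp/face/facejamos11.jpg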
Any advice appreciated. Thanks!

My expectation is that more training images should improve the accuracy. I am not sure if side profiles should be included; this is quite an interesting technical question. Perhaps we can get a reply from @OlafenwaMoses here; if not, best to try the Deepstack forums.

Thanks. I just deleted the registered face and retrained with front profile images only, and it is now recognising again with a high degree of accuracy.

Interesting, so the side profile really threw it out of kilter.

Hello, I'm getting the same error with Home Assistant 0.99.3 (hassio on a Raspberry Pi 4). The error is also shown when checking the config:
Platform error image_processing.deepstack_face - No module named 'deepstack'

I copied the custom_components folder from the HASS-Deepstack-face GitHub repo to /config. I have no idea what's wrong. Anyone?

Did you add the directory, reboot, and then add the deepstack configuration to your YAML file?

Or did you create the folder, add the config, and then try to validate the config?

This is one of those situations where I don't have any info about whether this is an actual issue or just someone not following the correct procedure. @Yoinkz, which version of the deepstack custom component are you using? The latest version requires Home Assistant to install the deepstack-python dependency.

I noticed the problem was related to only restarting Home Assistant from the Settings menu in hass.io. I tried installing HACS and got some similar issues. After using hass.io > System > Reboot it was solved. It seems that restarting only Home Assistant in hass.io does not load new dependencies.

That was only because I experienced this way back when I installed it the first time.
I haven't had any issues since, including after I moved to HACS.

Sorry, should have mentioned that.

I'm trying to get this to work on hassio, which is running on an Intel NUC through Proxmox. Can someone help me with Portainer on hassio? I'm a total noob when it comes to Docker. Where should I run this command?

sudo docker run -v localstorage:/datastore -p 5000:5000 deepquestai/deepstack

I am able to create the container without this command and reach the local instance of deepstack to paste the API key. However, when it comes to checking if it's ready, I get the error below when I run the next command over SSH:

curl -X POST -F image=@development/test-image3.jpg 'http://localhost:5000/v1/vision/detection'

curl: (26) Failed to open/read local data from file/application

Any help would be really appreciated. (I'm guessing error 26 means curl can't open/read the image file given after image=@, so perhaps the path is wrong relative to where I'm running the command?)

This is probably a dumb question, but how do you actually set up a Lovelace card for your camera that overlays the Deepstack information, like in the GitHub example?

You need to configure save_file_folder and use a local_file camera to display the last saved image.
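For example, something along these lines (the camera entity and the saved filename are just examples; check what Deepstack actually writes into your save_file_folder):

image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 5000
    save_file_folder: /config/www/deepstack/
    source:
      # example source camera
      - entity_id: camera.front_door

camera:
  - platform: local_file
    name: deepstack_latest
    # point this at the latest-image file written to save_file_folder (filename is an example)
    file_path: /config/www/deepstack/deepstack_latest.jpg

Then a simple Lovelace picture-entity card can display it:

type: picture-entity
entity: camera.deepstack_latest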

Just released v1.7 of deepstack object, which adds a last_detection attribute so you can easily see when the target was last seen.

Also, I am seeking feedback on how people are using deepstack, ahead of making an official integration. Specifically I want to know:

  • Are you using the events? e.g. with counters
  • Is the new last_detection attribute useful? Would it be better if this was the state of the sensor rather than the count of objects?
  • Do people use the save_image feature, and are you happy with the way it is implemented?

Please comment here.
Thanks!

I am using it as an alarm system; the automation is largely a state machine within Node-RED, with the scan service driven by motion/line/field detections.

I am very satisfied with the accuracy and responsiveness of the detections. Overall it works well.

  1. I am primarily using the file_saved event to push an iOS notification with the relevant file and information passed back.
  2. I have not had a chance to test this out, but it would be very helpful. I ended up just printing a timestamp inside the image.
  3. Definitely using the save image feature. The only addition was to trigger a delete of files older than x days.

Other changes I have made to suit my purposes (I get it, image processing on limited hardware is not the best strategy, but these changes feel like great quality-of-life improvements):

  1. Allowed checking for multiple targets in the same scan (car/truck/person).
    - This was because my host system's scan speed cannot afford multiple scans, which may miss the object.
  2. In the save file event, passed back the bounding box dimensions to filter on whether the object is in an area of interest or outside the boundary. Again, with a limited host system the extra CPU from a proxy cam is not worth it. (All detections are saved, but only those in a certain area push a notification.)
    - This approach is slightly different from other image processing setups, where only objects in the zone are reportable. It allows a review of all detections at the end of the day while still being immediately notified of the important ones.

Yet to do (not related to this): set up an image gallery for older saved files.

Unfortunately because of my customisations I have not used more recent updates of this tool.

@xdss thanks for the detailed info. RE checking multiple targets, is there a reason why you are not using the image_processing.object_detected event?

Also, RE detecting whether an object is within a defined zone of the image, I see this is a common need. I will make it easier to do this.

That was one definite path to go down. While slightly embarrassed about butchering the code, I wanted to have bounding boxes drawn on just the objects of interest. At the time it looked like, with minimal effort, the draw box function only triggered when an image was saved.

The use case for me was an alarm for a narrow driveway. Only configuring one target (person) would not pick up cars, or vice versa, and I care about both (and only if they are inside the property boundary, hence the need for zoning). Given the time an object spends in frame after motion detection, processing two separate scans didn't seem right.

Additionally, on one of the cameras a bin would always get picked up as a suitcase (with high confidence), so using the object_detected event without additional filtering would have meant superfluous boxes drawn.

So just tweaking the prediction == target check to add an or prediction == "car" was a quick fix. I was at some stage planning to allow the conf_target configuration to be an array, but this would need slightly more changes.

So in fact I am relying quite heavily on the save image function and event.

@xdss I just released v1.8, which exposes the Deepstack predictions as JSON in a standardised format. This should allow you to determine if objects fall in your target zone. From the release notes:

The full predictions from Deepstack are returned in the predictions attribute as JSON in a standardised format. This makes it easy to access any of the predictions data using a template sensor. Additionally, the box coordinates and the box center (centroid) are returned, meaning an automation can be used to determine whether an object falls within a defined region of interest (ROI). This can be useful to include/exclude objects by their location in the image.

Note: in the case that a unique object is present in an image (e.g. a single person or single car), the direction of travel of the object can in principle be determined by comparing centroids in adjacent image captures. This tracking functionality is not yet implemented in Home Assistant.

Example prediction output:


[{'confidence': 85.0,
  'label': 'person',
  'box': [0.148, 0.307, 0.817, 0.47],
  'centroid': [0.388, 0.482]},
 ...
]

The box is defined by the tuple (y_min, x_min, y_max, x_max) (equivalent to image top, left, bottom, right) where the coordinates are floats in the range [0.0, 1.0] and relative to the width and height of the image.

The centroid is in (x,y) coordinates where (0,0) is the top left hand corner of the image and (1,1) is the bottom right corner of the image.
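As a sketch, a template binary sensor along these lines could flag when a person's centroid falls inside an ROI (the entity id and the ROI bounds here are just examples):

binary_sensor:
  - platform: template
    sensors:
      person_in_roi:
        friendly_name: Person in ROI
        value_template: >
          {% set ns = namespace(found=false) %}
          {% for p in state_attr('image_processing.deepstack_object_front_door', 'predictions') or [] %}
            {# centroid is (x, y) with (0,0) at the top left; the bounds below are example ROI limits #}
            {% if p.label == 'person'
                  and p.centroid[0] > 0.25 and p.centroid[0] < 0.75
                  and p.centroid[1] > 0.5 %}
              {% set ns.found = true %}
            {% endif %}
          {% endfor %}
          {{ ns.found }}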

Hi Rob,

First of all, and once again, thank you for a great integration.

I'm also using this as a supplement to my alarm system.
My Ubiquiti video surveillance still sends me tons of false positives when it comes to motion detection; often it is just clouds or trees moving. I have followed a lot of guides and was even lucky enough to have a video surveillance expert help me tweak the settings, but unfortunately I still get a lot of push notifications.

So what I did was implement your Deepstack, setting my Xiaomi sensors to trigger the Deepstack face and object detection; if a face / person or car is then identified, I get a push notification through the Ariela Android app. This works really well.
When the motion sensor is triggered I first call the face detection and object_person detection, and I repeat that every 20 seconds as long as there is motion. After 25 seconds I do the same for object_cars.
The reason why I set a few seconds in between is to ensure that I don't end up doing several scans at the same time.
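Simplified to a single pass, the automation looks roughly like this (entity ids are examples from my setup; my real version keeps repeating while there is motion):

automation:
  - alias: Staggered Deepstack scans on motion
    trigger:
      - platform: state
        entity_id: binary_sensor.xiaomi_motion   # example motion sensor
        to: 'on'
    action:
      # faces and persons first
      - service: image_processing.scan
        entity_id:
          - image_processing.deepstack_face_camera
          - image_processing.deepstack_object_person
      # wait a few seconds so the scans don't overlap, then cars
      - delay: '00:00:05'
      - service: image_processing.scan
        entity_id: image_processing.deepstack_object_cars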

I think the last_detection is good, because once I swipe the notification away I can't otherwise see exactly when the last person / car was detected. I can of course look at the Xiaomi sensor, but that could have been triggered by other things.

I use the save_image feature to provide the push notification messages with a picture of what got detected (face(s), person(s), car(s)).


Hi Robin, thanks for getting Deepstack working with HA, it's like magic.
I'm hoping to work out how to do the following:

Detect when an object is in view of my camera; the objects I'm interested in are person/car/truck. If one or more of these objects is present, trigger an alert (either text or voice via Google TTS), and if the number of objects increases or decreases, trigger another alert. At present I am doing a basic version of this with 3 instances of deepstack_object, which isn't very CPU efficient. I've seen you mention image_processing.object_detected a few times, but I'm having difficulty with this in Node-RED (probably my inexperience using events in Node-RED); I've sketched below what I think the YAML equivalent would be.
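For reference, my understanding is that the event-driven version would look something like this in plain HA YAML (the event data field name and the TTS service/player are my assumptions; I'd check the actual event in the developer tools first):

automation:
  - alias: Announce detected objects
    trigger:
      - platform: event
        event_type: image_processing.object_detected
    condition:
      # 'object' is my guess at the field name in the event data
      - condition: template
        value_template: "{{ trigger.event.data.object in ['person', 'car', 'truck'] }}"
    action:
      - service: tts.google_translate_say
        data_template:
          entity_id: media_player.living_room   # example player
          message: "Detected a {{ trigger.event.data.object }}"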

The save_image feature is very useful; I'm using it to display the last person/car/truck in Lovelace. I'd love to have an image which could combine all objects, such as a person and a truck (perhaps it's already possible?), and one small improvement might be to increase the size of the probability displayed with the bounding box.

I hope this feedback is useful, and once again many thanks for getting this running.