Face and person detection with Deepstack - local and free!

By default, image processing is performed at the scan_interval, completely irrespective of whether the image has actually updated. The image processing integration needs some attention; there are a couple of threads about it in the HA architecture repo: https://github.com/home-assistant/architecture/issues?utf8=✓&q=is%3Aissue+is%3Aopen+image+processing

@robmarkcole, thank you for this nice component! Looking forward to getting it working.
Still waiting on the creation of my account at deepstack - haven’t received a confirmation yet.

One question though: you state that with the basic subscription we get unlimited access for the installation:
https://github.com/robmarkcole/HASS-Deepstack-face - does that mean we get unlimited person/object detections, but can still only teach the system 5 faces?

I need to update the readme. Please see https://deepstack.cc/#pricing

No worries - I just had to be sure. I just want to play around with it, and for now 5 persons is fine for detecting the faces of the people who live in the house… The problem with my current system is that I get a notification on every single motion detection. With this I can get notified on faces, persons and objects (a car, I guess) :blush:

EDIT: Has anyone else experienced it taking some time before the account on DeepStack gets created?

I think account creation should be very quick; have you tried the DeepStack forum?

I created an account on the forum and have left a note there - let’s see if anyone replies.

Thanks - I can’t figure out what changed, but previously the flow consisted of:

  1. motion detection (from shinobi)
  2. take a snapshot of the camera
  3. save the file to a location
  4. camera local_file points to that image
  5. run image_processing.camera_local_file
  6. get state of image_processing.camera_local_file
  7. if the result is 0, do nothing; if >0, send a “person detected” alert with the image.
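For anyone wanting to debug this flow, steps 5–7 can also be driven by hand over Home Assistant’s REST API. A minimal sketch (the base URL, token, and entity name below are placeholders, not values from this thread):

```python
import json
import urllib.request

HA_URL = "http://localhost:8123"        # placeholder: your Home Assistant address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"  # placeholder: create one in your HA profile
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

def scan_request(entity_id):
    """Step 5: build the POST that triggers image_processing.scan for one entity."""
    return urllib.request.Request(
        f"{HA_URL}/api/services/image_processing/scan",
        data=json.dumps({"entity_id": entity_id}).encode(),
        headers=HEADERS,
        method="POST",
    )

def state_request(entity_id):
    """Step 6: build the GET that reads back the entity's state (the person count)."""
    return urllib.request.Request(
        f"{HA_URL}/api/states/{entity_id}", headers=HEADERS, method="GET"
    )

# Usage against a running HA instance (steps 5-7):
#   urllib.request.urlopen(scan_request("image_processing.camera_local_file"))
#   reply = json.load(urllib.request.urlopen(state_request("image_processing.camera_local_file")))
#   if reply["state"] not in ("unknown", "0"):
#       ... send the "person detected" alert with the image (step 7)
```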

However, since updating, when I run image_processing its state is always unknown (even when there are 0 people present).

All I wanted to do was grab the new image with the bounding box instead, but I can’t seem to get it to work now. The port & IP are correct, and I’m using the nvidia runtime with a DeepStack license too.

I assume you only get the image_processing.file_saved event triggered when there is a positive detection?

EDIT: I’m also not seeing any logs in the deepstack docker when I call image_processing

EDIT: Also, it seems like after the interval when all image_processing entities are scanned, it’s generating this:
Not passing an entity ID to a service to target all entities is deprecated. Update your call to image_processing.scan to be instead: entity_id: all

unknown indicates HA is unable to reach DeepStack. Please check your config, and use curl or python to check that DeepStack is reachable and running. If all else fails, it’s quite likely a Docker-related issue with networking or ports.
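For example, here is a quick reachability check you could run from the machine hosting HA. Port 5000 is DeepStack’s default; adjust the host/port to match your docker run mapping:

```python
import urllib.error
import urllib.request

def deepstack_reachable(host, port=5000, timeout=3):
    """Return True if something answers HTTP on host:port (e.g. DeepStack's web server)."""
    try:
        urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # got an HTTP response, so the server is up
    except (urllib.error.URLError, OSError):
        return False  # connection refused / timed out: check ports & docker networking

# Usage:
#   deepstack_reachable("192.168.1.10")  -> True if DeepStack is up on port 5000
```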

Is there a definitive way to check from the HA Docker container? I can ping the IP of the host OK.

I can access the deepstack webserver on the network from another device.
Thanks for your help by the way!

EDIT: I stopped the DeepStack container and the HA logs showed is deepstack running?, which went away once I started the container up again. I’m baffled now.

EDIT: checked that the images are accessible for scanning, and the network seems fine - still no logs in DeepStack though. I assume there weren’t any changes in 0.96.3 that would impact this?

Please share your deepstack config and your docker run command for deepstack

So far so good. I followed the guide and the webpage was showing. Unfortunately I had a power loss, so my Ubuntu machine rebooted and the webpage didn’t show up afterwards. So I checked whether the container was still present, which it wasn’t (sudo docker container ls), so I had to run this command again:

sudo docker run -v localstorage:/datastore -p 5000:5000 deepquestai/deepstack

Is it somehow possible to keep the container even if the machine reboots, and then start it automatically?

Sorry, I’m really new to all this Docker stuff.

I think run-always rather than run on the command line.

That command doesn’t seem to be found when I run it?

My apologies, memory failure! Try

docker run --restart always

Okay thanks. I tried the following:

sudo docker run --restart always -v localstorage:/datastore -p 5000:5000 deepquestai/deepstack

… and it works! :blush:

OK, I think I may have solved it, but I’m not 100% sure WHAT was causing it.
I noticed the version of deepstack:gpu I had was 3.5.6,
and there was a later version, deepstack:gpu-3.6.

So I rebuilt with the 3.6 GPU image and I’m back to getting results.
The docker command is:

sudo docker run --runtime=nvidia --name=deepstack -e VISION-DETECTION=True -e MODE=Medium --restart always -v /home/<name>/docker/deepstack/data:/datastore -p <port>:5000 deepquestai/deepstack:gpu-3.6
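In case it helps anyone scripting against DeepStack directly: the /v1/vision/detection endpoint replies with JSON containing a predictions list (label, confidence, bounding box). A small sketch of the “if 0 do nothing, if >0 alert” logic against such a reply (the sample data below is made up for illustration):

```python
def count_people(reply, min_confidence=0.5):
    """Count 'person' predictions in a DeepStack /v1/vision/detection reply."""
    if not reply.get("success"):
        return 0
    return sum(
        1
        for p in reply.get("predictions", [])
        if p["label"] == "person" and p["confidence"] >= min_confidence
    )

# Made-up reply in the documented response shape
sample = {
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.91,
         "x_min": 10, "y_min": 5, "x_max": 120, "y_max": 300},
        {"label": "car", "confidence": 0.88,
         "x_min": 200, "y_min": 40, "x_max": 400, "y_max": 220},
    ],
}
print(count_people(sample))  # → 1
```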

Note that at present the Docker container build for the coral-pi-rest-server is broken. When the most recent changes got merged in, the additions I made to pass the model file and other options got dropped. I’ll see about merging those back in when I have a chance.

I’ve got this running now with the Coral TPU and the new flask server to implement the API! Just a couple of suggestions off the bat:

  • It’d be nice to be able to control whether timestamped images are saved, along with the “latest” image. I might not want to have to clean up the older images on a regular basis.
  • It’d also be nice to be able to either specify a path for each camera, or to include the camera name in the saved file. If I have multiple cameras going, I think it would be neat to have a file camera set up showing the latest image from each camera. Right now, the same “latest” file is overwritten by each configured camera.

I did build a new Docker container based on a modified coral-app.py file in the flask server that takes options to specify the model. If that seems to be reliable (or doesn’t obviously crash or anything), I’ll get the changes to you, @robmarkcole.

Thanks again for the great work! It’s quite an elegant solution having the Coral TPU interface emulate the DeepStack platform to leverage the same component for both.

Thanks for your comments @lmamakos, I now have a couple of issues on the repo to track these. Re Docker, I definitely welcome the ability to specify the model. My hope is that we could create a Hass.io addon whereby people plug in the Coral/Movidius and can then use the deepstack integration without needing a beefy, always-on computer somewhere. We could also create models specifically tailored for HA use cases.

I’ll see if I can conjure up enough git-fu to submit a pull request, or maybe I’ll just send you diffs :slight_smile:
