Face and person detection with Deepstack - local and free!

Hi! Any updates?

Well - quite the rebuild - GOTTA LOVE DOCKER, it made it so much easier.

Now running a separate PC with a 1080Ti in it, and using Portainer to manage Docker on the older Synology NAS and the PC :slight_smile: #winning.

Anyway, after some serious faffing with CUDA versions, the NVIDIA container toolkit and NVIDIA drivers, DeepStack is now running on the GPU and processing time is about 2 seconds.

Hopefully the Synology dva with the GPU will be released soon :slight_smile:

1 Like

I was playing around with the Docker container a bit. I noticed that even when idle (not making any API calls to the server to recognize objects in an image), the redis process in the container seems to be chewing up some CPU time. I'm not really sure why it ought to be doing this on a continuous basis, but it would be nice if it wasn't busy all the time for no obvious reason.

As there's supposed to be a new version released "soon", I didn't really go chasing after this…

I updated the DeepStack Docker image this morning. Now it's using way more CPU than it was before, just idling waiting for the scan. I turned it off.

I have been watching the zmeventnotification add-on for ZoneMinder, and it looks like he has added image processing.

Hmm, I would have thought the GPU version using a 1080Ti would be much faster than that.

I am using an i5 9600K and getting a processing time of about 1.4 seconds.

Has image processing improved?

It probably is, but that was just me watching the HA dev panel; it definitely improved on the 7-second delay I had to build in before.

What's the best/definitive way to test the speed?
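If it helps, one straightforward way is to time a raw API call against the server yourself with the standard library. This is only a sketch - the host, port, endpoint and image file are assumptions, so adjust them to your own setup:

```python
# Time a single DeepStack detection request end to end.
# Host/port/endpoint below are assumptions -- adjust to your setup.
import time
import urllib.error
import urllib.request


def time_detection(url, image_bytes, timeout=30):
    """POST image bytes to a DeepStack endpoint and return elapsed
    seconds, or None if the server is unreachable."""
    boundary = "deepstackboundary"
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="image"; filename="test.jpg"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    start = time.monotonic()
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            pass
    except (urllib.error.URLError, OSError):
        return None
    return time.monotonic() - start


# Example (needs a running server and a real image):
# elapsed = time_detection("http://localhost:5000/v1/vision/detection",
#                          open("test.jpg", "rb").read())
```

Running it a few times in a row and averaging gives a more honest number than watching the HA dev panel, since it excludes HA's own scan interval.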

Hi, I just pulled the latest code - how can I restart it (without rebooting)?

ubuntu:~$ sudo docker run -e VISION-DETECTION=True -v localstorage:/datastore -p 5002:5000 deepquestai/deepstack
docker: Error response from daemon: driver failed programming external connectivity on endpoint thirsty_pascal (b38d8c99968b6bfd4f2ffb83c): Bind for 0.0.0.0:5002 failed: port is already allocated.
ERRO[0000] error waiting for container: context canceled

You need to stop DeepStack and re-initiate it. Use "sudo docker ps" to get the ID of the running container, then run "docker container stop container_id" to stop DeepStack.

Then you can run the DeepStack start command again.

1 Like

Thanks, I did it. I noticed this message in the log:

Visit localhost/admin to activate DeepStack

Do I need to do something else?

1 Like

Yes. You need to visit DeepStack's local dashboard to activate the AI server. Just visit localhost/admin or localhost:port/admin. You will be prompted to register for an activation key; follow the instructions, and the key will be delivered to your email. Enter the key once you receive it. See the documentation below for all the updates on this.

https://deepstackpython.readthedocs.io

https://deepstacknodejs.readthedocs.io

https://deepstackcsharp.readthedocs.io

Hello, thanks for trying out DeepStack.
On a 1050, DeepStack processes images in milliseconds.

Note that for DeepStack to access the GPU, you need to explicitly allow GPU access from Docker.

Your run command should be as below.

sudo docker run --rm --runtime=nvidia -e VISION-DETECTION=True -v localstorage:/datastore \
-p 80:5000 deepquestai/deepstack:gpu

The " --rm --runtime=nvidia " allows docker to access the GPU

1 Like

Hello Everyone, thanks for all the great work and feedback using DeepStack.

We have released a new update with the following key changes.

Improved Face Recognition and Detection: The face detection APIs now work well even on occluded faces, and face recognition now returns a confidence of over 0.7 on recognized faces.

Speed Modes: DeepStack now features 3 speed modes, "Low", "Medium", and "High", allowing you to trade off accuracy and speed on different hardware configurations.

Below is an example run using the High mode.

sudo docker run -e MODE=High -e VISION-DETECTION=True -v localstorage:/datastore \
-p 80:5000 deepquestai/deepstack

We have also added a min_confidence parameter for Face Detection, Face Recognition and Object Detection.
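Since min_confidence travels as an ordinary form field alongside the image, a request can be assembled with the standard library alone. A sketch - the threshold value and the commented host/port are assumptions, not part of the release notes:

```python
# Build a multipart/form-data body carrying an image plus a
# min_confidence field for a DeepStack detection endpoint.
def build_multipart(fields, image_bytes, boundary="deepstackboundary"):
    """Return (body, content_type) for a multipart POST."""
    body = b""
    for name, value in fields.items():
        body += (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f"{value}\r\n"
        ).encode()
    body += (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="image"; filename="image.jpg"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    body += image_bytes + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"


body, ctype = build_multipart({"min_confidence": "0.6"}, b"<jpeg bytes>")
# import urllib.request
# req = urllib.request.Request("http://localhost:5000/v1/vision/detection",
#                              data=body, headers={"Content-Type": ctype})
# urllib.request.urlopen(req)
```

Detections scoring below the threshold are then filtered out server-side, which saves you from doing it in your automation.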

Face Match API: The all-new face match API allows you to compute the similarity score of two faces. The same face returns a similarity of over 0.7.
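A hypothetical sketch of posting two images to the new endpoint - note the endpoint path and the image1/image2 field names are my assumptions based on the other face routes, so check the release notes for the exact API:

```python
# Assemble a two-image multipart body for the face match call.
# Field names image1/image2 and the endpoint path are assumptions.
def match_body(img1, img2, boundary="deepstackboundary"):
    parts = b""
    for name, data in (("image1", img1), ("image2", img2)):
        parts += (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"; '
            f'filename="{name}.jpg"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        ).encode() + data + b"\r\n"
    return parts + f"--{boundary}--\r\n".encode()


body = match_body(b"<first jpeg>", b"<second jpeg>")
# POST body to e.g. http://localhost:5000/v1/vision/face/match with
# Content-Type: multipart/form-data; boundary=deepstackboundary
# Per the release notes, the same face scores a similarity above 0.7.
```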

On a side note, gender information has been removed from the face detection results, and the Traffic API has been removed as well.

The final production-ready release will be out in the next few days, with a focus on performance improvements, API security, backups and custom models.

DeepStack now requires getting a free activation key the first time you run it.

See release notes below for more details.

https://deepstackpython.readthedocs.io/en/latest/releasenotes.html#deepstack-beta-release-notes

2 Likes

wish for a distinction between back and front (person detection)

@Klagio I presume that's so you can identify people entering or leaving a doorway?

yes, that's the idea

1 Like

Hi, I have configured this component along with the DeepStack Docker container on my Synology NAS.

I have also made it recognize 3 faces, which I verified are successfully recognized by running some Python test scripts.

In HA though, it is able to recognize both person and face objects (correct counts); however, matched faces is always blank, not showing the name of the recognized face.

Any idea?

I'm using the latest version of Hass.io on an RPi3 and a Foscam camera.

1 Like

I am trying to delete faces, in order to re-train the models… How can I do this?

I am also getting very slow responses (16 seconds). I have set up my Docker container on an old laptop with 8 GB of RAM and a decent CPU (2 cores @ 2.7 GHz). Any ideas why it is so slow?
I tried using -e MODE=low, but the fastest I got was 9 seconds.

I tested DeepStack on my Windows machine with a quad-core i7 about a month ago, had 0.3-second responses and was very impressed. Now this laptop gives me these slow responses, and I feel like it should be faster…

Thanks for the contribution, it's amazing!!!

I'm running HA on a PC with 32 GB of RAM and a Core i7 with 6 cores (12 threads).
The main OS is Ubuntu Server 18.04 with VirtualBox; for HA I created a VM with 4 cores (8 threads) and 8 GB of RAM. Response time is about 3.9 seconds on average.

EDIT: Using Mode=High

I set the scan_interval to 5 seconds (to avoid warnings) and it worked almost perfectly - response time was great - but it crashed after an hour every time.
I checked the code inside the Docker container and found a piece of unprotected code at app/intelligence.py, line 157; after wrapping it in a try/except block (as in all the other flows), it has been working for more than 24 hours.

In terms of RAM: I sent the component 6 images of 6 different persons, scanning every 5 seconds, and it uses ~1.2 GB of RAM.

Since the Docker image is not provided as open source, I didn't find anywhere to post this information but here.

Another weird issue: I changed the VM to use all the cores and threads and increased the memory to 12 GB, and as a result I got a 5.4-second response time. Any suggestions why this happens?

I manipulated the code a bit locally :slight_smile:
I can do a pull request if you would like. The changes are:

  • Aligned the file structure to HA v0.91
  • Disabled response time in the attribute (moved it to parameter based)
  • Added service to enable / disable writing the response time in the attribute
  • Protected the process_image function and added more logs
  • Called process_faces to trigger the event of EVENT_DETECT_FACE
  • Added more logs for API calls to the docker

thanks again!

2 Likes

Faces:

List all

POST http://host_or_ip:port/v1/vision/face/list

Delete

POST http://host_or_ip:port/v1/vision/face/delete

Headers:
Content-Type: application/x-www-form-urlencoded

Body:
userid=face_id
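For completeness, a small Python sketch of calling these two endpoints with the standard library. The base URL is an assumption - use whatever host and port you mapped the container to:

```python
# Build requests for the face list / delete endpoints described above.
# BASE is an assumption -- point it at your own DeepStack host:port.
import urllib.parse
import urllib.request

BASE = "http://localhost:5000"


def face_list_request(base=BASE):
    """POST with an empty body lists all registered faces."""
    return urllib.request.Request(base + "/v1/vision/face/list", data=b"")


def face_delete_request(userid, base=BASE):
    """Build the urlencoded POST that removes a registered face."""
    data = urllib.parse.urlencode({"userid": userid}).encode()
    return urllib.request.Request(
        base + "/v1/vision/face/delete",
        data=data,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )


req = face_delete_request("face_id")
# urllib.request.urlopen(req)  # uncomment against a running server
```

Deleting a face and re-registering it with fresh images is effectively the re-training flow asked about earlier in the thread.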

2 Likes