Face and person detection with Deepstack - local and free!

The only thing of real concern is any significant change to the license; I doubt users would notice the other differences.

Let me see if I understand this right… must I have motion detection to trigger the image_processing routine?

No, you can trigger the service in multiple ways; motion is just an example.

What should I do if I want to trigger when the camera sees a person, car, or other object?
Do you have an example?
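One approach, outside of Home Assistant's motion events, is to poll camera snapshots yourself on a schedule and check Deepstack's predictions for the labels you care about. A rough sketch in Python follows; the camera URL, Deepstack port, and interval are all placeholders:

import time
import requests

# Grab a camera snapshot on a fixed interval and run Deepstack object detection
# on it, reacting when a person or car shows up. URLs and ports are placeholders.
CAMERA_URL = "http://192.168.1.50/snapshot.jpg"
DEEPSTACK_URL = "http://localhost:80/v1/vision/detection"

while True:
    frame = requests.get(CAMERA_URL).content
    result = requests.post(DEEPSTACK_URL, files={"image": frame}).json()
    labels = [p["label"] for p in result.get("predictions", [])]
    if "person" in labels or "car" in labels:
        print("Detected:", labels)
    time.sleep(10)  # scan every 10 seconds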

I wanted to share what I did to get teach face working when it didn't seem to be working in the Developer Tools.

I am running this on a VM using VMware, so it seems I have more flexibility to run commands from the terminal. Others will be able to refine this; it is a very basic example. But I ended up writing a script and putting it in the /config/www folder that I created.

Using the example documents in the DeepStack documentation linked below, I ended up with the following script, called Teach.py. You will need to adjust the port and path as needed, but this is how I did mine.

import requests

# Register one reference photo with DeepStack; adjust the port (83) and path as needed.
image = open("/usr/share/hassio/homeassistant/www/Raw_Image/1.jpg", "rb").read()
requests.post("http://localhost:83/v1/vision/face/register",
              files={"image": image}, data={"userid": "Batman"})

I placed about 10 images of Batman in the www/Raw_Image folder and numbered them 1.jpg through 10.jpg to make editing the script easier. You can use more or fewer images, or name them something different; it's all up to you.

Next, from the terminal, I ran the following command.

python3 /usr/share/hassio/homeassistant/www/Teach.py

Once that processed, I used the Visual Studio Code editor in HA to change the script to the next image (1.jpg, 2.jpg, 3.jpg… and so on) until I had taught it all 10 images.

If you would like, you should be able to expand the script to train many faces at once or in a loop; this was a very basic example I did to see if I could get it working. A sketch of such a loop follows after the link below.

https://python.deepstack.cc/face-recognition
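As a starting point, a loop version of Teach.py might look something like this (a sketch; it assumes the same port, folder, and 1.jpg-10.jpg naming used above):

import requests

# Register reference images 1.jpg through 10.jpg for one person in a single run.
# The port, folder, and userid follow the example above; adjust for your setup.
for i in range(1, 11):
    path = f"/usr/share/hassio/homeassistant/www/Raw_Image/{i}.jpg"
    with open(path, "rb") as f:
        requests.post("http://localhost:83/v1/vision/face/register",
                      files={"image": f}, data={"userid": "Batman"})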

Is there a general consensus on a reliable confidence level for face recognition (not face/person detection)? And how many images are people using to train a single face? Is there such a thing as too many images? Would training with multiple images of the same person but with slightly different hair styles, clothing, etc., help Deepstack or hurt it? I cannot reliably get Deepstack to recognize me from my front doorbell camera. I've trained Deepstack using images taken from that exact same front doorbell camera, same angle, same background, but I may only get it to recognize my known face 25% of the time. I've even installed a motion-activated spotlight trained directly at the person's face day + night, so a dark face/poor lighting isn't an issue. It will usually detect that there is in fact a face, though. Just won't detect that it's actually me.

Edit: wait, can confidence for face recognition even be adjusted? I see it as an option with a default for the separate person/object detection component, but I don't see it as an adjustable option for face recognition.

@ManImCool Deepstack uses the MobileFaceNet architecture, which you can read about at https://arxiv.org/abs/1804.07573. I haven't got detailed answers to your questions yet, but generally more images are better: your face photo is compared to all the uploaded reference images, so the more reference images, the better the chance of a good match.
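On the confidence question: the raw recognition endpoint does return a confidence value per match, so you can at least inspect the numbers directly even if the integration doesn't expose a threshold (a sketch; the port and image path are placeholders):

import requests

# Send a snapshot to Deepstack face recognition and print each matched userid
# with its confidence score. Port and image path are placeholders.
image = open("/config/www/snapshot.jpg", "rb").read()
result = requests.post("http://localhost:83/v1/vision/face/recognize",
                       files={"image": image}).json()
for face in result.get("predictions", []):
    print(face.get("userid"), face.get("confidence"))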


Just curious, but has anyone else found Deepstack's object identification ability to be a little underwhelming? Before now I'd only tried Rekognition… but I've had HA auto-scanning 4 exterior 1080p cameras for the last few days, and its identification of well-lit and relatively straightforward objects has been inconsistent at best. It is hard to beat a local image processing solution, but even with -e MODE=High enabled, it just doesn't seem reliable enough for day-to-day person detection. Is YOLOv5 expected to significantly improve Deepstack's object detection?


You can set a higher confidence threshold to reduce these. Note that YOLOv3/v5 are well documented and are pretty standard in 'state of the art' services.
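For what it's worth, the detection endpoint can also accept a min_confidence value, so weak hits can be filtered at the source (a sketch; the port, path, and 0.8 threshold are placeholders, and support may depend on your Deepstack version):

import requests

# Ask Deepstack to drop detections below 80% confidence via min_confidence.
# Port, image path, and threshold are placeholders.
image = open("/config/www/snapshot.jpg", "rb").read()
result = requests.post("http://localhost:80/v1/vision/detection",
                       files={"image": image},
                       data={"min_confidence": 0.8}).json()
for obj in result.get("predictions", []):
    print(obj["label"], obj["confidence"])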

Two questions, and apologies if they have already been answered; I tried searching but didn't have much luck.

  1. I initially got the noavx Docker container installed and working, but it turns out my computer supports AVX. I tried switching to the AVX version (i.e. just using deepquestai/deepstack:latest), but I don't get an activation window when I go to the local Deepstack container URL, and when I try to run the test API call I get an error that the activation key is invalid.


    (Screenshot: the red-boxed part is missing.)

  2. I see that the Coral stick is unsupported and that the Intel Movidius is preferred. Does this work in Docker, or just on an RPi?

@fuzzymistborn try the image linked in https://forum.deepstack.cc/t/deepstack-beta-release/325

The Deepstack guys have been busy, but I haven't had many updates recently, so I don't want to comment on supported hardware going forward.

I'll give that beta a go.

And thanks re hardware. It's not a huge deal, as I'm only processing 2 cams most of the time, so I was more just curious than anything.

Hi, I'm very happy with this Deepstack setup. It works perfectly, especially when using a separate motion detector sensor instead of the motion detection in the camera. With a separate motion sensor you can always record the camera image at the same spot, so the face is spot on in the picture.
I still have a question:
I want to automate the teaching part. Every time it detects an unknown face, or one that doesn't match the known faces, I want to hit a button (input_boolean) to teach the last picture. For now I want a hardcoded name in it, but in the future I want to make it a variable in Lovelace.

When using Developer Tools / Services, it works perfectly:
{
  "name": "mylove",
  "file_path": "/config/www/deepstack/motion2facedetection/motion2facedetection_latest2.jpg"
}

But I can't get it to work in an automation, and I can't find documentation on how to translate this into one. Can somebody help me with an example?
That would be great!

Thanks.
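Until a full automation example turns up, one workaround in the spirit of Teach.py earlier in the thread is to register the saved snapshot directly against the Deepstack API (a sketch in Python; the host and port are placeholders, and the name and file path are taken from the service call above):

import requests

# Register the latest saved face snapshot under a hardcoded name, mirroring the
# service-call data shown above. Host and port are placeholders.
path = "/config/www/deepstack/motion2facedetection/motion2facedetection_latest2.jpg"
requests.post("http://localhost:5000/v1/vision/face/register",
              files={"image": open(path, "rb").read()},
              data={"userid": "mylove"})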

It works great, but how can I make the recognition process faster?
I would like more real-time detection. Is there a solution?

Currently I have motion sensors that trigger an analysis; I would like to trigger it much more often. Is there a workaround?
Every second would be great, but unfortunately it can't keep up with that.

@D0doooh If you can sacrifice accuracy for speed, check out https://github.com/robmarkcole/tensorflow-lite-rest-server


Hi, I have installed Docker, Deepstack, and the config YAML in Home Assistant, but I get an error:
HTTP connection pool … connection refused. My Home Assistant is installed on a Raspberry Pi with IP address
192.168.1.xx, port 8123, and my Deepstack container has IP address 172.17.0.1 and port 5000.
When I enter localhost:5000 in my address bar, I get the deepstack.cc main page with an activated session. I'm lost!
Do you have any idea?
Regards
Giloris
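One way to narrow this down is to check which address actually answers from the machine running Home Assistant (a sketch in Python; the two hosts are the addresses from the post above, and you may need the Raspberry Pi's LAN IP rather than the Docker bridge IP):

import requests

# Probe both candidate Deepstack addresses from the HA host and report which
# ones respond. Replace the hosts below with your own values if they differ.
for host in ("http://localhost:5000", "http://172.17.0.1:5000"):
    try:
        status = requests.get(host, timeout=5).status_code
        print(host, "->", status)
    except requests.exceptions.ConnectionError as err:
        print(host, "-> connection refused:", err)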

I'm reading the README on GitHub, and it recommends 8GB of RAM, but I did not see anything about the CPU itself. I just got a cheap used NUC (Intel NUC6i3SYH, i3-6100U, 8GB DDR4 RAM, 480GB SSD) and am planning on doing a fresh Docker install of HA, OpenZwave, and a few others. The NUC will be 100% dedicated to HA, though. I have two cameras doing object detection (1080p); will this NUC be good enough to return fairly fast image processing times?

Hi, I installed Docker and the Deepstack container, and it runs.
But http://localhost:80/v1/vision/detection doesn't work; I get a 404 page not found. I have tried every deepquestai/deepstack version!
Do you have any idea?
Regards
Giloris

Hi,

Not sure if this is the right thread for this, but I will give it a try.
I am running HASS-Deepstack-object as a custom component in my HA install, and the RPi Coral REST server (with a Coral stick) on an RPi3 (just seeing now that I should move to https://github.com/robmarkcole/tensorflow-lite-rest-server).

Anyway, inspecting my network with Wireshark, I can see the data exchanged between my HASS machine and the RPi3 (4 cameras), and a good amount of TCP errors (both the RPi and the HASS box are wired).

(Chart: bars are TCP errors; lines are traffic to and from the HASS box and the RPi3 running Coral.)

Hi All,

I am running DeepStack on an RPi4 with an NCS2, and the resulting bounding boxes on the pictures always seem to be offset towards the top of the image.

Does anyone else have this issue?

When I run the same camera image from Home Assistant through a Docker version of DeepStack running on my Mac, the boxes appear correct.

Thanks!
Paul
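One way to debug this is to draw DeepStack's raw bounding boxes onto the image yourself, bypassing the Home Assistant integration, to see whether the offset comes from DeepStack on the NCS2 or from the component. A sketch in Python follows; the port and file paths are placeholders, and it uses Pillow:

import requests
from PIL import Image, ImageDraw

# Run detection on a saved frame and draw the returned boxes directly, using
# the pixel coordinates DeepStack reports. Port and paths are placeholders.
path = "/config/www/snapshot.jpg"
result = requests.post("http://localhost:80/v1/vision/detection",
                       files={"image": open(path, "rb").read()}).json()
img = Image.open(path)
draw = ImageDraw.Draw(img)
for p in result.get("predictions", []):
    draw.rectangle((p["x_min"], p["y_min"], p["x_max"], p["y_max"]), outline="red")
img.save("/config/www/snapshot_boxes.jpg")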