Face and person detection with Deepstack - local and free!

I was able to install and run Deepstack on my RPi 3 without external accelerators using these instructions. The load average was terrible (over 2), so it's definitely not for any kind of long-term use. Now I have Deepstack running in Docker on a Win10 laptop; the CPU load goes to 100% when an image is processed.
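For anyone setting this up, a typical DeepStack CPU container start looks roughly like this (a sketch based on DeepStack's quickstart; the host port and volume name are arbitrary choices, adjust to your setup):

```shell
# Start DeepStack with object detection and face recognition enabled.
# Host port 80 maps to DeepStack's internal port 5000.
docker run -d \
  -e VISION-DETECTION=True \
  -e VISION-FACE=True \
  -v localstorage:/datastore \
  -p 80:5000 \
  deepquestai/deepstack:latest
```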


Just out of interest, I run the gpu version on an i3-4130t with a GTX1650 fanless gpu.

It very rarely fails to respond even though it's called every second, despite the machine being in use by the kids at the same time (Minecraft, YouTube, etc.).

The gpu is clearly a big advantage here.

As an aside, I think I might switch to TensorFlow, as Deepstack is looking like abandonware.

Hi,
I've added an issue: I can't get the image to be opened before sending it to Deepstack.

I get the error UnidentifiedImageError

Hey, I'm actually experiencing the same thing as @CarpeDiemRo

I tried curling and it works, I get the success message.
In HA, however, nothing shows up.
I looked at the logs on my Docker instance and I can see that it's working.
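For anyone wanting to reproduce that curl check against a running DeepStack instance (assuming it is exposed on host port 80; the image path is a placeholder):

```shell
# POST a test image to DeepStack's documented object detection endpoint.
# A working instance replies with JSON containing "success" and "predictions".
curl -X POST \
  -F image=@test-image.jpg \
  http://localhost:80/v1/vision/detection
```

If this succeeds but Home Assistant shows nothing, the problem is on the HA side (URL, port, or the image never reaching the integration), not DeepStack itself.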

Hi, I run tensorflow-lite-rest-server in Docker and tested it using a curl command. It works fine. Is there any guidance for using this with Home Assistant? Any code example for an RTSP stream? Thank you.
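One rough way to test an RTSP camera from the command line is to grab a single frame with ffmpeg and post it to the server. The RTSP URL is a placeholder, and the endpoint path is an assumption; check the tensorflow-lite-rest-server README for the exact route your version exposes:

```shell
# Grab one frame from the camera stream (URL is a placeholder)
ffmpeg -i rtsp://user:pass@192.168.1.10:554/stream -frames:v 1 -y frame.jpg

# Post it to the tflite server; the endpoint path may differ between versions
curl -X POST -F image=@frame.jpg http://localhost:5000/v1/object/detection
```

In Home Assistant the equivalent would be a camera entity pointed at the stream plus an image processing integration pointed at the server, but the command above is the quickest way to confirm the pipeline works end to end.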

If I understand correctly, the error is because the images from motionEye return 401 Unauthorized…
I'm not sure how the Python code in image_processing.py would receive the image bytes from the camera…
When I test the camera URL it works fine; I'm not sure how to proceed.
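One way to narrow down a 401 like this is to test whether the snapshot URL needs credentials and which auth scheme it expects. The host, port, and path below are placeholders (motionEye may instead require its signed-URL scheme; the point is only to diagnose whether the 401 is an authentication issue):

```shell
# Try basic auth against the snapshot URL
curl -u user:password http://motioneye-host:8765/picture/1/current/ -o snap.jpg

# If that still returns 401, try digest auth
curl --digest -u user:password http://motioneye-host:8765/picture/1/current/ -o snap.jpg
```

If one of these returns an image, the camera entry in Home Assistant needs the same credentials so the integration can fetch the bytes itself.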

Sorry if this has been asked before, but has anyone made add-ons for the face and object detection for Hass.io / Home Assistant OS? I'm running a beefy enough Home Assistant OS machine, so running this locally as an add-on would be easier than setting it up on an external device. If anyone has made add-ons, could you please link to the repos for adding to Home Assistant?

@aleksander.lyse this thread has a couple of references to using HACS to install this as an addon.

I was thinking more of the Docker installations. It should be possible to offer automatic installation of the Docker container directly. Yes, you can use Portainer, but it would be easier to just install it. DOODS and Frigate both managed to offer installation as add-ons, so it's possible.

Also, has anyone written an automation or script that, when triggered, takes for example 10-15 frames and analyses them for the best confidence? Sometimes the motion trigger fires when the person is still far away, but over the next couple of seconds the confidence should rise and give a potentially better hit. Another situation: a door opens, the trigger fires, and Deepstack analyses the frame, but the person doesn't step through the door until 2 seconds later, so there would be no person detection. How have people solved dilemmas like this, where the trigger can be unrelated but relevant (a door opens, but the person detection we want won't happen until some frames later)? For example, a counter of how many individual items are detected within 10-20 frames.
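The "best of a burst of frames" idea boils down to a running maximum over per-frame confidences. Here is a minimal sketch of just that logic; the hard-coded numbers stand in for per-frame person confidences that, in a real setup, would each come from posting a fresh camera snapshot to the detection endpoint and extracting the top "person" score from the JSON response:

```shell
# Keep the highest confidence seen across a burst of frames.
# The values below are canned stand-ins for real per-frame scores.
best=0
for conf in 0.41 0.12 0.83 0.77; do
  # awk handles the floating-point comparison portably
  best=$(awk -v a="$best" -v b="$conf" 'BEGIN{print (b>a)?b:a}')
done
echo "$best"
# prints: 0.83
```

An automation could fire the action only if the running maximum over the burst exceeds a threshold, which covers both the "person still far away" and the "person appears two seconds after the door opens" cases.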

I used the Portainer add-on and added object detection myself following Rob's instructions in this video:
The Hookup

Just fast forward the video to around the 12:40 mark. I'm just learning to use Docker myself and it was not that difficult to get going.

No issues with Docker or Portainer, but it would be easier to just have an add-on using Hass.io's own add-on (Docker) system, with config, auto-update, watchdog, and logs. Just so much easier.


I completely agree. It would be nice if it worked like installing the DOODS add-on. I have been looking at the code used by other add-on developers and I was going to try to build something myself.

Hello everyone, thanks for all the patience. Development of DeepStack has fully restarted and we have a new update for the CPU version. It is much faster and more accurate.
The gpu update is coming next week.
Run

deepquestai/deepstack:latest
deepquestai/deepstack:cpu-x4-beta

It supports both AVX and no-AVX.

We have a lot planned for release before the end of the year, including support for custom models in the next few weeks.

Thanks.


…would you also publish a Docker flavor for companion units like the Intel Neural Compute Stick 2 or Google Coral?


Thanks, we have this in our plans. Except for the Raspberry Pi Docker version coming in November, I can't give a timeline for the NCS and Google Coral support.

We plan to release a version for the nvidia jetson before the year runs out.

Note we just updated the Deepstack image again today to fix a bug when no detections are found.

Run

deepquestai/deepstack:latest

or

deepquestai/deepstack:cpu-x5-beta

Adding my other replies here as the forum prevents me from making further replies in the meantime.

Hi @alpat59, thanks for your interest in the RPi version. The images we just announced are desktop-based. The RPi version is coming in November and will have both NCS and non-NCS versions.


I am using DOODS right now with quite good success on detecting people from the security cameras in my garden.

However, I would like to detect individual faces to rule out family members from the detection results. How good are the results people are getting with face identification from security cameras with this setup?


@johnolafenwa great news…! I've been waiting a long time for an RPi4 CPU-only Docker version (without NCS and similar accelerators). Are the images you announced exactly what I'm searching for?
Thank you


I've tried running the latest version of Deepstack that was released in the past few days, but I'm getting the following:

Logger: custom_components.deepstack_object.image_processing
Source: custom_components/deepstack_object/image_processing.py:254
Integration: deepstack_object ([documentation](https://github.com/robmarkcole/HASS-Deepstack-object))
First occurred: 6:10:43 PM (4 occurrences)
Last logged: 6:31:05 PM

Deepstack error : Error from request, status code: 401

At one point I ran a comparison of this running on a Pi4 with and without the Coral stick. Inference times were significantly longer without: 1-2 seconds with the Coral and 8-10 seconds without. My use case is a gate with cars, people and trucks, so I ended up ditching the Pi for an i5 NUC, using Rob's deepstack rest server on the NUC to do the processing. Works like a charm. If inference times aren't that important then by all means do it without the Coral. I just didn't want cars/trucks waiting at my gate for 20 seconds before it opens. My gate now opens about 3-4 seconds after processing all my automation steps.

Jeff


I was faced with the same problem. I bit the bullet and powered through to this solution:

It was replicated by another user, so it's solid.
Jeff
