Face and person detection with Deepstack - local and free!

Hi all, just trying out the CC for object detection for the first time. Got the setup working and have tested a few image files already. Thanks Rob!!

I do have one question about hardware acceleration… having read through a lot of this thread, there are mentions of using the Intel NCS2 and even the Coral; are these supported in the “latest” Deepstack version?

Hi tennbaum, I’m a newbie to this custom component, but thought I would give it a try to help out. Looking at the code, I’m not sure, but it may be trying to read an image file from your camera and not seeing that file. One thing to try instead is to create a “local file” camera instead of a real camera. This is what I have been doing just to test the detection capability.

camera:
  - platform: local_file
    file_path: /config/snapshots/deepstack_input_sample.jpg
    name: deepstack_input_sample

I place a picture file in /config/snapshots/, rename it deepstack_input_sample.jpg, and then call the image_processing.scan service for this camera’s entity_id and see what happens.
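As an action in a script or automation, that would look roughly like this (the entity name below is just what the deepstack_object platform typically creates for that camera; use whatever entity actually shows up in your states list):

  - service: image_processing.scan
    entity_id: image_processing.deepstack_object_deepstack_input_sample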

If that doesn’t work, then I would use curl to send that picture to Deepstack and see if you get something returned:

curl -X POST -F image=@deepstack_input_sample.jpg 'http://DEEPSTACKIP:PORT/v1/vision/detection'

hey wmaker (and all)

thanks for picking this up. gr8 idea, i wanted to “test” the server itself somehow, but didn’t know how to.
(so thanks)

I added this to configuration.yaml (under my other cameras):

  - platform: local_file
    file_path: /config/www/testimage.jpg
    name: deepstack_pic

and now it… WORKS!

so how do I get it to work directly with my cam? or should I just create an automation to take a snapshot and keep sending “local image” files for processing?

and another thing - though I can see a line registered on the Deepstack server, the “image_processing.deepstack_object_deepstack_pic” entity stays “unknown”.

that’s the log:
Depstack error : Error from Deepstack request, status code: 403

I don’t know how HA’s “image_processing” actually tries to get an image from the camera, and it may not be able to. So yes, the automation you suggested would be the path to take.
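Something along these lines might work as a starting point (untested sketch; the trigger and the real camera entity are placeholders, and the snapshot path has to match what the local_file camera points at - it may also need to be allowed via allowlist_external_dirs):

automation:
  - alias: "Snapshot then Deepstack scan"
    trigger:
      - platform: time_pattern
        minutes: "/1"
    action:
      # overwrite the file that the local_file camera reads
      - service: camera.snapshot
        data:
          entity_id: camera.my_real_camera
          filename: /config/www/testimage.jpg
      # give the snapshot a moment to be written
      - delay: "00:00:02"
      # then run Deepstack on the fresh image
      - service: image_processing.scan
        data:
          entity_id: image_processing.deepstack_object_deepstack_pic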

You’re getting a 403 status code… is that for the local file camera?
Either way, 403 means the image_processing component made contact with “something” (quite possibly the Deepstack server) on the given port, and that “something” doesn’t accept requests on that port. When you set up the Deepstack container, did you use a port mapping like 81:5000? If you did, you should be using port 81 and not 5000.
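For example, if the container was started with something like this (just a sketch - your exact flags may differ), the -p 81:5000 part means the host listens on 81, and 81 is the port HA has to use:

docker run -e VISION-DETECTION=True -v localstorage:/datastore -p 81:5000 deepquestai/deepstack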

Inspired by @robmarkcole, I created a Home Assistant - Torchserve integration. It has a similar configuration to deepstack - it creates image processing entities for cameras. The beauty is that you can now use the torchserve infrastructure instead of deepstack. AWS now even has torchserve images.

I also created a torchserve streamlit UI and a torchserve Dockerfile to build an image on top of the official torchserve image, where custom models can be trained or YOLO can be used.

In the repository above I included .MAR files for a 1280x1280 Yolov5 small object detection model and a Squeezenet image classification model. If anyone wants to try them - give me some feedback.
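If you have never used torchserve before, serving one of the .MAR files and sending an image at it is roughly this (the model and file names are just placeholders for whatever you pull from the repo):

# serve a model archive from a local model-store directory
torchserve --start --model-store model_store --models yolov5=yolov5.mar

# then send an image to the default inference port
curl http://localhost:8080/predictions/yolov5 -T test.jpg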


Hi all, just had a problem where deepstack was recognising objects but the face recognition command was making the container hang for exactly 1 minute if a face was in the image. Turns out it was a processor issue on the virtual machine (Proxmox). In case anyone else ever has the same problem: https://forum.deepstack.cc/t/no-response-from-v1-vision-face-register/233/4

Hi! What would be the pros/cons of Torchserve vs Deepstack?
Thanks.

hey @wmaker and all

for testing purposes I’m running deepstack on my Windows machine. Once I get it to work, I’ll set it up on my TrueNAS as a jail, or as a container on a Linux machine, or on my long-overdue Proxmox box that needs to be reconfigured/rebuilt (hopefully that’ll be the “boost” I need to get it done…)

I went over the installation docs for deepstack on Windows, but they seem to be lacking…
It seems that port 5000 is “working”, as I can see the lines adding up when I call the image processing service.
p.s.
yes, all testing is being done with the “still image entity” you suggested earlier (thanks for that).
so I take it that as a next step I should try different ports…
I’ll try it and report back.
cheers

Hi all

Two quick questions:

  • If deepstack only processes an image when we invoke the service, why is it using CPU while idle? What is it doing?
  • Is there a way to correct a face detection, like sending a detected face and saying it’s not that person?

Thank you in advance :slight_smile:

Good point about false detections - I’ve just been doing tests, and have trained with 4-5 photos of myself, and 4-5 of my 11-year-old son. When I test it with either myself or my son, it seems to get it wrong most of the time. We do look quite similar, but I’m surprised that it is getting it so wrong.

Then the other day someone pressed my doorbell, and deepstack gave a 79% match for my face - the guy didn’t look anything like me!

@wmaker From looking at the code, torchserve is much more actively supported and documented. Half of the deepstack docs return 404. Now that it has gone open source, I think it will continue to lose share of the DIY model-serving market.

More importantly, it is a lot easier to train custom PyTorch models and load them into torchserve than to do that with deepstack. Deepstack supports ONNX, but I had a very hard time getting a custom model going - could not really get it running. YOLO object detection switched to PyTorch in v5, so I am not sure deepstack will use it now. Getting Yolov5 object detection or training custom classification nets on torchserve was easy.

Deepstack face recognition seems to be easier to get going, but I found the accuracy to be terrible.

@robmarkcole What if we merge our repos into one component that connects HA image processing to either service? My current version is almost a drop-in replacement. We just need to align the config settings, event naming, and which files are saved and how.

Given torchserve is easier to use with multiple models, as a next step I wanted to allow the HA component to support pipelines, where an image goes to an object detection model with certain targets and confidence levels, the resulting crops are then sent to an image classification model, and the output is piped out as events. This will make things like finding a person crop with Yolov5 or Fastrcnn and then using a custom/OOTB super-fast image classification model (like squeezenet) to detect a name easy. Yolov5 detects objects very well, and I trained a couple of PyTorch models to classify my own car and my own dogs to drive automations.

@Alex_Pupkin, any experience on running pytorch serve on a Jetson Nano device?

Seeing the same on my end (detection works, face registering times out) using v2021.02.1. It times out after 60 seconds and the DeepStack log entry isn’t posted until the timeout occurs. Bummer :frowning:

@jodur Right now I have a pretty heavy setup with a Ryzen 5950X CPU and a 12GB Nvidia Titan Xp. The CPU does not matter, I think, if you build your images on GPU. One model worker uses 500MB-2GB of GPU memory from what I can tell, depending on the model size (fastrcnn, which from what I can tell beats yolo easily, is about 1.7GB). The Jetson Nano seems to have 4GB, so I think it will work fine. I ran it on a 2GB Quadro K2200 before.

Training models will be tighter, but Nvidia has Jetson container images, and with small batch sizes I think you could train Yolo for object detection (easy to train, you just need to make small changes in the export script to export the right torch model - I have a GitHub repo for that). Image classification models are not that memory intensive.

Hello.
I have been running Deepstack for a while on a NAS where I also have MotionEye with 3 cameras.
It worked fine with an “image_processing” time of 200-400 ms per camera in MotionEye, but the time just went up after I added 2 extra cameras, so I am now at 14-20 sec for each “image_processing”.
I have tried setting the mode to Medium and Low, but it just makes the processing time longer.

My question is then:
I am running HA on a NUC via Proxmox and am considering putting Deepstack on it too, as it is better than my NAS. But can I set up a VM with Deepstack on Proxmox? If so, how? I have tried searching on Google but could not find anything.

Also, what exactly are these two parameters for “image_processing” doing?

timeout: 5
scan_interval: 10

Very nice integration. It works fast. I use it at my front door. As soon as it detects me, I get a notification on my iPhone asking me if I want to open the door. In the future, I’m thinking of making it more robust in the background with three factors: face recognition (just that isn’t enough, since you can fool face recognition with a picture of someone), Bluetooth, and GPS trackers.

That said, I now want to delete all the faces and start again because I made some mistakes. I can’t find the solution anywhere. Can someone help me? Thanks!


Hi! Is it possible to run the “docker part” on the same machine as Home Assistant if it is an Intel NUC? I use Home Assistant OS, so I am not sure if I can install and use docker.
Any help?
P.S.
So I installed Portainer and added a new container with:
image - deepquestai/deepstack
ports - 83:5000 (host:container)
volume - localstorage (local, which I just created in the volumes menu) mounted at /datastore
environment - VISION-FACE=True
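(I believe that corresponds roughly to this docker run - posting it in case it helps spot a mapping mistake on my side:)

docker run -e VISION-FACE=True -v localstorage:/datastore -p 83:5000 deepquestai/deepstack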

And then wrote this in Home Assistant config:

image_processing:
  - platform: deepstack_face
    ip_address: 192.168.1.14
    port: 83
    timeout: 120
    save_file_folder: /config/snapshots/
    save_timestamped_file: True
    save_faces: True
    save_faces_folder: /config/faces/
    source:
      - entity_id: camera.recognition_camera
        name: face_recogniser

After that I tried to fire the image_processing.scan service and got a 500 error:

Depstack error : Error from Deepstack request, status code: 500

And my portainer log:

[GIN] 2021/02/22 - 17:27:37 | 500 |          1m0s |    192.168.1.14 | POST     /v1/vision/face/

And yes, I can access deepstack’s page on localhost:port

Any ideas what I got wrong?
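One thing I can still try, to take Home Assistant out of the equation, is posting an image straight to the face endpoint with curl (same pattern as the detection example earlier in the thread, using my IP/port and the endpoint from the Portainer log):

curl -X POST -F image=@test_face.jpg 'http://192.168.1.14:83/v1/vision/face/'

If that also hangs or returns a 500, the problem is in the container itself rather than in my HA config.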

It may just be me, but most of the ‘faces’ APIs I have tried are simply broken (version 2021.02.1 for CPU). They either return immediately with nothing, or they time out after a minute. The only one that works for me is face/list, which of course returns 0 faces. To answer your first question, yes, I have gotten object detection working using Portainer in Hass.io (officially Home Assistant).

Hmm, so your deepstack integration was working and now it does not?
But it runs locally on your machine, so what could cause the change, if Home Assistant can correctly send a request to the deepstack API?

Maybe you have tensorflow working too? If so, can you share some details on how to make it work?

I’m running 2021.02.1 on my Jetson Nano 2GB and it works OK with facial recognition (admittedly I need to train it with some more samples, but it is working). I use Robin’s deepstack-ui for training/testing.
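If you don’t want to use the UI, registering a face straight against the DeepStack API should also work; roughly this, with your own host, port and person name (the register endpoint is the one mentioned in the deepstack.cc thread linked above):

curl -X POST -F image=@person.jpg -F userid=PersonName 'http://DEEPSTACKIP:PORT/v1/vision/face/register'

I believe repeating it with a few different photos for the same userid just adds more samples for that person.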