Face and person detection with Deepstack - local and free!

Object detection is working a treat, but for some reason face detection always times out at 1m0s in the Docker DeepStack container?

[GIN] 2021/04/02 - 10:59:40 | 200 | 1.484219782s | 192.168.86.87 | POST /v1/vision/detection
[GIN] 2021/04/02 - 10:59:51 | 500 | 1m0s | 192.168.86.87 | POST /v1/vision/face/

Struggling to work out why?

Only set to face detection, not recognition, to try to free up a few resources!
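For anyone wanting to reproduce this outside Home Assistant, the two endpoints from the log above can be hit directly. A minimal sketch using Python requests, assuming the container is exposed on the host's port 80 (adjust the IP, port and image path to your setup):

```python
import requests

# Adjust to wherever your DeepStack container is exposed
DEEPSTACK_URL = "http://192.168.86.87:80"

with open("test.jpg", "rb") as f:
    image_data = f.read()

# Object detection - the request that returns 200 in the log above
r = requests.post(f"{DEEPSTACK_URL}/v1/vision/detection",
                  files={"image": image_data}, timeout=90)
print("detection:", r.status_code, r.text)

# Face detection - the request that times out with a 500 in the log above
r = requests.post(f"{DEEPSTACK_URL}/v1/vision/face",
                  files={"image": image_data}, timeout=90)
print("face:", r.status_code, r.text)
```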

Update:

AVX? Maybe my Synology Intel Celeron CPU does not support AVX. Will check and report back!

I'm running DeepStack on Unraid. Object detection is working, but face detection gives the following error. I have tried using it with a Blue Iris cam and a local image cam.

```
2021-04-02 09:44:22 ERROR (SyncWorker_36) [custom_components.deepstack_face.image_processing] Depstack error : Error from Deepstack request, status code: 500
```

Just tested on a Windows machine and face detection works perfectly… back to a Linux Docker container and it fails, but object detection works!

Must be a bug!

If you don't mind me asking, how much memory is on your Windows machine, and how much is on your Linux machine?

16 GB on the Windows machine, 12 GB on the Synology Docker host.

Thanks. The reason I ask is that I have the same experience: DeepStack doesn't work on my 8 GB Docker host but does work on my 16 GB Windows machine.

Fingers crossed someone spots the issue and resolves it… what CPU is in each machine?

x86 i7. You can follow this, or add to it if you like

I haven't tried DeepStack on Windows, but:

Dell R710, 40 GB RAM with plenty of it available, Docker running on Unraid.

Object detection works great; face detection times out or returns the error 500 I posted above.

I got it working by changing the repo in Unraid to the following:

deepquestai/deepstack:cpu-x5-beta
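For anyone running it outside Unraid's template, the equivalent docker run with that tag would look something like this (the port mapping and volume are just examples; as I understand it, the x5 build is intended for CPUs without AVX support):

```bash
docker run -d \
  -e VISION-DETECTION=True \
  -e VISION-FACE=True \
  -v localstorage:/datastore \
  -p 80:5000 \
  deepquestai/deepstack:cpu-x5-beta
```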

I was experiencing this; I used deepstack-ui for training.

But the ability to train faces seems kind of useless without there being an event for “face_detected”

I was hoping to use this to replace my Nest Hello and use TTS to announce visitors, but I have been unable to template out names from matched faces.

Face detection events come through - see example below

```yaml
- id: Incoming_Face_Detection
  alias: Incoming Face Detection
  mode: parallel
  trigger:
    platform: event
    event_type: image_processing.detect_face
  condition:
    condition: template
    value_template: "{{ (trigger.event.data.confidence|int) > 75 }}"
  action:
    - service: tts.google_say
      data:
        message: "Pretty sure it's {{ trigger.event.data.name }}"
        entity_id: media_player.google_home_hall
```

@robmarkcole thanks for an awesome tool, I've been using it a lot and have it integrated into my own alarm right now.

If I can suggest one thing: it would be great if the axes could be skewed so the selection box is better suited to the image. I have a road that runs diagonally across my camera image, and right now it sits exactly at the point where I either get false positives from the street or miss positives because the axes are too limited and miss people on the driveway. Having the option to make a hexagon shape would fix the problem.

Would that be possible in any way? Have a great Sunday!
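Not the same as having it in the integration, but one workaround in the meantime is to post-filter detections against a polygon zone in a small script. A rough sketch using shapely; the zone coordinates, and the normalised bounding-box values passed in, are placeholders you would adapt to the data in your detection events:

```python
# Rough sketch: filter detections with an arbitrary polygon zone instead of
# an axis-aligned box. The zone vertices and the example detections below
# are placeholders - adapt them to the bounding boxes your events provide.
from shapely.geometry import Point, Polygon

# Polygon drawn to follow the driveway and exclude the diagonal road
# (normalised x, y image coordinates, 0..1).
DRIVEWAY_ZONE = Polygon([
    (0.10, 0.95),
    (0.35, 0.40),
    (0.60, 0.35),
    (0.90, 0.55),
    (0.90, 0.95),
])

def in_zone(x_min: float, y_min: float, x_max: float, y_max: float) -> bool:
    """Return True if the detection's centre point falls inside the zone."""
    centre = Point((x_min + x_max) / 2, (y_min + y_max) / 2)
    return DRIVEWAY_ZONE.contains(centre)

# Example: a detection over the road is ignored, one on the driveway is kept.
print(in_zone(0.70, 0.10, 0.80, 0.25))  # outside the zone -> False
print(in_zone(0.40, 0.60, 0.50, 0.90))  # inside the zone -> True
```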

Has anybody here used the new Blue Iris DeepStack integration? I'm wondering, if I use that to trigger DeepStack processing directly, whether the component would pull the image based on the event trigger. Basically I'm trying to see if I can mostly cut out the middleman of HASS having to process/send the image.

The ideal scenario is: 1) Blue Iris detects motion, 2) sends the image to DeepStack, 3) DeepStack processes it and determines there's an object, 4) forwards that to HASS, which I can then use as needed.

Basically I like to use HASS to weed out some scenarios where I don’t need images sent from some cameras but do for others.

No go for me - I'm running DeepStack on a Jetson, and the new BI feature only seems to support running on the same host as BI…

Looks like it should be configurable to support an external IP address?

Do you have a URL/link to this integration?

“Integration” was the wrong word here in HASS land. By “integration” I meant that Blue Iris can now communicate with DeepStack directly. It’s in the latest BI update.

I couldn't find anyone with this issue, either here in this thread or on GitHub, so hopefully someone can tell me what I'm doing wrong.

When trying to register a face with DeepStack, I run into timeout errors. DeepStack is running, and object detection works if I use the VISION-DETECTION env with a picture (it detects a person). But after changing to VISION-FACE=True and running the Python example with the same image, the response is just "failed to process request before timeout".
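For reference, the register call is essentially the Python example from the DeepStack face docs, roughly like this (the image path and user ID are placeholders):

```python
import requests

# Register a known face with DeepStack (requires VISION-FACE=True).
# "user.jpg" and "person_name" are placeholders.
with open("user.jpg", "rb") as f:
    image_data = f.read()

response = requests.post(
    "http://localhost:80/v1/vision/face/register",
    files={"image": image_data},
    data={"userid": "person_name"},
    timeout=180,
)
print(response.status_code, response.text)
```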

I also increased the timeout to 3 minutes, still no luck. I can see the face/register POST request in the Docker logs, but nothing beyond that.

I can successfully query the API for face/list, which of course returns no faces, since, well, the training of the face fails.

Any ideas why it’s not registering a new face?

Hmm - I shouldn't have believed what I read (something about how, even though you could put in an IP, it had to be localhost). You are right - it can work with a Jetson.