Face and person detection with Deepstack - local and free!

No, I know; I just used a shorthand term, but ok.

I just tested the CPU beta, but it takes around 2 seconds to process each image, while the deepquestai/deepstack:latest image takes around 850 ms. Is anyone experiencing the same, or am I doing something wrong?
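If you want to compare latency between the two images yourself, a simple repeated-timing harness is enough. This sketch times a stub; swap in a real POST to your DeepStack endpoint (the URL and test image are placeholders, not part of this thread):

```python
import time

def time_request(make_request, runs=5):
    """Average wall-clock seconds over several runs of make_request()."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        make_request()
        total += time.perf_counter() - start
    return total / runs

# Swap the stub below for a real POST to
# http://localhost:5000/v1/vision/detection with your test image,
# then run once against each docker image to compare.
avg = time_request(lambda: time.sleep(0.01))
print(f"average: {avg:.3f} s")
```

Averaging over several runs matters here, since the first request after container start is often slower than steady state.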

Haaa I'm going mad :slight_smile:

when i test with

bash-5.0# curl -X POST -F image=@deepstack_salon_latest_person.jpg 'http://localhost:5000/v1/vision/detection'
{"success":false,"error":"Incorrect api key"}

i launch the container with this:

docker run -e VISION-DETECTION=True -e API-KEY=xxx -v localstorage:/datastore -p 5000:5000 deepquestai/deepstack

and configuration.yaml:

image_processing:
  - platform: deepstack_object
    ip_address: 127.0.0.1
    timeout: 30
    port: 5000
    api_key: xxx
    scan_interval: 600000000 # Optional, in seconds
    save_file_folder: /config/www/
    save_timestamped_file: False
    targets: 
      - person
      - cat
    source:
      - entity_id: camera.honor8_cuisine
        name: deepstack_person_detector
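As an aside, the `scan_interval` in the config above is in seconds, so the value shown effectively disables polling; a quick sanity check:

```python
# 600000000 seconds expressed in (365-day) years: automatic polling is
# effectively off, and scans only happen via image_processing.scan calls.
seconds = 600_000_000
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1f} years between automatic scans")
```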

Without the api_key in the configuration file and the docker command, it's a timeout.
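For anyone hitting the "Incorrect api key" response: the key set on the container with `-e API-KEY=xxx` has to accompany every request, including manual curl tests; the integration sends the one from its `api_key:` option. A minimal sketch of the idea (the `api_key` form-field name is my assumption about DeepStack's HTTP API, not something verified here; check the DeepStack docs for the exact field):

```python
# Sketch: the key given to the container via -e API-KEY=xxx must travel
# with every request. The "api_key" form-field name is an assumption.
def build_detection_form(image_bytes, api_key=None):
    form = {"image": image_bytes}
    if api_key is not None:
        form["api_key"] = api_key  # must match the container's API-KEY value
    return form

# Without the key, the server answers with an "Incorrect api key" error.
form = build_detection_form(b"<jpeg bytes>", api_key="xxx")
```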

So, what is the problem?!

EDIT:
it works with deepquestai/deepstack:cpu-x3-beta, I don't know why…

@mLaupet my own testing of the cpu-x3-beta image is that processing takes roughly twice as long as with the noavx image. This is expected, as the beta is not optimised yet. The purpose of the beta is to identify any significant issues in production; hopefully none arise! In the meantime, the cpu-x3-beta provides improved accuracy.

Hello.

I am running on linux with 8GB of memory.

I installed the custom component and DeepStack, and ran it with the following command:

docker run -e VISION-FACE=True -v localstorage:/datastore -p 5000:5000 --name deepstack deepquestai/deepstack

I added the image, and "/v1/vision/face/register" appears

However, when performing the scan it is always "unknown".

Does anyone know what it could be?


Ignore it, it worked. It shows "unknown" when there is no person at that moment; I thought it would change to 0.

I saw that it is in HACS, so is it already possible on HassOS / Hass.io?

great job.

Cool stuff! Can't wait to get started! I have 6 RTSP Wyze cams and will grow to 15. My plan was to install Blue Iris on a 6-year-old Core i3 Windows 10 laptop with 8 GB RAM (a cheap laptop when I got it) and use Deepstack to process motion events on each camera in real time, to determine if a person was detected and, if so, fire an event. I would also prefer to have facial recognition as well. Blue Iris will also be my NVR, so there will be continuous 24x7 recording. Would I be better off:

a) Running Deepstack alongside Blue Iris on the same laptop, or
b) Running Deepstack alongside Blue Iris on the same laptop but adding an Intel NCS to it, or
c) Buying a dedicated Raspberry Pi 4 with an Intel NCS solely for running Deepstack, or
d) Forgetting about it… it can't be done without a much beefier setup. Max # of cameras would be [please fill in].

Thanks! Nice work!

I doubt 15 wifi cams will do well with any type of NVR.
You should buy PoE cams.
Running Blue Iris on a laptop doesn't seem like a great idea… how will you hook up extra storage? Or won't you be recording any of your cams?
Lots of people have issues with deepstack running on Win 10. Not saying it won't work, but Linux seems to be preferred.
I think you need to go over to ipcamtalk.com and spend a week in the forums. You can learn more about everything I said here from that forum.

Can you share it as code, please?

Hmmm… OK. The only reason I was going with Win 10 was because it was a Blue Iris requirement. I wonder, though, if MotionEyeOs is sufficient if I offload the work to DeepStack for person detection and have that be my gateway for deciding whether I should get a notification.

I do plan to record, but plan to continuously upload to AWS S3. I'll have a Lambda that automatically rotates out the old footage.
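On the rotation Lambda: the core of it is just selecting keys older than a retention window. A sketch of that selection logic (the key names and the 14-day retention are placeholders; a real Lambda would feed in `list_objects_v2` results and pass the returned keys to `delete_objects`):

```python
from datetime import datetime, timedelta, timezone

def keys_to_delete(objects, retention_days, now=None):
    """objects: iterable of (key, last_modified) pairs, e.g. built from an
    S3 listing. Returns the keys whose last_modified is past retention."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [key for key, modified in objects if modified < cutoff]

# Hypothetical listing: one old recording, one recent one.
now = datetime(2020, 7, 20, tzinfo=timezone.utc)
objs = [("cam1/old.mp4", datetime(2020, 6, 1, tzinfo=timezone.utc)),
        ("cam1/new.mp4", datetime(2020, 7, 19, tzinfo=timezone.utc))]
print(keys_to_delete(objs, retention_days=14, now=now))  # ['cam1/old.mp4']
```

Keeping the selection as a pure function like this makes the Lambda easy to test without touching S3. (S3 lifecycle expiration rules can also do this without any code at all.)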

If I put Deepstack on a Raspberry Pi 4 with an Intel NCS, how many simultaneous camera streams could it handle for person detection (preferably also with face detection)? Not necessarily in parallel, but keeping the latency of all of them low (ideally < 3 seconds). Actually, though, it'll only process motion events at night, and while I have 2 cats and lots of windows, it should still be manageable traffic. So it should be able to handle it, correct?
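For the "how many streams" part, a back-of-envelope estimate helps: if inference takes roughly 2 s per image on CPU (the timing reported earlier in the thread) and queued events are processed one at a time, worst-case latency is simply queue depth times inference time:

```python
# Back-of-envelope latency if motion events from several cameras arrive
# together and are processed serially (both numbers are assumptions:
# the 2 s figure is the CPU timing mentioned earlier in the thread).
inference_s = 2.0   # approx. seconds per image on CPU
cameras = 6         # simultaneous motion events in the worst case

worst_case_latency = cameras * inference_s
print(f"worst case: {worst_case_latency:.0f} s")  # 12 s, well over a 3 s target
```

So even at 6 cameras, a simultaneous burst blows past a 3-second budget on CPU-class hardware; in practice the question is how often bursts actually coincide.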

Why PoE? Are wifi cams less efficient, or are you saying they'll clog up the wifi network? Could I scale by adding a second wifi router on a different channel? How about 6 wifi cams?

Will check out ipcamtalk.com

The way I use deepstack (and tensorflow) in HA is to use my cameras to detect motion and then call the image processing service via an automation.
I have never tried to do detection continuously. I think that would be quite a load on the computer deepstack is running on; I'm not even sure it is made for, or capable of, that.
Sounds more like you want to check out a couple of other threads on image processing:

And yeah, PoE cams are wired and therefore give better performance than wifi cams. Keeping as much wired as possible is the recommended approach.

A big thank you to @robmarkcole for your hard work on this, and also to the guys at Deepstack.
After some initial issues I'm now running the latest GPU beta in Docker on Ubuntu (i3, 8 GB RAM, running in the background of my desktop machine at the minute); everything is working well, alerts on face detection, etc.

The thing I'm now stuck on is how to view previous events in a simple way in Lovelace, preferably some sort of table with timestamp and identity, and ideally where clicking an entry displays the snapshot. Has anyone seen anything suitable? Does anyone have any better ideas?


I am not aware of a UI like that; it would be a pretty substantial software engineering task. One idea I had was to sit a proxy between HA and deepstack, use it to capture all events and images, and have it build a timeline. However, the front end is not my area of expertise; an example UI is here
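For what it's worth, the event-capturing part of such a proxy could start very small, just recording each detection before forwarding it. A sketch of the timeline bookkeeping only (the class and field names are made up for illustration):

```python
from datetime import datetime

class Timeline:
    """Accumulates detection events so a UI could render them as a table."""
    def __init__(self):
        self.events = []

    def record(self, identity, image_path, when=None):
        self.events.append({
            "time": when or datetime.now(),
            "identity": identity,
            "image": image_path,  # path to the saved snapshot
        })

    def rows(self):
        # Newest first, as a timestamp/identity table would want.
        return sorted(self.events, key=lambda e: e["time"], reverse=True)

tl = Timeline()
tl.record("jonathan", "/config/snapshots/face_1.jpg", datetime(2020, 7, 1, 9, 0))
tl.record("unknown", "/config/snapshots/face_2.jpg", datetime(2020, 7, 1, 9, 5))
print([e["identity"] for e in tl.rows()])  # ['unknown', 'jonathan']
```

The hard part, as noted, is the front end; the bookkeeping itself is trivial.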

@robmarkcole Hi, I just stumbled on your project and it looks amazing. I have a few questions, if you don't mind:

  1. I run Home Assistant in a virtual machine in Proxmox. How would I go about adding your module there?
  2. If your module requires a separate machine, can I install it in a virtual machine in Proxmox?

Thank you

I have my HA and Deepstack working on the same NUC with Proxmox, in 2 separate VMs.

I am not familiar with Proxmox so cannot comment; if it is using docker you should be fine.

Hi
I have an error in the HA logs, yet everything appears to work.
Every time my automation runs and calls the image_processing.scan service, I get an error, as per below. However, it does recognise faces and trigger notifications as requested. Any idea what's causing the error and how to stop it?

Log Details (ERROR)

Logger: custom_components.deepstack_face.image_processing
Source: custom_components/deepstack_face/image_processing.py:212
Integration: deepstack_face (documentation)
First occurred: 16:07:55 (1 occurrences)
Last logged: 16:07:55

Depstack error : Error from request, status code: 400

Automation yaml:

- id: '1594190607624'
  alias: Face recognition
  description: ''
  trigger:
  - platform: time_pattern
    seconds: /5
  condition: []
  action:
  - data: {}
    entity_id: image_processing.face_counter
    service: image_processing.scan

Configuration yaml:

image_processing:
  - platform: deepstack_face
    ip_address: 192.168.1.100
    port: 5000
    timeout: 10
    detect_only: False
    save_file_folder: /config/snapshots/
    save_timestamped_file: True
    show_boxes: True
    source:
      - entity_id: camera.my_esp32_camera
        name: face_counter

NB slowing the rate at which the service is called does not affect the errors; it still errors every time (just less often, obviously!). Running the service manually causes the same error.

Anyone have any idea what's causing it?

Thanks in advance

Jonathan

Just moved it to Telegram Bot

I'm banging my head against the wall with this one, and I'm sure it's something simple.

I'm trying to use save_file_folder to save the processed image for later use (ideally in a notification), however I can't get anything written to the folder.

Deepstack is installed in docker per the readme. I'm able to call it manually (via curl), and I've also had a correct detection of a car using the deepstack_object component from the camera feed, so I'm assuming the issue isn't at that end.

Relevant config is:

homeassistant:
  whitelist_external_dirs:
    - /config/snapshots

…

image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 5000
    save_file_folder: /config/snapshots/
    api_key: <redacted>
    # scan_interval: 60
    save_timestamped_file: True
    #roi_x_min: 0.35
    #roi_x_max: 0.8
    #roi_y_min: 0.4
    #roi_y_max: 0.8
    targets:
      - person
      - car
    source:
      - entity_id: camera.driveway_camera

I've tried triggering image_processing.scan from Developer Tools and also via this automation:

- id: '1595416418089'
  alias: Driveway Motion Detected
  description: ''
  trigger:
  - entity_id: binary_sensor.driveway_motion_190
    platform: state
    to: 'on'
  condition: []
  action:
  - data:
      entity_id: image_processing.deepstack_object_driveway_camera
    service: image_processing.scan

But no luck. Any suggestions on what I'm missing?

Edit: I see in some of the older screenshots that save_file_folder is an attribute of the entity; however, I can't see that on mine. Not sure if that's relevant or if it's been dropped in later versions.

Likely either a permissions issue, or there is no image to be saved.
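To rule out the permissions possibility, a quick check is whether the process can actually create a file in the folder; something along these lines, run from inside the HA container against the configured path:

```python
import os
import tempfile
import uuid

def folder_is_writable(path):
    """Try to create and remove a scratch file; this mirrors what the
    component needs to be able to do in save_file_folder."""
    if not os.path.isdir(path):
        return False
    probe = os.path.join(path, f".probe-{uuid.uuid4().hex}")
    try:
        with open(probe, "w") as f:
            f.write("ok")
        os.remove(probe)
        return True
    except OSError:
        return False

# e.g. folder_is_writable("/config/snapshots") from inside the HA container;
# demonstrated here against the system temp dir.
print(folder_is_writable(tempfile.gettempdir()))  # True on a normal system
```

If this returns False for the configured folder, fix ownership/permissions on the host volume; if True, the more likely culprit is that no target was detected, so there was no image to save.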