Face and person detection with Deepstack - local and free!

Not sure exactly why but it’s now working.

I changed two things:

  1. Updated whitelist_external_dirs to allowlist_external_dirs, in line with the updated documentation (see the snippet below). I don’t see this in the 0.113 release notes, but the documentation seems to have changed at the same time.
  2. I have an object to detect, so it’s possible that files are only written when objects are detected. @robmarkcole can you confirm?
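
For reference, the renamed option looks like this in configuration.yaml (the path here is just an example):

homeassistant:
  allowlist_external_dirs:
    - /config/www/deepstack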

Either way, happy camper now!

Yes, images are saved only when there is a positive detection.


Do you know how to add an external hard drive to VirtualBox to use it with DeepStack?

I just bought Blue Iris and I’m planning to use it with a 4 TB hard drive…

When we call image_processing.scan, what happens exactly? Does it just take a single snapshot from the camera and look at that one image? Or does it do multiple? Or does it use the camera snapshot URL? How does that part work?

I ask because if my motion detection goes off and DeepStack only processes a single frame, it may not detect that a person (intruder) is in the room, as the camera may not see the full body yet. If that’s the case, I’ll call image_processing.scan once every second until the motion detection clears. @robmarkcole is that what you’d recommend? Or is that already taken care of to a certain extent?

Thanks!

I can detect faces once, but the second time I call the service, I get:

Depstack error : Connection error: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))

If I restart the container it detects fine once, but from the second call onwards it keeps giving me the error.

Any ideas?

@johnolafenwa @robmarkcole By any chance, have you figured out how to work with custom models?
I’ve followed this one:
https://python.deepstack.cc/custom-models
I’ve created one with Keras, added it correctly to deepstack:

{'success': True, 'message': 'model registered'}

then restarted the container and sent a curl call with a test image:

curl -X POST -F image=@open_2307-07-48-30.jpg 'http://10.0.1.201:5000/v1/vision/custom/openlock'

response:

{"success":false,"error":"Custom vision endpoint not activated"}

I’ve tried running a new container from the deepstack image with an additional "-e VISION-CUSTOM=True", and it does show my model’s API (v1/vision/custom/openlock):

hassio:~$ docker run  -e VISION-CUSTOM=True -e VISION-DETECTION=True  -e VISION-FACE=True -v localstorage:/datastore -p 5000:5000 --name deepstack deepquestai/deepstack:cpu-x3-beta
/v1/vision/face
---------------------------------------
/v1/vision/face/recognize
---------------------------------------
/v1/vision/face/register
---------------------------------------
/v1/vision/face/match
---------------------------------------
/v1/vision/face/list
---------------------------------------
/v1/vision/face/delete
---------------------------------------
/v1/vision/detection
---------------------------------------
v1/vision/custom/openlock
---------------------------------------
v1/vision/addmodel
---------------------------------------
v1/vision/listmodels
---------------------------------------
v1/vision/deletemodel
---------------------------------------
---------------------------------------
v1/backup
---------------------------------------
v1/restore

but when I test it with the URL (as above), it just hangs and I never get a response.

I checked that the model appears in the models list:

curl -X POST 'http://10.0.1.201:5000/v1/vision/listmodels'
{"success":true,"models":[{"name":"openlock","dateupdated":"2020-07-26T18:05:01.38773756Z","size":5.084896}]}

Are you familiar with this?

@MSmithHA Calling scan will process the latest image only. If you want to process multiple consecutive images, you will need to call scan multiple times as appropriate.
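
If you want a loop, something like this sketch works (it needs the repeat action from 0.113+, and the entity names here are placeholders):

- alias: Scan while motion is active
  trigger:
    - platform: state
      entity_id: binary_sensor.room_motion   # placeholder motion sensor
      to: 'on'
  action:
    - repeat:
        while:
          - condition: state
            entity_id: binary_sensor.room_motion
            state: 'on'
        sequence:
          - service: image_processing.scan
            entity_id: image_processing.room_person_detector   # placeholder
          - delay:
              seconds: 1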

Thanks!

I’ve got everything working except for one thing… my automation is not firing. Based on the post regarding the new events, I think I should be able to add an entity_id to my event_data, as such:

- alias: Person Detected In The Kitchen While Armed
  condition:
    condition: or
    conditions:
    - condition: state
      entity_id: alarm_control_panel.home_alarm
      state: armed_away
    - condition: state
      entity_id: alarm_control_panel.home_alarm
      state: armed_home
  trigger:
  - platform: event
    event_type: image_processing.object_detected
    event_data:
      object: person
      entity_id: image_processing.kitchen_camera_1_person_detector
  action:
  - data:
      entity_id: media_player.security_announcements
      message: Person Detected In The Kitchen
    service: tts.google_say
  - data_template:
      entity_id: media_player.security_announcements
      volume_level: 0.35
    service: media_player.volume_set

When I look at image_processing.kitchen_camera_1_person_detector, it shows a person was detected, and my alarm is armed; however, the automation shows that it has never fired. Does entity_id in event_data not work? Or is there a different way I should go about it? I also tried setting the entity_id in the trigger to be my camera (camera.kitchen_camera_1), but still no luck. I have multiple cameras and I want to know which one detected the person.
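
In case it helps with debugging, here is a stripped-down listener I put together that just dumps the raw event payload to a notification, so I can see exactly which keys the event_data carries (a sketch only):

- alias: Debug object_detected events
  trigger:
    - platform: event
      event_type: image_processing.object_detected
  action:
    - service: persistent_notification.create
      data_template:
        message: "{{ trigger.event.data }}"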

Anyone know?

Thanks!!!

@robmarkcole any chance you’re familiar with custom models?

Custom models are a work in progress.


Sorry if this is a very noob question, but if I want to utilize both deepstack_object and deepstack_face, do I need to run two separate containers, each with a different port? I currently have deepstack_object detecting persons correctly with port 5000. Adding a nearly identical new config entry under image_processing, but changing it from deepstack_object to deepstack_face, rebooting, and calling the scan service with the new image_processing.face_counter entity doesn’t work.

you can run the container with both:

 -e VISION-DETECTION=True  -e VISION-FACE=True
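
For example, extending the docker run command from earlier in the thread (volume, port, and image tag as in that example; adjust to whichever release you are on):

docker run -e VISION-DETECTION=True -e VISION-FACE=True -v localstorage:/datastore -p 5000:5000 --name deepstack deepquestai/deepstack:cpu-x3-beta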

@robmarkcole

Environment:
A. RPi4 running HA, IP 192.168.1.130
B. RPi4 running DeepStack with NCS2 (Raspbian Buster), IP 192.168.1.170

Active APIs:

/v1/vision/detection
---------------------------------------
v1/vision/addmodel
---------------------------------------
v1/vision/listmodels
---------------------------------------
v1/vision/deletemodel
---------------------------------------
v1/vision/setadminkey
---------------------------------------
v1/vision/setapikey
---------------------------------------
v1/vision/backup

Addons:
deepstack object custom integration &
deepstack face custom integration

configuration.yaml

  - platform: deepstack_object
    ip_address: 192.168.1.170
    port: 80
    api_key: xxxxxx-yyyyy-zzzzz-qqqq-oooooo
    save_file_folder: /config/www/deepstack/
    save_timestamped_file: True
    roi_x_min: 0.35
    roi_x_max: 0.8
    roi_y_min: 0.4
    roi_y_max: 0.8
    targets:
       - person
    source:
      - entity_id: camera.outside_ffmpeg

How To:
Manually trigger the scan service:

service: image_processing.scan
entity_id: image_processing.deepstack_object_outside_ffmpeg

Results:

  1. In HA
  2. In DeepStack Server

save_file_folder: /config/www/deepstack/ is empty

Automation

- alias: Image Motion ON Outside DeepStack
  initial_state: 'true'
  trigger:
    platform: state
    entity_id: binary_sensor.hikvision_outside_motion
  condition:
    - condition: state
      entity_id: binary_sensor.hikvision_bucatarie_motion
      state: 'on'
  action:
  - delay:
      seconds: 2
  - service: image_processing.scan
    entity_id: image_processing.deepstack_object_outside_ffmpeg

When the motion sensor is triggered, a new entry is created on the DeepStack server:
[GIN] 2020/08/10 - 17:58:29 | 200 | 70.48µs | 192.168.1.130 | POST /v1/vision/detection
but the HA status remains “unknown”.

Attributes

ROI person count: 0
ALL person count: 0
summary: {}
objects:
unit_of_measurement: targets
friendly_name: deepstack_object_outside_ffmpeg

Posting back here after some time exploring how to do the same thing without Docker, as I couldn’t wait for a Docker-free solution… I am curious as to what models DeepStack uses. It seems to be running on a Keras/TensorFlow framework?
I have implemented YOLOv4 on the OpenCV framework as a native HA component, and my own face recognition (OpenCV DNN face detection, dlib encoding, and an SVM-trained classifier for face identification) as an enhanced version of the HA dlib component; it is both faster and more accurate. I wonder how the performance compares to DeepStack’s models.

I am trying to do some of the teaching for faces. I run the service and I get the following.

2020-08-10 13:52:26 DEBUG (MainThread) [homeassistant.components.websocket_api.http.connection.140664459442544] Received {'type': 'call_service', 'domain': 'image_processing', 'service': 'deepstack_teach_face', 'service_data': {'name': 'batman', 'file_path': 'config/www/image.jpg'}, 'id': 56}
2020-08-10 13:52:26 DEBUG (MainThread) [homeassistant.core] Bus:Handling <Event call_service[L]: domain=image_processing, service=deepstack_teach_face, service_data=name=batman, file_path=config/www/image.jpg>
2020-08-10 13:52:26 DEBUG (MainThread) [homeassistant.components.websocket_api.http.connection.140664459442544] Sending {'id': 56, 'type': 'result', 'success': True, 'result': {'context': Context(user_id='09226ccb1dfd4fa98069462a1087b982', parent_id=None, id='cbfed4bd696f476f8625ca3f365a8ad1')}}

But I never see the log entry in the Docker environment, and when I check the faces list there is nothing in there. So I am not sure why I am getting True and Success while it never registers.
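
For reference, this is how I check the faces list, straight against the API (host and port assumed from my setup):

curl -X POST http://localhost:5000/v1/vision/face/list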

@CarpeDiemRo everything looks fine; try lowering the confidence in the config to 20. Also check there are people in the image. You can also use curl to check the results from the API, as described in the readme.
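
For example, something along these lines, with the image path adjusted to your setup (host and port taken from your config above):

curl -X POST -F image=@test.jpg 'http://192.168.1.170:80/v1/vision/detection'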

@rafale77 the model used depends on the version you are running; soon YOLOv5 will be the default.

Good to know.
Did you know about the controversy around yolov5?

I think I will start pushing code to the HA GitHub repo to propose my updated dlib integration and yolov4.
My experience with it has been pretty extraordinary, catching people occupying less than 5% of the frame, such as neighbors walking across a tiny corner of my video stream.

Another choice of interest: PP-YOLO


The only thing of real concern is any significant change to the license; other differences I doubt users would notice.

Let me see if I understand this right… must I have motion detection to trigger the image_processing routine?

No, you can trigger the service in multiple ways; motion is just an example.
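
For instance, a purely time-based trigger works; a minimal sketch reusing the entity name from earlier in the thread:

- alias: Periodic DeepStack scan
  trigger:
    - platform: time_pattern
      seconds: '/30'
  action:
    - service: image_processing.scan
      entity_id: image_processing.deepstack_object_outside_ffmpeg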