Face and person detection with Deepstack - local and free!

Thanks so much for the detailed response. I’ll be sure to give all these options a try. Appreciate your help. I am almost positive it is the resolution problem, as I was trying to feed back the captured cropped face images that the face component saves each time a face is detected.

Cheers

I did change to one setup per camera because I needed to have unique ROIs and confidence levels for each camera location.

Thanks! Makes sense…

Hi @jamos, it was the file quality issue. Tried some better resolution images and I am not getting that error anymore. Thanks again

Thanks Finagain, that helped! I did get it working after simplifying all the names; another helpful person also responded directly.

In the interests of helping others in the future:

```yaml
# FILE CAMERA
camera:
  - platform: local_file
    name: file_front_door
    file_path: /config/www/cameras/motion-snapshot-frontdoor1.jpg

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.201
    port: 5000
    api_key: <snip>
    save_file_folder: /config/www/cameras
    save_timestamped_file: True
    targets:
      - target: person
    source:
      - entity_id: camera.file_front_door
        name: front_door_person_detector
```

Edited: updated to include an image of the object in the message.
For those that have this set up and use Node-RED and iOS notifications:

I created a templated iOS notification for deepstack events and thought others could use it.

The flow triggers on every deepstack.object_detected event. A switch node then determines which image-processing entity fired the event and sends it to a change node, which captures parts of the payload and sets them in the flow, along with some strings, to create a meaningful message on your phone. It requires no changes to the function or call-service nodes.

Here is the flow:

```json
[{"id":"1a5277d2.1344c8","type":"tab","label":"Deepstack IOS Notification","disabled":false,"info":""},{"id":"beab4dc5.0a174","type":"server-events","z":"1a5277d2.1344c8","name":"deep stack object detected","server":"2a12269e.94634a","event_type":"deepstack.object_detected","exposeToHomeAssistant":false,"haConfig":[{"property":"name","value":""},{"property":"icon","value":""}],"waitForRunning":true,"x":190,"y":200,"wires":[["1206ad98.130652"]]},{"id":"3cbd2461.d2750c","type":"function","z":"1a5277d2.1344c8","name":"Create IOS Alert","func":"var entity = flow.get('entity');\nvar object_type = flow.get('object_type');\nvar confidence = flow.get('confidence');\nvar msg1 = flow.get('msg1');\nvar msg2 = flow.get('msg2')\nvar msg3 = flow.get('msg3');\nvar msg4 = flow.get('msg4');\nvar ios_target = flow.get('ios_target')\nvar image_url = flow.get('image_url')\n\nvar final_msg = `${msg1}` +`${object_type}` + `${msg2}` + `${entity}` + `${msg3}` + `${confidence}` + `${msg4}`\nvar payload = {\"data\":\n{\n    \"message\": `${final_msg}`,\n    \"data\": {\n        \"attachment\": {\n            \"url\": `${image_url}`,\n            \"content-type\": \"jpg\",\n            \"hide-thumbnail\": false\n        }\n    }\n       \n    }\n}\n\nmsg.payload = payload\nmsg.topic = `${ios_target}`\n\nreturn msg","outputs":1,"noerr":0,"initialize":"","finalize":"","x":1080,"y":200,"wires":[["80a81c05.fbc12"]]},{"id":"80a81c05.fbc12","type":"api-call-service","z":"1a5277d2.1344c8","name":"Notify Target IOS with Message","server":"2a12269e.94634a","version":1,"debugenabled":false,"service_domain":"notify","service":"{{topic}}","entityId":"","data":"{}","dataType":"json","mergecontext":"","output_location":"","output_location_type":"none","mustacheAltTags":false,"x":1370,"y":200,"wires":[[]]},{"id":"593c5bca.f93ea4","type":"change","z":"1a5277d2.1344c8","name":"Set Variables","rules":[{"t":"set","p":"msg1","pt":"flow","to":"A ","tot":"str"},{"t":"set","p":"object_type","pt":"flow","to":"payload.event.object_type","tot":"msg"},{"t":"set","p":"msg2","pt":"flow","to":" was detected in the ","tot":"str"},{"t":"set","p":"entity","pt":"flow","to":"Driveway ","tot":"str"},{"t":"set","p":"msg3","pt":"flow","to":"with ","tot":"str"},{"t":"set","p":"confidence","pt":"flow","to":"payload.event.confidence","tot":"msg"},{"t":"set","p":"msg4","pt":"flow","to":" confidence","tot":"str"},{"t":"set","p":"ios_target","pt":"flow","to":"mobile_app_paul_phone","tot":"str"},{"t":"set","p":"image_url","pt":"flow","to":"https://external_HA-URL/local/snapshots/test/driveway_person_test_objects_latest.jpg","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":750,"y":200,"wires":[["3cbd2461.d2750c"]]},{"id":"1206ad98.130652","type":"switch","z":"1a5277d2.1344c8","name":"","property":"payload.entity_id","propertyType":"msg","rules":[{"t":"eq","v":"image_processing.driveway_person_test_objects","vt":"str"},{"t":"eq","v":"image_processing.driveway_objects","vt":"str"},{"t":"eq","v":"image_processing.garage_objects","vt":"str"}],"checkall":"true","repair":false,"outputs":3,"x":450,"y":200,"wires":[["593c5bca.f93ea4"],[],[]]},{"id":"4d810478.d9b9cc","type":"comment","z":"1a5277d2.1344c8","name":"change you your HA server","info":"This will listen to all deepstack.object_detected events","x":190,"y":160,"wires":[]},{"id":"899123b1.dc364","type":"comment","z":"1a5277d2.1344c8","name":"Edit your image processing objects","info":"create a switch for each image processing object","x":500,"y":160,"wires":[]},{"id":"216d717b.28a1ee","type":"comment","z":"1a5277d2.1344c8","name":"Edit your message and IOS target","info":"Change:\n  entity \"Driveway\"\n  msg1 \"A \"\n  msg2 \"was detected \"\n  msg3 \"in the \"\n  msg4 \"with \"\n  ios_target \"mobile_device_iphone\"\n  image_url your ha url to snapshot from deepstack latest\n  \nThis will send the following message to that phone.\n\n\"A Person was detected in the Driveway with 82.98% confidence\"","x":800,"y":160,"wires":[]},{"id":"ae43c5d7.393d38","type":"comment","z":"1a5277d2.1344c8","name":"No edits needed","info":"","x":1080,"y":160,"wires":[]},{"id":"997d46d2.a4fb58","type":"comment","z":"1a5277d2.1344c8","name":"No edits needed","info":"","x":1320,"y":160,"wires":[]},{"id":"2a12269e.94634a","type":"server","name":"Hingham Home","legacy":false,"addon":true,"rejectUnauthorizedCerts":true,"ha_boolean":"y|yes|true|on|home|open","connectionDelay":true,"cacheJson":true}]
```

X-Posted to Flow to send deep stack object notification and image to IOS notification


Hello All,

Please can someone assist with my config? I have deepstack running on a Synology NAS at LocalIP:85, which is accessible by Home Assistant OS as they are both on the same IP range.

The camera platform is taking the mjpeg_url from BlueIris. BlueIris is running on 192.168.0.4:81
and deepstack is running on 192.168.0.5:85.

In HA developer tools, the state for image_processing.deepstack_object_entrancesd shows unknown.

Below is my config

```yaml
camera:
  - platform: mjpeg
    name: entrancesd
    mjpeg_url: http://192.168.0.4:81/mjpg/entrancesd/video.mjpeg
    username: xxxx
    password: xxxx
    authentication: basic

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.0.5
    port: 85
    api_key: ''
    custom_model: mask
    confidence: 80
    save_file_folder: /config/www/cameras
    save_timestamped_file: True
    always_save_latest_jpg: False
    scale: 0.75
    roi_x_min: 0.35
    roi_x_max: 0.8
    roi_y_min: 0.4
    roi_y_max: 0.8
    targets:
      - target: person
        confidence: 60
      - target: vehicle
        confidence: 60
      - target: car
        confidence: 40
    source:
      - entity_id: camera.entrancesd
```

Please can someone advise where I’ve made the mistake?

Thanks

EDIT: OK, I think there is no issue in the config. I tested by calling the service and it detects a person. Now the question: how do I make image_processing.scan trigger automatically? Do I have to automate this with a trigger? What would the trigger be?

Can anyone share how they are using the events to count objects, e.g. how many cars are in the garage? The count would go up, go down, or stay the same based on changes each time the events run.

Thanks
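One possible approach, as a sketch rather than something from this thread: the deepstack_object entity’s state is the count of matched targets from its last scan, so a template sensor can expose that count directly. The entity id and sensor name below are assumptions; substitute your own.

```yaml
# Sketch: expose the last scan's car count as its own sensor.
# "image_processing.garage_objects" is a placeholder entity id.
template:
  - sensor:
      - name: "Garage car count"
        state: "{{ states('image_processing.garage_objects') | int(0) }}"
        state_class: measurement
```

A history or statistics integration on top of that sensor would then show the count going up and down over time.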

Yes.
You need a trigger, like a motion sensor.
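For example, an automation along these lines calls image_processing.scan whenever a motion sensor turns on. This is a sketch; the entity ids are placeholders for your own motion sensor and deepstack entity.

```yaml
# Sketch: run a deepstack scan whenever motion is detected.
# Both entity ids below are placeholders.
automation:
  - alias: Scan on entrance motion
    trigger:
      - platform: state
        entity_id: binary_sensor.entrance_motion
        to: "on"
    action:
      - service: image_processing.scan
        target:
          entity_id: image_processing.deepstack_object_entrancesd
    mode: single
```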


Thanks @TobiasGJ, I’ve set up a trigger with a motion sensor. When motion is detected, image processing happens and saves the image. I’ve added an automation to notify my Android with the latest jpg. The trigger type is state, the entity is image_processing.deepstack_object, and I’ve set From 0 To 1 for the state change, but this automation never triggers even when the state changes to 1. Would you know what I have done wrong here?

Hmm,
maybe you should post your automation here; I don’t really understand what you mean.
Motion sensors are typically on/off, not 1/0.
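Since the deepstack_object entity’s state is a number of detections rather than on/off, one hedged alternative to a fixed From 0 / To 1 state trigger is a numeric_state trigger that fires whenever the count rises above zero. The entity id, notify service, and image path below are assumptions.

```yaml
# Sketch: fire on any detection count above zero instead of an
# exact 0 -> 1 transition. All names below are placeholders.
automation:
  - alias: Notify on detection
    trigger:
      - platform: numeric_state
        entity_id: image_processing.deepstack_object_entrancesd
        above: 0
    action:
      - service: notify.mobile_app_your_phone
        data:
          message: "Person detected"
          data:
            image: /local/cameras/deepstack_object_entrancesd_latest.jpg
```

A fixed From 0 / To 1 trigger misses scans that jump straight to 2 or more detections, which may be why the original automation never fired.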

Hello all, I managed to get face detection installed and working using the default docker run command on GitHub, but I’m curious about adding object detection to the same container. Would I issue the command below to accomplish this?

```shell
docker run -e VISION-DETECTION=True -e VISION-FACE=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack
```

Hi Robin @robmarkcole,
I have been using the deepstack no-avx version along with your HA integration component for some time now, and recently migrated to the latest deepstack cpu version to improve the speed of the service, since the no-avx version was slow.

On this new cpu latest version from Docker I’m able to successfully run /vision/detection, however /vision/face (i.e. face detection) is not working.

Every time, the request gets stuck and there is no sign of it in the docker log either. After a long period the call ends with a timeout error; sometimes it won’t even error out and gives no response at all.

below are details

  1. Docker on a Synology NAS DS218+ with an Intel Celeron J3355 2 GHz processor and 10GB RAM.
  2. Docker version is 18.09.0-0513
  3. Object detection is working fine
  4. Key is activated
  5. On non-avx version (old setup) both object and face detection are working fine but are slow.

Would you be able to provide any pointers on how it can be resolved? I’ve also posted the same query on the deepstack community forum.

Thanks

I suggest making calls directly to the API via the command line and seeing what happens (doing a log trace on the container at the same time), e.g.

```shell
curl -X POST -F image=@test-image.jpg http://192.168.1.26:5000/v1/vision/detection
```

First prove that is working

Yes, /v1/vision/detection is working fine, which detects the objects. I have enabled both VISION-FACE and VISION-DETECTION with MODE=High in the docker container.

I tried a simple Python script to find out whether it’s related to Robin’s component or an underlying problem with the container… and it turned out to be a deepstack container problem.

```python
import requests

image_data = open("family.jpg", "rb").read()

response = requests.post(
    "http://192.168.1.165:32770/v1/vision/face",
    files={"image": image_data},
).json()

print(response)
```

It never returns anything, however long I wait… while if I use the no-avx image of deepstack, which is two years old without any recent optimization, it at least returns the predictions in around 20-30 secs.

I’m facing this issue with the cpu-latest version of the docker image, along with other cpu tags like 2021.01 and beta-8.

Sorry - I misread your post - does face work by itself (without “Object”)?

Just on the version you are using …

The latest I see up on Docker Hub is 2020.12?

No luck while running with only VISION-FACE.


And yes, sorry for the typo; below are the versions I’ve tried so far:

deepstack:cpu-x6-beta - face detection not working
deepstack:cpu-2020.12 - face detection not working
deepstack:noavx-3.4 - working with both face and object detection
deepstack:latest - face detection not working

Just a couple quick queries on face training.

Firstly, if I use the tiny face crops created in “save_faces_folder”, deepstack returns a 400 error stating that no face was detected. Can I ask what the recommended approach is for training? I’d like to be able to use the images from my camera to train deepstack over time as they come in, and was thinking I could use the faces folder for this. It seems I need to use the full image, but I’m not sure how that would work if there were multiple faces in the frame.

Secondly, is there any clarity on registering multiple face images? Rob, I note you replied to the following post on the deepstack forums, but there hasn’t been a reply.

From what I can gather, the only way to register multiple faces is to register them all together, which is not ideal if you want to train deepstack over time as more images come through. You’d have to keep a separate database or directory of faces, and when you add a new one in, upload the full list again.
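For reference, registering a person directly against DeepStack’s /v1/vision/face/register endpoint looks roughly like the sketch below, matching the workaround described above (keep a directory of images per person and re-upload the full list when a new one arrives). The host, port, and file names are assumptions.

```python
def image_fields(image_paths):
    """Give each image file its own multipart field name.

    The register endpoint accepts several image fields plus a
    userid in a single request.
    """
    return {f"image{i}": path for i, path in enumerate(image_paths)}


def register_face(name, image_paths, host="http://192.168.1.165:32770"):
    """Register all known images for one person under a single userid."""
    import requests  # imported here; only this call needs the network

    files = {field: open(path, "rb")
             for field, path in image_fields(image_paths).items()}
    return requests.post(
        f"{host}/v1/vision/face/register",
        files=files,
        data={"userid": name},
    ).json()
```

Called as `register_face("Adele", ["adele1.jpg", "adele2.jpg"])`, this uploads the whole set in one request, which is the “register them all together” behaviour described above.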

Thanks

Curious about this: I did something similar but get one message for each object detected. Is there any way to change this? A few too many notifications for me, lol.

Hey, could anybody help me out please?
I have installed Deepstack face detection through Docker running on Debian 10.
I am having issues running the teach face service. I have followed the example in the documentation on how to call the service, changing the name of the file of course, but nothing happens. I either get no error in the logs, or I have now started receiving the error below.

Has anyone had this error, or can anybody please help me?

Thanks

```
2021-02-09 16:56:22 ERROR (MainThread) [homeassistant.core] Error executing service: <ServiceCall image_processing.deepstack_teach_face (c:d2d780c173291febacf655ba4374cb51): name=Adele, file_path=/config/www/jack.jpg>
```

```json
{
  "name": "Adele",
  "file_path": "/config/www/adele.jpeg"
}
```