Face and person detection with Deepstack - local and free!

Maybe a dumb question.

If I have multiple cameras, do I need to have multiple folders for files and faces?

platform: deepstack_face
ip_address: 192.168.1.x
port: 80
timeout: 5
detect_only: False
save_file_folder: /config/deepstack_face_snapshots  <----
save_timestamped_file: True
save_faces: True
save_faces_folder: /config/deepstack_face_faces  <----
show_boxes: True
source:
  - entity_id: camera.theus
    name: face counter theus

platform: deepstack_face
ip_address: 192.168.1.x
port: 80
timeout: 5
detect_only: False
save_file_folder: /config/deepstack_face_snapshots  <----
save_timestamped_file: True
save_faces: True
save_faces_folder: /config/deepstack_face_faces  <----
show_boxes: True
source:
  - entity_id: camera.entre
    name: face counter entre

Same folder is OK. The files will be labeled accordingly.
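
For instance, the two entries above can share folders like this (a trimmed sketch of the configuration from the question, keeping only the keys relevant to the folders; both entities write into the same directories and the files are labeled per entity):

```yaml
image_processing:
  - platform: deepstack_face
    ip_address: 192.168.1.x
    port: 80
    save_file_folder: /config/deepstack_face_snapshots
    save_faces_folder: /config/deepstack_face_faces
    source:
      - entity_id: camera.theus
        name: face counter theus
  - platform: deepstack_face
    ip_address: 192.168.1.x
    port: 80
    save_file_folder: /config/deepstack_face_snapshots   # same folder as above is fine
    save_faces_folder: /config/deepstack_face_faces      # same folder as above is fine
    source:
      - entity_id: camera.entre
        name: face counter entre
```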

Can anyone help with: Error on receive image from entity: Camera not found

I suspect this is a config issue of some sort but damned if I can figure out what I am missing or have wrong.

Configuration.yaml sections:

# Whitelisting for ImageProcessing
homeassistant:
  whitelist_external_dirs:
  - /config/www/cameras
#### FILE CAMERA
  - platform: local_file
    name: camera.file_front_door_TEST
    file_path: /config/www/cameras/motion-snapshot-frontdoor1a.jpg  

(this is definitely accessible via http://hassio.lan:8123/local/cameras/motion-snapshot-frontdoor1a.jpg)

#### IMAGE PROCESSING
image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.201
    port: 5000
    api_key: <secret>
    save_file_folder: /config/www/cameras
    save_timestamped_file: True
    targets:
      - target: person
    source:
      - entity_id: camera.file_front_door_TEST
        name: front_door_person_detector

Developer Tools - Services

Resulting Error:

Logger: homeassistant.components.image_processing
Source: components/image_processing/__init__.py:128
Integration: Image Processing (documentation, issues)
First occurred: 10:02:13 (1 occurrences)
Last logged: 10:02:13

Error on receive image from entity: Camera not found

Anyone managed to get Deepstack working on the pi with NCS? My install is asking for an activation code which I'm not getting on my windows install! Having a right nightmare with it!

Are they going down a docker route for everything?

Thanks for any info!

WD

Hi All
just published v4.1 which adds a couple of tasty new features:

@kernehed yes add an entry per camera

@wonkydog for questions about deepstack itself (rather than the integration) please try the deepstack forum

I seem to be looking in the wrong places for info, but this thread looks right. Am I looking to load a custom model in .pb format? Following along with https://docs.deepstack.cc/custom-models/deployment/, but I don't see any real mention of what kind of formats are expected. I've dropped the model into the mount, named it appropriately, but I don't really get any logging or similar to say it's loaded OK or indeed working.

A call to the EP shows an error which makes me think I've got things in the wrong format:

  File "/Library/Python/2.7/site-packages/requests-2.22.0-py2.7.egg/requests/models.py", line 897, in json
    return complexjson.loads(self.text, **kwargs)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 367, in decode
    raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 5 - line 1 column 19 (char 4 - 18)

I remember in earlier versions of DeepStack, you could upload models via URI - I guess that's not needed anymore?

@robmarkcole First off, THANK YOU! Just the object detection alone drastically brings down my false alerts! With most cameras, you can specify different zones. I like to know if there is a pedestrian in the street vs someone actually entering my driveway (2 zones). To be able to utilize 1 camera multiple times with different zones, I added a zone attribute to your code. I'm new to GitHub so not quite comfortable with pull requests but more than happy to share. I'm new to python so maybe there is a better way to do it but it works for me. For the image_processing entities, I just appended the zone to the name. For example "image_processing.deepstack_object_driveway_line" and "image_processing.deepstack_object_driveway_street". The latest images that are produced add the zone as well.

With everything I have stolen from people like you, I figured I should start sharing as well. Thanks again!

Here are my entities with the same camera but different zones:

image_processing:
  - platform: deepstack_object
    ip_address: 172.17.20.29
    port: 5000
    api_key: !secret deepstackapi
    save_file_folder: /config/www/images/cameras/driveway/
    save_timestamped_file: True
    # roi_x_min: 0.35
    roi_x_max: 1
    #roi_y_min: 0.4
    roi_y_max: 1
    show_boxes: true
    zone: "line"
    targets:
      - target: "person"
        confidence: 80
      - target: "dog"
        confidence: 30
      - target: "bicycle"
    source:
      - entity_id: camera.driveway
  - platform: deepstack_object
    ip_address: 172.17.20.29
    port: 5000
    api_key: !secret deepstackapi
    save_file_folder: /config/www/images/cameras/driveway/
    save_timestamped_file: True
    # roi_x_min: 0.35
    roi_x_max: 1
    #roi_y_min: 0.4
    roi_y_max: 1
    show_boxes: true
    zone: "street"
    targets:
      - target: "person"
        confidence: 80
      - target: "dog"
        confidence: 30
      - target: "bicycle"
    source:
      - entity_id: camera.driveway

Here is an example of the zones.

@markss It looks like your source entity_id is pointed to a file rather than a camera. So put whatever your camera entity is there (mine is camera.driveway). That will create the image_processing entity based on the camera name (mine is image_processing.deepstack_object_driveway). It took me a while to figure out how simple it was… Your confusion probably comes from the example that shows camera.local_file, but it is an actual camera. It uses that camera to take the image to scan. Remove the name: part from the source. Hope that helps!

Hi All, I have been trying to get the facial recognition part of this awesome addon working but have hit some troubles which I was hoping someone else here might have come across before. I am able to run the deepstack_face platform and the deepstack_object with no problems. When I run image_processing.scan it will find a face in the image and create the snapshot. But when I try to use the teach_face service I get the error below. I have the Object and Face deepstack settings running in the one container, if that may be a problem, but through my searching I thought this was ok. Any ideas of what I could try would be greatly appreciated. My config and errors are below.

Source: custom_components/deepstack_face/image_processing.py:259
Integration: Home Assistant WebSocket API (documentation, issues)
First occurred: 17:28:03 (1 occurrences)
Last logged: 17:28:03

[140325678966240] Error from Deepstack request, status code: 400
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/websocket_api/commands.py", line 135, in handle_call_service
    await hass.services.async_call(
  File "/usr/src/homeassistant/homeassistant/core.py", line 1445, in async_call
    task.result()
  File "/usr/src/homeassistant/homeassistant/core.py", line 1484, in _execute_service
    await self._hass.async_add_executor_job(handler.job.target, service_call)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_face/image_processing.py", line 161, in service_handle
    classifier.teach(name, file_path)
  File "/config/custom_components/deepstack_face/image_processing.py", line 259, in teach
    self._dsface.register(name, image)
  File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 277, in register
    response = process_image(
  File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 130, in process_image
    raise DeepstackException(
deepstack.core.DeepstackException: Error from Deepstack request, status code: 400
  - platform: deepstack_object
    ip_address: xxx.xxx.x.xx
    port: 83
#    api_key: mysecretkey
#    scan_interval: 10000
#    confidence: 60
    save_file_folder: /config/www/snapshots/
    save_timestamped_file: True
    timeout: 4
#    roi_x_min: 0.35
#    roi_x_max: 0.8
#    roi_y_min: 0.4
#    roi_y_max: 0.8
    targets:
      - target: person
        confidence: 80
#      - dog
#      - car
      - target: cat
        confidence: 60
    source:
      - entity_id: camera.right_side_camera
        name: right_side_object_detection

  - platform: deepstack_face
    ip_address: xxx.xxx.x.xx
    port: 83
#    api_key: mysecretkey
#    timeout: 5
    detect_only: False
    save_file_folder: /config/www/snapshots/
    save_timestamped_file: True
    save_faces: True
    save_faces_folder: /config/www/faces/
    show_boxes: True
    source:
#      - entity_id: camera.dahua_mediaprofile_channel1_mainstream
      - entity_id: camera.doorbell_camera_hd
        name: doorbell_face

Service data for image_processing.deepstack_teach_face

{
    "name": "name",
    "file_path": "/config/www/faces/name-face (2).jpg"
}

Hi @vijay, some things to try:

1. Aspect Ratio of Image
I see at the top of the log it says Error from Deepstack request, status code: 400. I got this error for teaching faces when it didn't like the resolution/aspect ratio of the image. I took the pictures on an iPhone. I had to change the camera picture ratio on the iPhone from 4:3 to square and then it worked ok.

2. Deepstack VISION-FACE Parameter
I would also check you are running deepstack with the -e VISION-FACE=True parameter

docker run -e VISION-FACE=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack

Check the deepstack docs for your actual command in the Face recognition section

3. Remove Spaces from file name
Not sure if this would make any difference but you could try re-naming the file so it doesnā€™t contain any spaces. To something like:
name-face-2.jpg

4. Further troubleshooting
To rule out your Home Assistant config you can teach a face using python. Create this file in the same directory as your test image on a machine that you can run python on. Edit your deepstack image filename (name-face-2.jpg), IP Address/Port (192.168.1.xxx) and userid (Vijay)

dsregfacetest.py

import requests

# read the test image as raw bytes
user_image0 = open("name-face-2.jpg", "rb").read()

# register the face with deepstack's face recognition endpoint
response = requests.post(
    "http://192.168.1.xxx:5000/v1/vision/face/register",
    files={"image0": user_image0},
    data={"userid": "Vijay"},
).json()

print(response)

then run dsregfacetest.py

Then check the deepstack terminal console and see if you still get error 400. If you do get error 400 its nothing to do with your homeassistant config. I would then test using the test images from the deepstack site.

I created a custom component that can be used alongside the deepstack components.
This component uses the images generated by the deepstack component.

It offers the following functionality:

  • Create a camera entity from images. It can be used for several images in a directory, to show a slideshow/timelapse from these images without generating an intermediate file.
    The original Home Assistant local file camera only allows displaying a single image. This camera shows an image list as a slideshow/timelapse. The delay time between the images can be set in the configuration or with a service. For more info, see the repository.

  • Create an animated GIF or MP4

  • File handling services for snapshots (JPG, PNG, GIF and MP4): delete and move

Examples are provided showing how to easily create a smart surveillance system within Home Assistant with a few simple scripts.

That's really nice! Have to check it out!
Would be great if it works with the Zoneminder API…

Just saw this in the description for HASS-Deepstack-object.

Home Assistant setup
Place the custom_components folder in your configuration directory (or add its contents to an existing custom_components folder). Then configure object detection. Important: It is necessary to configure only a single camera per deepstack_object entity. If you want to process multiple cameras, you will therefore need multiple deepstack_object image_processing entities.

I'm using this in my configuration.yaml and it works… Should I change it to one setup per camera?

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.209
    ...
    ...
    show_boxes: True
    targets:
      - target: person
      - target: vehicle
      - target: car
      - target: bus
      - target: truck
      - target: cat
      - target: mouse
      - target: bird
    source:
      - entity_id: camera.door
        name: deepstack_object_door
      - entity_id: camera.mot_parkeringen
        name: deepstack_object_mot_parkeringen
      - entity_id: camera.tomten
        name: deepstack_object_tomten
      - entity_id: camera.pathway
        name: deepstack_object_pathway
      - entity_id: camera.tapo
        name: deepstack_object_tapo

Thanks so much for a detailed response. I'll be sure to give all these options a try. Appreciate your help. I am almost positive it will be the resolution problem, as I was trying to feed back the captured cropped face images that the face component saves each time a face is detected.

Cheers

I did change to one setup per camera because I needed unique ROIs and confidence levels for each camera location.
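
As a sketch of what that looks like (values are illustrative, reusing two of the camera names from the question above; each entry gets its own ROI and confidence):

```yaml
image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.209
    port: 5000
    roi_y_min: 0.4            # ROI tuned for this camera location
    targets:
      - target: person
        confidence: 80
    source:
      - entity_id: camera.door
        name: deepstack_object_door
  - platform: deepstack_object
    ip_address: 192.168.1.209
    port: 5000
    roi_y_min: 0.2            # different ROI for this location
    targets:
      - target: person
        confidence: 60        # different confidence threshold
    source:
      - entity_id: camera.pathway
        name: deepstack_object_pathway
```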

Thanks! Makes sense…

Hi @jamos, it was the file quality issue. Tried some better resolution images and I am not getting that error anymore. Thanks again

Thanks Finagain, that helped! I did get it working after simplifying all the names; another helpful person also responded directly.

In the interests of helping others in the future:

#### FILE CAMERA
  - platform: local_file
    name: file_front_door
    file_path: /config/www/cameras/motion-snapshot-frontdoor1.jpg

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.201
    port: 5000
    api_key: <snip>
    save_file_folder: /config/www/cameras
    save_timestamped_file: True
    targets:
      - target: person
    source:
      - entity_id: camera.file_front_door
        name: front_door_person_detector

edited - update to include an image of the object in the message
For those that have this set up and use node red and ios notifications.

I created a templated IOS notification for deepstack events and thought others could use.

The events all trigger from a deepstack.object_detected event. A switch node then directs which object triggered the event and sends it to a change node, which captures some of the payload and sets it in the flow along with some strings to create a meaningful message on your phone. Requires no changes to the function or call service nodes.

Here is the flow:

[{"id":"1a5277d2.1344c8","type":"tab","label":"Deepstack IOS Notification","disabled":false,"info":""},{"id":"beab4dc5.0a174","type":"server-events","z":"1a5277d2.1344c8","name":"deep stack object detected","server":"2a12269e.94634a","event_type":"deepstack.object_detected","exposeToHomeAssistant":false,"haConfig":[{"property":"name","value":""},{"property":"icon","value":""}],"waitForRunning":true,"x":190,"y":200,"wires":[["1206ad98.130652"]]},{"id":"3cbd2461.d2750c","type":"function","z":"1a5277d2.1344c8","name":"Create IOS Alert","func":"var entity = flow.get('entity');\nvar object_type = flow.get('object_type');\nvar confidence = flow.get('confidence');\nvar msg1 = flow.get('msg1');\nvar msg2 = flow.get('msg2')\nvar msg3 = flow.get('msg3');\nvar msg4 = flow.get('msg4');\nvar ios_target = flow.get('ios_target')\nvar image_url = flow.get('image_url')\n\nvar final_msg = `${msg1}` +`${object_type}` + `${msg2}` + `${entity}` + `${msg3}` + `${confidence}` + `${msg4}`\nvar payload = {\"data\":\n{\n    \"message\": `${final_msg}`,\n    \"data\": {\n        \"attachment\": {\n            \"url\": `${image_url}`,\n            \"content-type\": \"jpg\",\n            \"hide-thumbnail\": false\n        }\n    }\n       \n    }\n}\n\nmsg.payload = payload\nmsg.topic = `${ios_target}`\n\nreturn msg","outputs":1,"noerr":0,"initialize":"","finalize":"","x":1080,"y":200,"wires":[["80a81c05.fbc12"]]},{"id":"80a81c05.fbc12","type":"api-call-service","z":"1a5277d2.1344c8","name":"Notify Target IOS with Message","server":"2a12269e.94634a","version":1,"debugenabled":false,"service_domain":"notify","service":"{{topic}}","entityId":"","data":"{}","dataType":"json","mergecontext":"","output_location":"","output_location_type":"none","mustacheAltTags":false,"x":1370,"y":200,"wires":[[]]},{"id":"593c5bca.f93ea4","type":"change","z":"1a5277d2.1344c8","name":"Set Variables","rules":[{"t":"set","p":"msg1","pt":"flow","to":"A 
","tot":"str"},{"t":"set","p":"object_type","pt":"flow","to":"payload.event.object_type","tot":"msg"},{"t":"set","p":"msg2","pt":"flow","to":" was detected in the ","tot":"str"},{"t":"set","p":"entity","pt":"flow","to":"Driveway ","tot":"str"},{"t":"set","p":"msg3","pt":"flow","to":"with ","tot":"str"},{"t":"set","p":"confidence","pt":"flow","to":"payload.event.confidence","tot":"msg"},{"t":"set","p":"msg4","pt":"flow","to":" confidence","tot":"str"},{"t":"set","p":"ios_target","pt":"flow","to":"mobile_app_paul_phone","tot":"str"},{"t":"set","p":"image_url","pt":"flow","to":"https://external_HA-URL/local/snapshots/test/driveway_person_test_objects_latest.jpg","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":750,"y":200,"wires":[["3cbd2461.d2750c"]]},{"id":"1206ad98.130652","type":"switch","z":"1a5277d2.1344c8","name":"","property":"payload.entity_id","propertyType":"msg","rules":[{"t":"eq","v":"image_processing.driveway_person_test_objects","vt":"str"},{"t":"eq","v":"image_processing.driveway_objects","vt":"str"},{"t":"eq","v":"image_processing.garage_objects","vt":"str"}],"checkall":"true","repair":false,"outputs":3,"x":450,"y":200,"wires":[["593c5bca.f93ea4"],[],[]]},{"id":"4d810478.d9b9cc","type":"comment","z":"1a5277d2.1344c8","name":"change you your HA server","info":"This will listen to all deepstack.object_detected events","x":190,"y":160,"wires":[]},{"id":"899123b1.dc364","type":"comment","z":"1a5277d2.1344c8","name":"Edit your image processing objects","info":"create a switch for each image processing object","x":500,"y":160,"wires":[]},{"id":"216d717b.28a1ee","type":"comment","z":"1a5277d2.1344c8","name":"Edit your message and IOS target","info":"Change:\n  entity \"Driveway\"\n  msg1 \"A \"\n  msg2 \"was detected \"\n  msg3 \"in the \"\n  msg4 \"with \"\n  ios_target \"mobile_device_iphone\"\n  image_url your ha url to snapshot from deepstack latest\n  \nThis will send the following message to that phone.\n\n\"A Person was 
detected in the Driveway with 82.98% confidence\"","x":800,"y":160,"wires":[]},{"id":"ae43c5d7.393d38","type":"comment","z":"1a5277d2.1344c8","name":"No edits needed","info":"","x":1080,"y":160,"wires":[]},{"id":"997d46d2.a4fb58","type":"comment","z":"1a5277d2.1344c8","name":"No edits needed","info":"","x":1320,"y":160,"wires":[]},{"id":"2a12269e.94634a","type":"server","name":"Hingham Home","legacy":false,"addon":true,"rejectUnauthorizedCerts":true,"ha_boolean":"y|yes|true|on|home|open","connectionDelay":true,"cacheJson":true}]

X-Posted to Flow to send deep stack object notification and image to IOS notification

Hello All,

Please can someone assist with my config? I have deepstack running on a Synology NAS on LocalIP:85, which is accessible by Home Assistant OS as they are both on the same IP range.

The camera platform is taking an mjpeg_url from BlueIris. BlueIris is running on 192.168.0.4:81
deepstack is running on 192.168.0.5:85

In HA developer tools, the state for image_processing.deepstack_object_entrancesd shows unknown.

Below is my config

camera: 
   - platform: mjpeg
     name: entrancesd
     mjpeg_url: http://192.168.0.4:81/mjpg/entrancesd/video.mjpeg
     username: xxxx
     password: xxxx
     authentication: basic

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.0.5
    port: 85
    api_key: ''
    custom_model: mask
    confidence: 80
    save_file_folder: /config/www/cameras
    save_timestamped_file: True
    always_save_latest_jpg: False
    scale: 0.75
    roi_x_min: 0.35
    roi_x_max: 0.8
    roi_y_min: 0.4
    roi_y_max: 0.8
    targets:
      - target: person
        confidence: 60
      - target: vehicle
        confidence: 60
      - target: car
        confidence: 40
    source:
      - entity_id: camera.entrancesd

Please can someone advise where I've made the mistake?

Thanks

EDIT: Ok, I think there is no issue in the config. I tested by calling the service and it detects a person. Now the question: how do I make image_processing.scan trigger automatically? Do I have to automate this with a trigger? What would be the trigger?
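
Something like this, maybe (a sketch only; binary_sensor.entrance_motion is a hypothetical motion sensor entity, and a time_pattern trigger would work just as well if you want periodic scans):

```yaml
automation:
  - alias: Scan entrance camera on motion
    trigger:
      - platform: state
        entity_id: binary_sensor.entrance_motion   # hypothetical motion sensor
        to: "on"
    action:
      - service: image_processing.scan
        target:
          entity_id: image_processing.deepstack_object_entrancesd
```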