Face and person detection with Deepstack - local and free!

I don’t think it is a port mapping issue, as I can run the image_processing.scan service and see a notification in the deepstack log.
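
For reference, a minimal scan call from Developer Tools looks something like this in YAML (the entity name here is just an example):

service: image_processing.scan
data:
  entity_id: image_processing.face_counter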

Now one question: when I am teaching it faces, I am sending it a path to the “image” to train on. Does that image need to be in a volume mounted to the deepstack docker container, or in a volume visible to hassio, which then sends the “file” to the deepstack process?

Oh man, docker can kick up so many issues like this… I recommend creating a folder in your HA config folder so it is definitely reachable.

@robmarkcole That is where I originally had the file “/config/images/face.jpg”, but no luck.

Here is what I originally tried, but it didn’t seem to fire the service as far as I can tell.

{
  "name": "paul",
  "file_path": "/config/images/paul.jpeg"
}

Might be similar to a docker issue I had, from the install docs:

The “mount path” has to be “/config”, so that Home Assistant will use it for the configs and logs. It is therefore recommended that the folder you choose should be named “config” or “homeassistant/config” to avoid confusion when referencing it within service calls.
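
For example, with a docker-compose setup that advice translates to something like this sketch (the host path is illustrative):

version: "3"
services:
  homeassistant:
    image: homeassistant/home-assistant
    volumes:
      # host folder named "config", mounted at /config inside the container
      - /home/user/homeassistant/config:/config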

Hi all,
just released v1.0, which includes a bugfix and improvements to the way bounding boxes are displayed. It is also now easier to display processed images using a local_file camera, as a processed image with a fixed filename can optionally be saved. I am keen for any feedback on the way the processed image is displayed, so please let me know your thoughts! Also, if anyone is using the processed images in notifications or automations, I am interested to hear about that.
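
For example, a local_file camera pointing at the saved processed image can be configured like this (the file path is illustrative and depends on your save_file_folder):

camera:
  - platform: local_file
    name: deepstack_processed
    file_path: /config/www/deepstack_latest.jpg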
Cheers

Below: improved bounding box and display of processed image with local_file camera

Hi all,

I’m trying to run Deepstack on a Raspberry Pi 4 running Raspbian Buster in docker using:

sudo docker run -v localstorage:/datastore -p 5000:5000 deepquestai/deepstack

However I keep getting the error:

standard_init_linux.go:211: exec user process caused "exec format error"

Does anyone know if the deepstack docker image is compatible with the ARM architecture of the Pi? Any ideas on how to get it running correctly would be greatly appreciated 🙂

Pi is not supported yet, but the deepstack team are working on it.

Oh, I was hoping to get this running on a Pi 4. Any idea of a timeline? Maybe I can run the docker image on my Synology in the meantime.

I don’t have a timeline, but was told ‘soon’. Yes, I think someone else on this thread tried it on a Synology.

Robin, thank you for producing this component. It works great and is a timely replacement for Tensorflow, which appears to be broken.

I tried pulling your component with HACS, but because both components are in the same repository, HACS pulls only the first one: deepstack_face.

Would be great to get both components via HACS.

The fix is easy: clone your repo, delete the second component in each copy, and re-label, so you end up with 2 repos: robmarkcole/deepstack_face and robmarkcole/deepstack_object.

Thank you

Hi @juan11perez there are now 2 repos to support HACS as you suggested, thanks!

@robmarkcole
That was fast! Thank you. They can both now be pulled/monitored via HACS.

Is there any way to pass along which object (camera) triggered the image_processing.person_detector event?

I would like to be able to customize what happens depending on which camera has been triggered.

The triggering entity_id is in the payload; see the docs.

Hi,

This is what I am getting:

{
  "topic": "image_processing.person_detector",
  "payload": "1",
  "data": {
    "entity_id": "image_processing.person_detector",
    "old_state": {
      "entity_id": "image_processing.person_detector",
      "state": "0",
      "attributes": {
        "target": "person",
        "target_confidences": [],
        "all_predictions": { "car": 1, "potted plant": 1 },
        "save_file_folder": "/config/node-red/data/img/",
        "friendly_name": "person_detector"
      },
      "last_changed": "2019-07-25T13:31:35.486406+00:00",
      "last_updated": "2019-07-25T13:42:07.840917+00:00",
      "context": {
        "id": "a5b8fc40650e461fa7c08353f985ea8d",
        "parent_id": null,
        "user_id": "5b66d5e0fa8246e0b535e28565e323bb"
      }
    },
    "new_state": {
      "entity_id": "image_processing.person_detector",
      "state": "1",
      "attributes": {
        "target": "person",
        "target_confidences": [98.2],
        "all_predictions": { "potted plant": 1, "person": 1, "car": 2 },
        "save_file_folder": "/config/node-red/data/img/",
        "friendly_name": "person_detector"
      },
      "last_changed": "2019-07-25T15:05:58.785992+00:00",
      "last_updated": "2019-07-25T15:05:58.785992+00:00",
      "context": {
        "id": "c85f8a72dfd749e89bda108b25bcf04c",
        "parent_id": null,
        "user_id": "5b66d5e0fa8246e0b535e28565e323bb"
      },
      "timeSinceChangedMs": 5
    }
  },
  "_msgid": "49627199.01d78"
}

To get the entity_id to work, I would have to have one “DeepStack People Scan” (image_processing.person_detector) per camera, which would not be as clean.

Unfortunately that’s the only option.
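
For reference, a per-camera setup would look something like this sketch, with one source entry (and therefore one entity) per camera; the camera entity_ids and names are illustrative:

image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 5000
    target: person
    source:
      - entity_id: camera.front_door
        name: person_detector_front
      - entity_id: camera.driveway
        name: person_detector_driveway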

Hi all,
if you own a Google Coral USB accelerator, you can now use it with the Deepstack Object Home Assistant integration. Using this hardware acceleration I can process images in under 200 milliseconds, which is a big improvement on the ~1 second this was taking on my Mac without the accelerator.
Cheers

Hey @robmarkcole, thanks for keeping this going 🙂 I have a question for you that is stumping me at the moment.
Previously I had motion detection from my NVR (software called Shinobi) send a POST to a specific URL that Node-Red is listening on (so I know which camera it is).
Then I grab a snapshot of the image and save it to /www/snapshot/{{camera}}/latest.jpg
Next I use image_processing on the jpg captured to look for someone.
If the value is > 0 send an alert.

This was to reduce false motion detection (trees/shadows etc).

Now with save_file I’m struggling to understand what actually triggers the image_processing.
Do I still need to force the image_processing by calling the service, or, if I have target: person enabled as well, is deepstack monitoring all the entities in the image_processing list (in the config) for a person to appear and be recognized?
I imagine not, given the processing load, but the docs seem to imply that (or am I misunderstanding?).

What I was hoping for is to use an all events node, but listen only to image_processing.file_saved, then parse the JSON payload to get the camera from entity_id and the file location from file, and then send an alert containing the image (with bounding boxes) and the camera location in the message.

You have 2 options.
1: display your latest.jpg as a file_camera and set the scan_interval on the image processing component to periodically scan the image
2: Use the folder_watcher and an automation to process the image when the file on disk is updated. I do this here

I suggest option 2 is more efficient.
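
A minimal sketch of option 2, assuming the snapshot folder from earlier in the thread (paths and entity names are illustrative):

# the watched folder must be allowed via whitelist_external_dirs
folder_watcher:
  - folder: /config/www/snapshot
    patterns:
      - "*.jpg"

automation:
  - alias: Scan snapshot on file update
    trigger:
      platform: event
      event_type: folder_watcher
      event_data:
        event_type: modified
    action:
      service: image_processing.scan
      entity_id: image_processing.person_detector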
Cheers

Ah, in your example you still use motion detection to call the image_processing service.
I wasn’t sure if a person being detected was triggering these events
OR
if I still had to detect motion, trigger image_processing, and then enact the downstream stuff.