Thanks for the heads-up. I have Reolink cameras and I have been looking at creating privacy masks for those too.
I've spent several days trying to get this working… I'm thinking I must be missing something simple or just have something not set up right.
I have my HASSIO / Home Assistant setup as a Docker installation running on Ubuntu 18.04 on a Dell 3010 with a Core i5-3470, 16GB RAM, and 14TB of storage (256GB SSD boot drive).
I have DeepStack installed on the same Docker host as a separate container, started with the following command:
sudo docker run -e VISION-FACE=True -v /opt/docker_folders/deepstack:/datastore -p 8087:5000 --name deepstack deepquestai/deepstack
I'm using port 8087 rather than the default 5000 because it conflicts with another resource on my server, but I'm able to access the DeepStack page with no problem.
I created two test Python scripts that run the example code from DeepStack to exercise face/image registration and face recognition, and they work perfectly when run from the Ubuntu SSH shell.
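For reference, here is a minimal stdlib-only sketch of the kind of registration call those test scripts make. The endpoint and form-field names (`/v1/vision/face/register`, `image`, `userid`) follow the DeepStack docs; the host, port, image path, and person name are placeholders for my own setup:

```python
"""Sketch: register a face with DeepStack over its REST API, stdlib only."""
import io
import json
import urllib.request
import uuid


def build_multipart(fields, files):
    """Encode plain fields and file uploads as multipart/form-data."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write(f'--{boundary}\r\nContent-Disposition: form-data; '
                  f'name="{name}"\r\n\r\n{value}\r\n'.encode())
    for name, (filename, data) in files.items():
        buf.write(f'--{boundary}\r\nContent-Disposition: form-data; '
                  f'name="{name}"; filename="{filename}"\r\n'
                  f'Content-Type: application/octet-stream\r\n\r\n'.encode())
        buf.write(data + b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"


def register_face(base_url, name, image_bytes):
    """POST an image to DeepStack's face-register endpoint, return its JSON."""
    body, content_type = build_multipart({"userid": name},
                                         {"image": ("face.jpg", image_bytes)})
    req = urllib.request.Request(f"{base_url}/v1/vision/face/register",
                                 data=body,
                                 headers={"Content-Type": content_type})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Placeholder host/port and image path -- adjust for your own setup.
    with open("/config/www/np.jpg", "rb") as f:
        print(register_face("http://192.168.1.100:8087", "New Person", f.read()))
```

Useful as a sanity check that the container is reachable on the mapped port before involving HA at all.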
I've configured the DeepStack integration in my configuration.yaml file as follows:
# Image Processing / recognition
image_processing:
  - platform: deepstack_face
    ip_address: 192.168.1.xxx  # my server IP
    port: 8087
    timeout: 5
    detect_only: False
    scan_interval: 20
    source:
      - entity_id: camera.front_door_cam
        name: face_recog
My front_door_cam is a MotionEye-integrated camera on a Raspberry Pi that works fine and has integrated into HA with no issues, so I don't see that as the problem.
My problems are:
1 - The "image_processing.face_recog" entity never shows any status other than "unknown". I have tried triggering the "deepstack_teach_face" service from the Developer Tools service tab using the following, but it doesn't seem to do anything:
image_processing.deepstack_teach_face
{
  "name": "New Person",
  "file_path": "/config/www/np.jpg"
}
The np.jpg in this case was an image of my wife. Using this method did nothing (no returned state, nothing in the logs).
When I call the service from the Ubuntu shell using my Python script, which calls the REST API directly, it works just fine.
2 - The interface for "image_processing.face_recog" doesn't show anything meaningful, and I have no idea where to find any detailed/advanced settings or output. I see a lot of posts showing images with face-recognition boxes and counts, but mine has nothing like that.
I suspect that my HA is not properly communicating with the DeepStack instance running in the Docker container. Can anyone point out where my settings may have a problem?
I have the custom_components copied over for the deepstack_face integration, so HA is able to load it. I'm not using an API key for HA. Does the "localstorage" location need to be in a special area? Nothing I've read says that, and I have it in a custom location that I use for most of my Docker containers.
I spent some time looking through the "image_processing.py" file in HA and I get most of it, but I'm confused about where the "process_image" function ties into HA. Is it automatically called by "image_processing.scan"?
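For what it's worth, here is a toy sketch of the pattern as I understand it: the base image_processing entity fetches a frame from the linked camera on each scan (triggered by scan_interval or the image_processing.scan service) and hands it to the platform's process_image. The class and method names below are simplified stand-ins for illustration, not the real HA code:

```python
"""Toy sketch of HA's image_processing wiring (simplified stand-ins)."""


class ImageProcessingEntity:
    """Stand-in for the base entity each platform subclasses."""

    def __init__(self, camera):
        self.camera = camera

    def scan(self):
        # What the image_processing.scan service call (and the periodic
        # scan_interval timer) ultimately triggers: grab a frame, then
        # delegate to the platform's process_image.
        image = self.camera.get_image()
        return self.process_image(image)

    def process_image(self, image):
        raise NotImplementedError  # each platform implements this


class DemoFacePlatform(ImageProcessingEntity):
    """Pretend platform: 'detects' one face whenever it gets any bytes."""

    def process_image(self, image):
        return {"total_faces": 1 if image else 0}


class FakeCamera:
    def get_image(self):
        return b"jpeg-bytes"


print(DemoFacePlatform(FakeCamera()).scan())  # -> {'total_faces': 1}
```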
Thanks for any insight. I can add more info if needed. I've attached a snapshot of what my HA face_recog looks like. Nothing much there.
Hi everyone. Thank you for the great suggestions you are giving.
I'm running DeepStack (Docker version) on a MacBook. It's an acceptable solution (2-3 seconds average per call). I would try it on a Raspberry Pi (or Pine64) board, but I don't have an NCS2 or similar. Is it possible to deploy a Docker version anyway? I tried some ARM-tagged Docker images available on Docker Hub, but without success. Alternatively, is there a Frigate-style Docker image (similar functionality and the same HA integration) to run on those boards?
Thank you very much in advance for your support
Great, Robin… thank you.
Hi @robmarkcole,
I'm writing this post because I have an issue with the DeepStack integration but don't have any clue why.
I had this working fine for months and it suddenly stopped; I can't figure out when or why.
So I run your Coral REST server on a Pi, and firing:
curl -X POST -F image=@people_car.jpg 'http://XXX.XXX.X.X:5000/v1/vision/detection'
I get the expected output, so on the Pi side everything should be working fine.
Port 5000 is reachable from outside, so no connection issues.
Nevertheless, I get "unknown" as the sensor value for all my cameras.
This is my config:
image_processing:
  - platform: deepstack_object
    ip_address: XXX.XXX.X.X
    port: 5000
    scan_interval: 5
    # save_file_folder: /config/www/deepstack_person_images
    # save_timestamped_file: True
    targets:
      - person
      # - car
      # - truck
    confidence: 80
    source:
      - entity_id: camera.netatmo_corte
        name: google_coral_corte
      - entity_id: camera.camera_ingresso
        name: google_coral_ingresso
      - entity_id: camera.porta_ingresso_live
        name: google_porta_ingresso
      - entity_id: camera.camera_studio_rose
        name: google_camera_studio_rose
      - entity_id: camera.camera_studio_barba
        name: google_camera_studio_barba
and I can't find anything in the logs (maybe I don't know where to look!).
Any ideas?
EDIT: I also tried from another PC connected to the network, and I get the proper output from the REST server… it really is just HA that can't. Running
curl, etc...
from the HA machine, I get the proper result as well.
Hi all
I am personally getting multiple daily requests, in threads, private messages and GitHub issues, to help people debug connection/config/Docker issues. I do not have the bandwidth to help everyone, and it is becoming a real drain on my motivation. Please do not direct such requests to me personally; just post them here for everyone in the community to assist with. Before doing that, though, search the threads, as the same troubles and their solutions come up multiple times.
Many thanks
Hi, sorry, it wasn't my intention to bother or annoy.
You are doing a great job!
Have you tried splitting up your cameras so you have a single camera entity per image_processing entity?
I have tested DeepStack running on:
- RPi 3B+ running Buster Desktop with the NCS2: detections take roughly 1 second
- QNAP NAS TS-251+ (Celeron with 8GB of RAM, using the novax Docker image): detections were taking about 6-8 seconds
- i5 MacBook Air through Docker with 8GB RAM: detections were taking roughly 2.5 seconds
I have also tried the Google Coral with TensorFlow, which was returning predictions in under 1 second; however, the accuracy was nowhere near as good as DeepStack.
Thanks, Robin. I actually was doing manual scans but wasn't seeing the results.
I found the issue… as I suspected, it was a problem between the chair and keyboard.
It was basically the way I was testing.
Now that it's working… I've been able to fine-tune the facial detection and it works great. A face scan runs in about 2.5 seconds without any acceleration, just CPU. I plan to try the Google Coral USB at a later time.
It did take me a while to figure out how to break out/expose the returned properties of the detection object in HASS. I wound up using a Node-RED flow that splits the "matched_faces" object so I can get the keys, which are the names of the person(s) that were recognized. I wish there were an easier way to get the "who" from the recognition properties. I then re-combine the names into a single string (if more than one face was recognized) and fire all of this off in MQTT topics, so I can create multiple automations and events that subscribe to the topics without having to modify my Node-RED flow.
It seems like there should be an easier way to get that info out of the results. If anyone knows one, I'd be happy to hear about it.
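For anyone doing this in Python (e.g. a python_script or AppDaemon) instead of Node-RED, pulling the names out is just reading the keys of matched_faces. A minimal sketch; the sample payload below is made up to mirror the attribute's name-to-confidence shape:

```python
def recognised_names(attributes):
    """Join the matched face names (the dict keys) into one string."""
    matched = attributes.get("matched_faces") or {}
    return ", ".join(sorted(matched))


# Hypothetical entity attributes mirroring what deepstack_face exposes.
sample = {"total_faces": 2, "matched_faces": {"Alice": 92.1, "Bob": 87.4}}
print(recognised_names(sample))  # -> Alice, Bob
```

The joined string can then be published to a single MQTT topic, the same way the Node-RED flow does it.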
Robin, thanks for making this available in HASS. It works really well once you figure out all the moving pieces.
Hi,
Good suggestion; I just tried it, but no luck.
I tried downgrading to v3.0, but nothing changed.
That's about what I'm getting too on an Ubuntu i5 setup.
Maybe it would be helpful to share what we are all seeing for response times on specific hardware.
I'm running the following setup:
OS: Ubuntu 18.04
PC with Core i5-3470 processor
RAM: 16GB
Storage: 256GB SSD + 14TB spinning disk
Acceleration: none
HASS and DeepStack running in Docker along with about 7 other applications on a headless server (no GUI)
Camera(s): 1 MotionEye / Raspberry Pi camera at 640x480 resolution
Image processing (facial recognition) response time: ~2.5 seconds
I suspect the biggest factors in the response time are the image resolution and the presence or lack of graphics acceleration.
Thanks. I think I am firing off the event when total_faces is greater than zero.
But are there events to parse "who" was recognized and "how many" were recognized, if any? I'm using that info for specific logic in another part of HA to do certain things depending on "who" the facial recognition detected.
It's a WIP, so I'm just working on some concepts right now. I'm just happy this is finally working!
Interesting… how are you calling the image_processing.scan service?
Through an automation or through Developer Tools?
I have a Node-RED flow that calls the service when motion is detected by my MotionEye camera setup. The flow then waits for "total_faces" to be greater than zero (or a 15-second timeout).
@robmarkcole I know how that is. Been doing this type of stuff for 35 years… still fun.
Try not to let it burn you out too much. Take a break when needed. Your work is greatly appreciated!