Updated whitelist_external_dirs to allowlist_external_dirs in line with the updated documentation. I don’t see that in the 0.113 release notes, but the documentation seems to have changed at the same time.
I have an object to detect, so it’s possible that files are only written when objects are detected. @robmarkcole can you confirm?
When we call image_processing.scan, what happens exactly? Does it just take a single snapshot from the camera and look at that one image? Or does it do multiple? Or does it use the camera snapshot url? How does that part work?
I asked ^ because if my motion detection goes off and deepstack only processes a single frame, it may not detect a person (intruder) is in the room as the camera may not see the full body yet. So if that’s the case, I’ll call image_processing.scan once every second until the motion detection clears. @robmarkcole is that what you’d recommend? Or is that already taken care of to a certain extent?
{"success":false,"error":"Custom vision endpoint not activated"}
I’ve tried to run a new container from the deepstack image with an additional “-e VISION-CUSTOM=True”, and it does show my model’s API (v1/vision/custom/openlock):
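For anyone trying the same setup, this is roughly the sequence I would expect — a sketch only: the image tag, port mapping, model directory, and test image below are assumptions from my own setup, not documented values:

```shell
# Start DeepStack with custom models enabled and a model directory mounted
# (the mount point and flag are assumptions -- check the DeepStack docs)
docker run -d \
  -e VISION-CUSTOM=True \
  -v /path/to/models:/modelstore/detection \
  -p 5000:5000 \
  deepquestai/deepstack

# POST a test image to the custom endpoint shown above
curl -X POST -F image=@test.jpg http://localhost:5000/v1/vision/custom/openlock
```

If the endpoint is active, the response should be JSON with a success field rather than the “Custom vision endpoint not activated” error.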
@MSmithHA calling scan will process the latest image only. If you want to process multiple consecutive images you will need to call scan multiple times as appropriate
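One way to implement the repeated scanning described above is an automation that fires on motion and keeps calling scan until the motion clears. A rough, untested sketch, assuming a hypothetical binary_sensor.hallway_motion and the image_processing entity mentioned later in this thread (repeat/while requires HA 0.113 or newer):

```yaml
- alias: Scan while motion is active (sketch)
  trigger:
    - platform: state
      entity_id: binary_sensor.hallway_motion
      to: 'on'
  action:
    - repeat:
        while:
          - condition: state
            entity_id: binary_sensor.hallway_motion
            state: 'on'
        sequence:
          - service: image_processing.scan
            entity_id: image_processing.kitchen_camera_1_person_detector
          - delay: '00:00:01'
```

This scans roughly once per second for as long as the motion sensor stays on.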
I’ve got everything working except for one thing… my automation is not firing. Based on the post regarding the new events, I think I should be able to add an entity_id to my event_data, as such:
- alias: Person Detected In The Kitchen While Armed
  condition:
    condition: or
    conditions:
      - condition: state
        entity_id: alarm_control_panel.home_alarm
        state: armed_away
      - condition: state
        entity_id: alarm_control_panel.home_alarm
        state: armed_home
  trigger:
    - platform: event
      event_type: image_processing.object_detected
      event_data:
        object: person
        entity_id: image_processing.kitchen_camera_1_person_detector
  action:
    - data:
        entity_id: media_player.security_announcements
        message: Person Detected In The Kitchen
      service: tts.google_say
    - data_template:
        entity_id: media_player.security_announcements
        volume_level: 0.35
      service: media_player.volume_set
When I look at image_processing.kitchen_camera_1_person_detector, it shows a person was detected, and my alarm is armed, however the automation shows that it’s never been fired. Does entity_id in event_data not work? Or is there a different way I should go about it? I also tried setting the entity_id in the trigger to be my camera (camera.kitchen_camera_1), but still no luck. I have multiple cameras and I want to know which one detected the person.
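If the entity_id key in event_data turns out to be the problem, another way to find out which camera detected the person is to match only on object in the trigger and read the firing entity from the event payload in the action. An untested sketch, using the event field names from the config above:

```yaml
action:
  - service: tts.google_say
    data_template:
      entity_id: media_player.security_announcements
      message: "Person detected by {{ trigger.event.data.entity_id }}"
```

This announces whichever image_processing entity fired the event, so one automation can cover multiple cameras.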
Sorry if this is a very noob question, but if I want to utilize both deepstack_object and deepstack_face, do I need to run two separate containers, each with a different port? I currently have deepstack_object detecting persons correctly on port 5000. I added a nearly identical new config entry under image_processing, changing it from deepstack_object to deepstack_face, and rebooted, but calling the scan service with the new image_processing.face_counter entity doesn’t work.
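For what it’s worth, if two instances are needed, one way to run object and face detection side by side might be two containers on different host ports, each with its own image_processing entry. A sketch only — the flags and port choices below are assumptions, not confirmed for your setup:

```shell
# Object detection on host port 5000, face detection on host port 5001
docker run -d -e VISION-DETECTION=True -p 5000:5000 deepquestai/deepstack
docker run -d -e VISION-FACE=True -p 5001:5000 deepquestai/deepstack
```

The deepstack_face entry in configuration.yaml would then point at port 5001 instead of 5000.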
When the motion sensor is triggered a new entry is created on the DeepStack server:
[GIN] 2020/08/10 - 17:58:29 | 200 | 70.48µs | 192.168.1.130 | POST /v1/vision/detection
but the HA status remains “unknown”.
Attributes
ROI person count: 0
ALL person count: 0
summary:
{}
objects:
unit_of_measurement: targets
friendly_name: deepstack_object_outside_ffmpeg
Posting back here after some time exploring how to do the same thing without Docker, as I couldn’t wait for a Docker-free solution… I am curious what models DeepStack uses. It seems to be running on a Keras/TensorFlow framework?
I have implemented YOLOv4 on the OpenCV framework as a native HA component, plus my own face recognition using OpenCV DNN face detection, dlib encoding, and an SVM-trained classifier for face identification, as an enhanced version of the HA dlib component (it is both faster and more accurate). I wonder how the performance compares to the DeepStack models.
But I never see the request logged in the Docker environment, and when I check the faces list there is nothing in there. So I am not sure why I am getting True and Success responses when nothing ever registers.
@CarpeDiemRo everything looks fine; try lowering the confidence in the config to 20. Also check there are people in the image. You can also use curl to check the results from the API, as described in the readme.
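The curl check mentioned above looks roughly like this — host, port, and image name here are placeholders for your own setup:

```shell
curl -X POST -F image=@people.jpg http://localhost:5000/v1/vision/detection
```

The response should be a JSON body with a success field and a list of predictions, which tells you whether DeepStack itself is detecting anything before HA gets involved.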
@rafale77 the model used depends on the version you are running; soon YOLOv5 will be the default.
Good to know.
Did you know about the controversy around YOLOv5?
I think I will start pushing code to the HA GitHub repo to propose my updated dlib integration and yolov4.
My experience with it has been pretty extraordinary, catching people occupying less than 5% of the frame with neighbors walking across a tiny corner of my video stream.