Ah… that sounds like a good explanation for what I see.
I had a look at all the change logs of deepstack_objects (but not face!) and didn’t find it: I had to dig more :-S
May I ask why scan_interval was removed?
I found it quite useful, and there are a couple of use cases which I can’t imagine how to handle otherwise, for example:
I use deepstack_objects + the Coral rest server as presence detection when not at home --> it is a reliable alarm system
I trigger “light on” in a room by PIR, but I trigger “light off” using cameras + deepstack to make sure people are no longer in the room --> this is much more robust than PIR, especially if someone is sitting on the sofa
Would you consider re-adding it, or is there a good reason to keep it out of the integration?
Obviously, if there is an alternative way to fire the service every N seconds that would be fine, but I am not aware of one.
A default scan_interval is bad design for image processing, particularly if it is calling a cloud service. Ideally we would just scan when a new frame is available, but that is not implemented in HA yet. Anyway, you can use an automation to do a periodic scan, so that way it is at least opt-in.
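For anyone looking for a concrete example, a time_pattern trigger can replicate the old scan_interval behaviour as an opt-in automation. This is just a sketch; the image_processing entity name is an assumption and needs to match your own setup:

```yaml
# Sketch: periodic object detection as an opt-in automation.
# The entity_id below is a placeholder; use your own deepstack entity.
automation:
  - alias: periodic deepstack scan
    trigger:
      - platform: time_pattern
        seconds: "/30"   # fire every 30 seconds
    action:
      - service: image_processing.scan
        entity_id: image_processing.deepstack_object_living_room
```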
I was completely unaware of time pattern! thanks for the hint!
Robin, I agree it is bad design, but in my case, being a local service (Coral), it worked well!
Anyway, I will use the automation as long as HA does not have a smarter implementation.
@mLaupet regarding the motion trigger, what you are saying is correct, but given that I am using it for both light and alarm control, it would be an unnecessary duplication.
I am coming to the same conclusion as well. I am actually wondering if it should be a switch you could flip, so that the processing would be done at high frequency while the switch is on, and stopped when the switch is off. We could then use events to flip the switch and automatically turn it off after some time.
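The switch idea above could be sketched with an input_boolean gating a time_pattern scan, plus a second automation that turns it off again. All entity names here are assumptions, and the intervals are arbitrary:

```yaml
# Sketch: high-frequency scanning only while a helper switch is on.
# input_boolean and image_processing entity names are placeholders.
input_boolean:
  high_freq_scan:
    name: High-frequency camera scan

automation:
  - alias: scan fast while switch is on
    trigger:
      - platform: time_pattern
        seconds: "/5"   # every 5 seconds
    condition:
      - condition: state
        entity_id: input_boolean.high_freq_scan
        state: "on"
    action:
      - service: image_processing.scan
        entity_id: image_processing.deepstack_object_living_room

  - alias: auto-off high-frequency scan
    trigger:
      - platform: state
        entity_id: input_boolean.high_freq_scan
        to: "on"
        for: "00:02:00"   # stop after 2 minutes
    action:
      - service: input_boolean.turn_off
        entity_id: input_boolean.high_freq_scan
```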
I use motionEye to take snapshots when motion is detected. The snapshots are saved to a “/config/snapshots/camera/in” folder that is monitored by the folder_watcher integration. When folder_watcher detects a change in the folder (a new snapshot), it fires an event. My Node-RED flow detects this event and calls the processing service.
No reason why you could not set an interval snapshot instead of motion (as motion will be unreliable for your use case). There will be zero overhead until the snapshot is taken (I moved that workload off to a Raspberry Pi with a camera module attached). When the snapshot arrives, it is detected by folder_watcher, the processing service is called, and you get a result that you can use in your workflow.
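The folder_watcher part of the flow described above can be done entirely in HA configuration; the path matches the post, but the entity name is an assumption:

```yaml
# Sketch: watch the snapshot folder and scan on each new file.
# Folder path is from the post above; the entity_id is a placeholder.
folder_watcher:
  - folder: /config/snapshots/camera/in
    patterns:
      - "*.jpg"

automation:
  - alias: scan on new snapshot
    trigger:
      - platform: event
        event_type: folder_watcher
        event_data:
          event_type: created   # fire only when a file is created
    action:
      - service: image_processing.scan
        entity_id: image_processing.deepstack_object_front
```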
Triggering either from motion detection on the camera itself or from another sensor is not really a problem; it is acting on a single frame that may be problematic in my case, because one frame may not be sufficient for a number of reasons. That’s why I am thinking of using a high-frequency scan mode for a short period of time instead.
Object detection occurs with the hass deepstack addon on the high quality stream.
In my case motion is detected by other sensors but Shinobi can do motion detection too.
I’m looking for assistance on how to loop through snapshots and send only a single alert when a person is detected. Ideally, I would like a short GIF animation or video where the snapshots are all merged, but one step at a time! The reason is to further harden the reliability that a person was detected within the motion trigger. As it stands, there are a number of reasons a single snapshot might not detect the person.
So it would look like this:
Step 1: Blue Iris sends a motion trigger
Step 2: image processing is scanned (looped x times)
Step 3: object detection is performed
Step 4: snapshots are merged into a single file
Step 5: notification is sent
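Steps 2–3 above could be sketched as a script that fires the scan service several times in a row after the trigger. This assumes a reasonably recent HA (the `repeat` action) and a placeholder entity name; merging the snapshots into a GIF (step 4) would still need an external tool:

```yaml
# Sketch of steps 2-3: burst-scan the camera a few times per trigger.
# Requires an HA version with the `repeat` script action;
# the image_processing entity name is a placeholder.
script:
  burst_scan:
    sequence:
      - repeat:
          count: 5
          sequence:
            - service: image_processing.scan
              entity_id: image_processing.deepstack_object_frontyard
            - delay: "00:00:02"   # wait 2 s between scans
```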
alias: object detection
trigger:
  platform: event
  event_type: deepstack.object_detected
  event_data:
    object: person
action:
  service: notify.mobile_app_in2019
  data_template:
    message: "The garage has been left open"
    data:
      image: "local/deepstack_person_images/frontyard/person_detector_front_yard_latest.jpg"
I am really curious; I’m running HA in Docker on my Synology. Would it be possible to teach deepstack to detect which of our cats is in front of a camera?
Awesome tool for HA. I’m currently using it in Unraid with everything in separate Docker containers. With my Celeron G4900 w/ 16GB RAM it takes 3-4 seconds to analyze pictures. I’m using a curl command to save pictures from my Hikvision camera feeds to a jpg file, then moving the jpg file to /config/www. I’m then using another curl command to send the latest analyzed photo to me via Telegram. I set save_timestamped_file: false so my Docker won’t get filled with photos. This is what my Node-RED flow looks like. I’ve also included a picture of my partner trying to trick the AI. That’s not a cat… If you have any questions about my setup, feel free to ask.
I see a lot of people using Node-RED with this integration, and I don’t have any Node-RED experience myself. Could someone explain the benefits of Node-RED for image processing vs using Home Assistant automations, etc.?