Deepstack Automation Not Firing Consistently

Hi folks, I have a similar request for help to the one in this thread > Automation with DeepStack And Motions Sensors, in that I’m trying to fire DeepStack object detection when my Arlo cameras detect motion.

I can’t seem to get my object detection to operate consistently. It worked okay yesterday with one camera; then I applied the same automation to two other cameras and my HA became unresponsive when I walked around the house to test it out. Perhaps it was a little too much for my poor old NUC?
In any case it’s just not working anymore… Any clues?
My approach:
I’m using this excellent integration:

and this:

my config.yaml snippet:

### ARLO SIDE (ARLO 4)
  - platform: local_file
    name: aarlo4snapshot
    file_path: /config/snapshots/arlo4/aarlo4snapshot.jpg
    
  - platform: local_file
    name: aarlo4deepstack
    file_path: /config/snapshots/arlo4/sideyard_latest.jpg
### Side Yard
  - platform: deepstack_object
    ip_address: 192.168.X.YY
    port: 5000
    api_key: randomstuffhere
    save_file_folder: /config/snapshots/arlo4
    show_boxes: true
    roi_x_max: 0.8
    roi_y_max: 0.8
    targets:
      - person
    source:
      - entity_id: camera.aarlo4snapshot
        name: sideyard

My automation:

- id: '1595199677892'
  alias: Arlo Detection Side
  description: ''
  trigger:
  - entity_id: binary_sensor.aarlo_motion_arlo4
    platform: state
    to: 'on'
### THIS WAS 'true' just changed it
  condition:
  - condition: state
    entity_id: alarm_control_panel.aarlo_doukiarlohub
    state: armed_away
  action:
  - data:
      entity_id: camera.aarlo_arlo4
      filename: /config/snapshots/arlo4/aarlo4snapshot.jpg
    service: aarlo.camera_request_snapshot_to_file
  - delay: 00:00:30
  - data: {}
    entity_id: image_processing.sideyard
    service: image_processing.scan
  - delay: 00:00:05
  - data:
      data:
        attachment:
          content-type: jpeg
        entity_id: camera.aarlo4deepstack
        push:
          category: camera
      message: Side Motion Detected Of Type {{ state_attr('image_processing.sideyard','summary')
        }} {{now().strftime("%H:%M:%S %d-%m-%Y")}}.
    service: notify.mobile_app_my_iphone

aarlo4snapshot.jpg is being written into the /config/snapshots/arlo4 folder okay (and the corresponding camera.aarlo4snapshot entity is being created; its image appears fine).

But the sideyard_latest.jpg isn’t being generated
(nor is it for the other three Arlos anymore).

I’m sure it’s a simple issue, but can’t see it anymore…

When I run the image through DeepStack via the curl CLI I get “success”:true… Processing takes about 1-2 seconds. The images from the Arlo are ~80-100 KB and are in .jpeg format.

What silly mistakes am I making?!

Thanks in advance,

I remember having the same issue if I triggered the motion detection too quickly on the different cameras, so I had to put in some more delay before doing the DeepStack part. But since you get the success response when doing the curl, it probably isn’t that.

Maybe this is silly, but have you tried separating the saved snapshots into subfolders: Sideyard, Frontyard, Backyard, etc.?

Thanks @Yoinkz, I should be more specific.
Each Arlo has its own folder, but I use the same subfolder for both the snapshot and the output of the DeepStack processing engine.
I will try writing into different folders, though, to see if that helps any. Good idea; it could be an access thing perhaps.
Also, when I issued the manual curl command to make sure DeepStack was running, I did it from my account on the NUC, not from within the Docker container running HASSIO.
Docker still does my head in a little when it comes to CLI file-structure access. I can SSH to my Docker container running HASSIO, but curl isn’t a command I can run there, as I assume I’ve not installed it within that container…
I’ll try different subfolders under /config to see if that yields a different result.
Thanks for the reply, very much appreciated.

Okay I have different folders under /config/ now for the Arlo snapshots (/config/snapshots/arlo4, with subfolders for each Arlo within that) and Deepstack outputs (/config/deepstack/arlo4, also with subfolders for each Arlo within that).

I’ll fire off a capture later when I get a chance, and I’ll let you know.
Cheers,

Did you get the chance to test it yet?

Thanks so much for following up.
I tested this morning…
No success yet. I’ll test again today (shortly).
I think I need to adjust my wait timers for the snapshots to be deposited into the appropriate folder.
Could I ask: if DeepStack doesn’t identify an object, it doesn’t output a file (with the regions of interest, as there aren’t any), does it? I’m not seeing the files being written for the front door (the source image wasn’t great), and the side camera didn’t fire motion detection for some reason.
I’ll be back!
:slight_smile:

I “think” the DeepStack processing is occurring.
BUT, I don’t think I’ve got my Arlo app and this integration working together seamlessly.
The Arlo app is configured to record video when motion is detected. At the same time I’m asking HASSIO to capture a still, process it and send a notification (with attachment).
I assume it’s a serial/linear process here: whatever asks for the camera first needs to finish before passing it on to the next process?
I’m seeing video being recorded okay (in the Arlo app),
the motion that triggered the automation cleared,
but no still image was captured in time (say, a person at the front door) because the camera was busy recording video.
As a result an irrelevant image was captured (the object had moved on), and an image with no people (what I’m watching for) gets uploaded for analysis.
DeepStack returns nothing, as no object was detected…

I think that’s what’s happening.
Are you using the Arlos too?
Do you use the HASSIO integration to capture video, or the Arlo app?

Incidentally I think your suggestion of separating the folders did the trick. It’s much more responsive now. Thank you.

Inching forward!
Cheers,

Hey,

Yeah I agree that could be the case.
No, I don’t use Arlo myself; I was using it some time ago, but I have moved to Unifi with PoE cameras. But I do the exact same thing, capturing a picture when motion is detected.

Oh that’s great news.

Fingers crossed it will stay that way then :blush:

Could I ask, how do you capture video and a still?
Or do you run the deepstack process on the unifi file captured?

I think I may have a way forward using the Arlo system to record stills and videos, rather than doing it across both the HA and Arlo environments.
I plan on using the still generated by AARLO as the source for the DeepStack engine.

AARLO generates a thumbnail and uploads it to Amazon S3 storage.
The URL for each captured file is different every time, so the path changes, which means I can’t simply download via static configs…

I want to achieve something like this via the HA downloader…

filename: aarlo4snapshot.jpg
overwrite: true
subdir: arlo4
url: >-
  {{ state_attr('camera.aarlo_arlo4', 'last_thumbnail') }}

where the last_thumbnail attribute will be a different path each time an image is captured…
Any thoughts on how I can use a dynamic attribute as a static entry for a service call?
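One idea I’m toying with (untested, and I’m assuming here that the downloader service data can be templated via data_template and that last_thumbnail holds a full URL):

```yaml
# Untested sketch: templating the dynamic thumbnail URL into the
# downloader service call (assumes data_template is honoured here)
- service: downloader.download_file
  data_template:
    url: "{{ state_attr('camera.aarlo_arlo4', 'last_thumbnail') }}"
    subdir: arlo4
    filename: aarlo4snapshot.jpg
    overwrite: true
```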

Hmm I’m not sure how you would be able to pull that thumb from Amazon.

The way I have done it is to create a generic camera pointing to my Unifi cams that provides a JPEG still picture. Then HA saves that screenshot every 30 or 20 seconds, depending on which sensor was triggered. Once saved, the DeepStack engine looks for any files created and then runs its magic. Of course I have a few delays in between these steps, but I could imagine you would need a bit more when using Arlo with the cloud as storage.
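The save-then-scan part looks roughly like this (a sketch only; the entity names are placeholders for my setup, and your delays will differ):

```yaml
# Sketch of the save-then-scan flow (placeholder entity names)
- service: camera.snapshot
  data:
    entity_id: camera.driveway_still          # generic camera (placeholder)
    filename: /config/snapshots/driveway/latest.jpg
- delay: '00:00:05'                           # give the file time to land on disk
- service: image_processing.scan
  data:
    entity_id: image_processing.driveway      # deepstack_object entity (placeholder)
```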

When it’s processed, I fire the notification action with the picture generated from DeepStack, using high priority. I had several issues when firing the notification without the high-priority parameter: when receiving a lot of notifications, it happened that the pictures were the same even though I could see they were different in HA. But that did the trick. Sorry, just a side note.

Great insight. So you’re using HA for capturing stills and video? Or just stills for passing to DeepStack?
I’d prefer to do it on just one platform if at all possible. Currently trying the Arlo app…
Hence trying to download the uploaded thumbnail, as I think that’s the first thing that gets captured before video kicks off.

If I do it all in HA then I’ll need to start running scripts to overwrite old video files, etc.
But I’d have much more control…
Are you still using the Unifi video suite for video capture?

I have actually added my cameras to HA twice.
The first is the live feed that the NVR streams (with a delay of 15 seconds). For a standard stream that’s okay, but for the DeepStack part it’s a bit problematic. I also use some Xiaomi sensors to trigger the scan, in order to cut down on the motion triggers from the cameras themselves. Often ‘nothing’ is in front of the camera, but it got triggered by a bug flying close by or the sun/clouds causing a big change in light. So by using both the motion sensor in the camera and the Xiaomi sensor I’ve reached a good compromise, I think.
But the reason I added them twice was to add a generic camera in HA that takes the snapshot.jpeg file the cameras also provide. That picture is refreshed every single time it is pulled/requested, so HA just needs to pull it when the sensors are triggered, and then I know I get the picture from the exact moment the sensors went off.
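For reference, that second entry is just a generic camera pointed at the snapshot URL, something like this (the URL and name are placeholders for my setup):

```yaml
# Placeholder sketch of the generic still-image camera
camera:
  - platform: generic
    name: driveway_still
    still_image_url: http://192.168.X.YY/snap.jpeg
```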

Some people might have it working with the standard camera feed, but I had problems before when using that stream because of the delay, so why change it now when I’ve got it working :blush:.

You are right about the delay thing. You need to make sure all the steps are processed in order to get the right outcome.

Something like:

  1. Something triggers, so Aarlo records.
  2. Arlo creates a thumb.
  3. The thumb is saved to Amazon.
  4. You want HA or Deepstack to pull the picture to analyze.
  5. You need to wait for Deepstack to finish its analysis and create the analyzed file.
  6. Fire the notification with the generated file.
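In automation form I could imagine the chain looking something like this (an untested sketch; your entity names, the downloader details and all the delays are guesses):

```yaml
# Untested sketch mapping steps 1-6 (entity names and delays are guesses)
- alias: Aarlo thumb to DeepStack
  trigger:
    - platform: state
      entity_id: binary_sensor.aarlo_motion_arlo4
      to: 'on'
  action:
    - delay: '00:00:15'                       # steps 1-3: wait for the thumb upload
    - service: downloader.download_file       # step 4: pull the picture
      data_template:
        url: "{{ state_attr('camera.aarlo_arlo4', 'last_thumbnail') }}"
        subdir: arlo4
        filename: aarlo4snapshot.jpg
        overwrite: true
    - delay: '00:00:05'
    - service: image_processing.scan
      data:
        entity_id: image_processing.sideyard
    - delay: '00:00:05'                       # step 5: let DeepStack finish
    - service: notify.mobile_app_my_iphone    # step 6: fire the notification
      data:
        message: >-
          DeepStack result: {{ state_attr('image_processing.sideyard', 'summary') }}
```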

I do it all in HA, but of course I also get a lot of files saved in the libraries over time. I clean them up manually, though it could be automated. I haven’t got that far yet, and it’s not a huge issue for me, since I have a large drive for my HA installation. But of course it would be much prettier with an automation :smiley:

Anyone else seeing problems with deepstack.object_detected not firing to trigger automations when an object is actually detected? The detection shows up in the event log and the entity shows the change in state.



And here’s my simple automation that used to put an entry in my log and open my gate:

- id: '1588448219976'
  alias: Front Gate Object Detection = CAR, PERSON
  description: ''
  trigger:
  - platform: event
    event_type: deepstack.object_detected
    event_data:
      event_data:
        object: car
  - platform: event
    event_type: deepstack.object_detected
    event_data:
      event_data:
        object: person
  action:
  - data:
      message: Car or Person ROI Detected
      name: deepstack.object_detected
    service: logbook.log

Monitoring the event with dev tools shows nothing after the log entry is created from the detection.

I know the event is there; it’s clearly being logged in the DB with confidence over 90% (my threshold is 65%).

Anyone else seeing this? I’m on 0.116.4, but it’s been failing for quite some time.

Jeff

Yes, similar for me. It seems a bit hit-and-miss as to whether they fire, whereas five minutes earlier one fired okay.
I’m also on 0.116.4; similarly, though, it has been a chronic issue. Love it when it works.
I also set my threshold lower than the default to see if it would be more consistent.

I’m working on this right now… Interestingly, I can fire the event in dev tools, listen to it, see it, and it triggers the automation just fine.

But if I put the object in front of the camera (person/car/truck), I see it detected in the log, yet the event never fires, nor does the automation.

Very strange. Something is amiss in Candyland.

@robmarkcole, any input here? You are clearly the expert, and I’m not seeing any other chatter about this issue, unless others are just being quiet. See the setup two posts above.

My setup has changed a little since my initial post (more streamlined monitoring for, and acting on, events firing), but the same symptoms persist.
thanks @jazzmonger for posting!

Ok, I FINALLY figured this out. For whatever reason, the “target” objects specified in configuration.yaml come across as “name:”, NOT “object:”, in the returned results when the AMCREST camera triggers the deepstack.object_detected event.

    targets:
      - car
      - truck
      - person

So in Automations this:

trigger:
  - platform: event
    event_type: deepstack.object_detected
    event_data:
      object: car

Should be this:

trigger:
  - platform: event
    event_type: deepstack.object_detected
    event_data:
      name: car
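While I was in there I also sketched a way to gate on the reported confidence (untested; I’m assuming the event payload carries the same confidence field I see in the logs):

```yaml
# Untested sketch: only act above a chosen confidence
# (assumes the deepstack.object_detected payload includes 'confidence')
condition:
  - condition: template
    value_template: "{{ trigger.event.data.confidence | float > 80 }}"
```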

Phew. I feel like a freaking undercover detective. Not sure when or where this changed, but it did change in the past 6-8 weeks or so.

See the recent Problem Report on this, which I should have looked at long ago. Oh well, it was good to exercise the brain figuring it out.

Works great now.

Jeff

Thanks @jazzmonger.
I’ve had event_data: name: set for a little while now.
I’m still seeing it fire inconsistently.
Although, throw into the mix the Arlo folks making changes to their APIs, and I sometimes get a little confused as to where the fault may lie.

Update: it seems to be behaving for the moment.
The last three days have been reliable! Giddy Up!

I’ve tidied up the readme; there were a couple of small errors in there.

Cheers, I’ll double check what I’ve got configured against the updated doco.

Love your work btw!
