How does my automation know when a camera.record service on a stream is complete?

I’m running the 0.91.2 docker container image. I have a couple of cameras that are “stream” capable, and am trying to utilize that as part of an automation. What I’m trying to do is capture a video clip in an automation, and then send that clip in a notification using slack. And other foolishness… My automation looks like this:

- alias: doorbell fired
  initial_state: true
  trigger:
    - platform: state
      entity_id: input_boolean.doorbell_button
      to: 'on'
  action:
    - service: logbook.log
      data_template:
        name: Front Porch Doorbell
        message: "Front doorbell action invoked at {{ now().strftime('%Y-%m-%d %H:%M:%S') }}"
        entity_id: input_boolean.doorbell_button
        domain: input_boolean

    - service: camera.snapshot
      data:
        entity_id: camera.cam5
        filename: "/config/www/snaps/cam5.jpg"

    - service: camera.snapshot
      data:
        entity_id: camera.cam6
        filename: "/config/www/snaps/cam6.jpg"

    - service: camera.record
      data:
        entity_id: camera.cam6
        filename: /config/www/snaps/cam6.mp4
        duration: 15
        lookback: 10

    - service: image_processing.scan
      entity_id: image_processing.tensorflow_cam6

    - delay: '00:00:15'

    - service: notify.slack
      data:
        message: "Doorbell Video"
        title: "hass"
        target:
          - "#home"
        data:
          file:
            path: "/config/www/snaps/cam6.mp4"

What appears to be happening is that the notify.slack service is getting invoked before the camera.record service is complete and the file is fully written out. I inserted a delay of 15 seconds to match the record duration; clearly that isn’t long enough.

I could lengthen that delay interval before invoking the notify service, but that just seems like a hack. I’d really like some sort of a wait/pause action until the camera.record service finishes its work and I know that the file is ready to be used.

I believe that the camera.record service invocation is running asynchronously with the rest of the automation’s action script, but I’m not aware of some synchronizing primitive that can be used here.
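
In the meantime, one possible workaround (outside HA’s own primitives) is to block on the output file itself via a shell_command, which a script waits on until it exits. This is only a sketch; the helper name, polling interval, and the "size stopped changing" heuristic are my own assumptions, and shell_commands are subject to Home Assistant's own execution timeout:

```shell
#!/bin/bash
# wait_for_file.sh (hypothetical helper): block until the given file
# exists and its size has stopped changing between two one-second polls,
# or give up after a timeout (default 60 seconds).
# Usage: wait_for_file /config/www/snaps/cam6.mp4 30
wait_for_file() {
  local file="$1" timeout="${2:-60}"
  local last=-1 size elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if [ -f "$file" ]; then
      # GNU stat first, BSD stat as a fallback
      size=$(stat -c %s "$file" 2>/dev/null || stat -f %z "$file")
      if [ "$size" = "$last" ] && [ "$size" -gt 0 ]; then
        return 0   # file exists and looks fully written
      fi
      last="$size"
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1         # timed out waiting for a stable file
}
```

Wired up as a shell_command and called between camera.record and notify.slack, this would make the action sequence block until the recording is actually on disk; the timeout passed in has to stay below HA’s shell_command limit, or HA will kill the command first.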

While I don’t have an answer, I’m experiencing a similar issue with the snapshot function too. I’m using Hassbian v92.1. It’s as if all “actions” run simultaneously regardless of whether there is a delay or not. I have a 10-second delay, but the snapshot timestamp is not close to the notification timestamp. I know the delay is working because the notification is always delayed by 10 seconds.

I see you are using “tensorflow” whereas I’m not. Not sure if that matters.

Example:

trigger:
  ...
action:
  - ... do something
  - delay: '00:00:10'
  - service: camera.snapshot
  - delay: '00:00:01'
  - ... notify

Hey,
Have you figured out a solution?

I opened an issue on this a while back and got an explanation about it:
https://github.com/home-assistant/home-assistant/issues/22882

The reason we don’t manipulate the state currently is because some platforms that support stream already implement recording state when the device is recording locally or to the NVR.
The same is true for the streaming state: historically meant something else.
We need to have a larger discussion on if the new record service should alter this state, or if some other indicator should be created.

I don’t have a satisfactory conclusion. I just delay “long enough” after starting the record.

This spoils my intended user experience of sending out a notification with an attached video. So I grab a still image when the doorbell button gets pressed and just notify on that immediately. Sometime later, a video is available… which comes in a follow-up notification.

This idea of inserting a “delay” isn’t really a satisfactory solution; there ought to be some sort of synchronization operation available to detect when some asynchronous work has completed, either by throwing an event or by changing the state of something…

Yeah, I hope they fix it so we can get a state change when it’s recording and when it’s not.
In the meantime, I have been using a shell script and an input_boolean to check whether the recorded output file has changed. It’s a pretty good workaround.
It looks like this:

input_boolean:
  camgrab_backdoor:
    name: camgrab_backdoor
    initial: off
    icon: mdi:camera

shell_command:
  camerafilechanged: bash -x /config/shell-scripts/camerafilechanged.sh

Automation:
automation:
  - id: movement_backdoor
    alias: Movement backdoor
    initial_state: false
    hide_entity: False
    trigger:
      - platform: state
        entity_id: binary_sensor.backdoor_movement_field_detection
        from: 'off'
        to: 'on'
    condition:
      - condition: state
        entity_id: alarm_control_panel.home_alarm
        state: armed
    action:
      - service: camera.snapshot
        data:
          entity_id: camera.backdoor
          filename: '/config/www/backdoor.jpg'
      - service: input_boolean.turn_on
        data:
          entity_id: input_boolean.camgrab_backdoor
      - service: shell_command.camerafilechanged
      - wait_template: "{{ is_state('input_boolean.camgrab_backdoor', 'off') }}"
      - service: notify.telegram
        data:
          title: Movement Backdoor
          message: "Movement Backdoor"
          data:
            video:
              - file: /config/www/tensorflow/latest_backdoor.mp4
                caption: Backdoor movement

Shell-script:
#!/bin/bash
now=$(date "+%s")
updated="False"

until [ "$updated" = "True" ]; do
  sleep 1

  frontdoor=$(date -r /config/www/tensorflow/latest_frontdoor.mp4 "+%s")
  backdoor=$(date -r /config/www/tensorflow/latest_backdoor.mp4 "+%s")
  patio=$(date -r /config/www/tensorflow/latest_patio.mp4 "+%s")

  if [ "$frontdoor" -gt "$now" ]; then
    updated="True"
    curl -X POST -H "Content-Type: application/json" -H "Authorization: Bearer secretloglivedtoken" -d '{"entity_id": "input_boolean.camgrab_frontdoor"}' --ssl-no-revoke https://haadress/api/services/input_boolean/turn_off
  elif [ "$backdoor" -gt "$now" ]; then
    updated="True"
    curl -X POST -H "Content-Type: application/json" -H "Authorization: Bearer secretloglivedtoken" -d '{"entity_id": "input_boolean.camgrab_backdoor"}' --ssl-no-revoke https://haadress/api/services/input_boolean/turn_off
  elif [ "$patio" -gt "$now" ]; then
    updated="True"
    curl -X POST -H "Content-Type: application/json" -H "Authorization: Bearer secretloglivedtoken" -d '{"entity_id": "input_boolean.camgrab_patio"}' --ssl-no-revoke https://haadress/api/services/input_boolean/turn_off
  else
    updated="False"
  fi
done

Hi.
I’m having the same issues in my automations and I like your approach very much, but I’m going to try it with an AppDaemon app.
Still, there’s something I’m missing in your shell script: I think you’re checking the recorded file’s modification date against the current time, but how can the modification time be greater than the current time?
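
For what it’s worth (reading the script above, not speaking for its author): `$now` is captured once, when the script starts, i.e. before the recording is finished. As soon as ffmpeg writes the output file, its modification time becomes newer than that saved timestamp, which is what flips the comparison. A minimal stand-alone illustration, using a throwaway path:

```shell
#!/bin/bash
# capture a reference timestamp first...
now=$(date "+%s")

# ...then simulate the recorder finishing the file a moment later
sleep 1
touch /tmp/demo_clip.mp4

# the file's mtime is now strictly greater than the saved timestamp
mtime=$(date -r /tmp/demo_clip.mp4 "+%s")
[ "$mtime" -gt "$now" ] && echo "updated"   # prints "updated"
rm -f /tmp/demo_clip.mp4
```

So the loop isn’t comparing against a moving “current time”; it is comparing against the fixed moment the script was launched.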

I ran into this problem when configuring the duration of a motion-triggered camera.record. When I set the duration up from the default 30s to 3min, the camera’s motion detection state could change from on (detected) back to off (clear), and then back to on (detected), thus triggering the automation I have for camera recording while the first recording is still in progress. This caused an error in the logs that the camera stream was already being recorded.

As a simple fix, I decided to add an input_boolean to track the camera.record duration as a pseudo-state. At the start of the automation, I turn the boolean on, then I start the camera.record for a duration, then the actions are delayed for an amount of time that matches the camera.record duration, then the boolean is turned off.

Then I added the input_boolean as a condition to the automation, so that if a motion-detection trigger happened during a recording, it would not attempt to record a second concurrent recording.

automations.yaml

- id: bf2cc98f2f8a4b2d8c6ecb6ca2f1efac
  alias: yi_outdoor_1_record
  trigger:
  - entity_id: binary_sensor.yi_outdoor_1_motion_sensor
    from: 'off'
    platform: state
    to: 'on'
  condition:
  - condition: state
    entity_id: input_boolean.yi_outdoor_1_recording_state
    state: 'off'
  action:
  - entity_id: input_boolean.yi_outdoor_1_recording_state
    service: input_boolean.turn_on
  - data:
      duration: 180
      entity_id: camera.yi_outdoor_1
      filename: /camera-record/yi_outdoor_1_{{ now().strftime("%Y%m%d-%H%M%S") }}.mp4
    service: camera.record
  - delay: '00:03:00'
  - entity_id: input_boolean.yi_outdoor_1_recording_state
    service: input_boolean.turn_off

configuration.yaml (partial)

# camera
stream:
ffmpeg:
camera:
  - platform: ffmpeg
    name: yi_outdoor_1
    input: -rtsp_transport tcp -i rtsp://1.2.3.4/ch0_0.h264

# motion sensor
binary_sensor:
  - platform: mqtt
    state_topic: "yicam/motion"
    name: "yi_outdoor_1_motion_sensor"
    payload_on: "motion_start"
    payload_off: "motion_stop"
    device_class: motion
    device:
      manufacturer: Yi
      model: Outdoor 1080p
      identifiers:
        - 0C:8C:24:00:18:D5

# camera recording state
input_boolean:
  yi_outdoor_1_recording_state:
    name: Yi Outdoor 1 recording state
    initial: off
    icon: mdi:record-rec

Edit: second approach

I ditched the input_boolean and instead used a timer, to give a visual guide to the recording time in Lovelace UI, as well as providing the same boolean (active/idle state).

I had to play around with the delay and wait values, but ended up with a configuration that can record if HA is started with the camera already detecting motion, and also start a new recording if motion is still detected when the initial recording completes.

configuration.yaml (partial)

# camera
stream:
ffmpeg:
camera:
  - platform: ffmpeg
    name: yi-outdoor-1
    input: -rtsp_transport tcp -i rtsp://1.2.3.4/ch0_0.h264

# motion sensor
binary_sensor:
  - platform: mqtt
    state_topic: "yicam/motion"
    name: yi-outdoor-1 motion sensor
    unique_id: "yi_outdoor_1_motion_sensor"
    payload_on: "motion_start"
    payload_off: "motion_stop"
    device_class: motion
    device:
      manufacturer: Yi
      model: Outdoor 1080p
      identifiers:
        - 0C:8C:24:00:18:D5

timer:
  yi_outdoor_1_record:
    name: yi-outdoor-1 recording timer
    icon: mdi:record-rec

automations.yaml

- alias: yi-outdoor-1 motion-activated recording
  trigger:
    # when motion becomes detected
    - entity_id: binary_sensor.yi_outdoor_1_motion_sensor
      platform: state
      to: 'on'
    # when a recording ends
    - platform: event
      event_type: timer.finished
      event_data:
        entity_id: timer.yi_outdoor_1_record
    # in case motion starts detected (and therefore doesn't change to 'on')
    - platform: homeassistant
      event: start
  condition:
    # if motion is detected
    - condition: state
      entity_id: binary_sensor.yi_outdoor_1_motion_sensor
      state: 'on'
  action:
    - wait_template: "{{ is_state('timer.yi_outdoor_1_record', 'idle') }}"
      timeout: '00:00:15'
      continue_on_timeout: 'false'
    - data:
        duration: 90
        entity_id: timer.yi_outdoor_1_record
      service: timer.start
    - data:
        duration: 80
        entity_id: camera.yi_outdoor_1
        filename: /camera-record/yi-outdoor-1_{{ now().strftime("%Y%m%d-%H%M%S") }}.mp4
      service: camera.record
    - delay:
        seconds: 90

I still haven’t figured out how to get a video of exactly the length I want, and I’m having to use trial-and-error delay times to get close. I wanted a 10-second video; the closest I get is an 11-second video, using 2 seconds of lookback, a 15-second duration, and a delay of 25 seconds.

I found a few creative ways people were using to prevent an automation from triggering again while it is still working on the first trigger: input_booleans, timers, etc. By accident, I think I found a cleaner solution. I have my actions in the form of a script; the automation itself waits for the triggers, checks conditions, then calls the script. By looking at the logbook, I noticed that when a script is called, its state changes to on, and when the script is done, it goes back to off. As a condition on my automation, I check that the script I want to call is off; otherwise, the trigger stops.
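
A minimal sketch of that guard (entity and script names are hypothetical): a script entity reports state on while its sequence is running, so the automation’s condition blocks re-entry, and calling the script as a plain service makes the action wait for it to finish.

```yaml
# hypothetical names throughout
- alias: record_guarded_by_script_state
  trigger:
    - platform: state
      entity_id: binary_sensor.some_motion
      to: 'on'
  condition:
    # a script entity is 'on' while it is running; skip if a run is in progress
    - condition: state
      entity_id: script.record_and_notify
      state: 'off'
  action:
    # calling a script as a service waits for the script to complete
    - service: script.record_and_notify
```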

You shouldn’t need any crazy solutions to keep an automation from re-triggering while it’s still running. Just change the mode to single and add a delay at the end of the automation; it will only fire once inside that delay window.
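
A sketch of that, with placeholder trigger and camera names; `mode: single` means new triggers are ignored while the action sequence (including the trailing delay) is still running:

```yaml
- alias: record_once_per_window
  mode: single              # re-triggers are ignored while this is running
  max_exceeded: silent      # suppress the "already running" warning in the log
  trigger:
    - platform: state
      entity_id: binary_sensor.some_motion   # placeholder
      to: 'on'
  action:
    - service: camera.record
      data:
        entity_id: camera.some_camera        # placeholder
        filename: /config/www/clip.mp4
        duration: 15
    - delay: '00:00:20'     # keeps the automation "running", blocking re-triggers
```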

The underlying problem remains: you don’t really know how long it takes for the service call to finish writing the file to the file system. I’d like to make sure it has been completely written before invoking a service to send it as an attachment in a notification.

Inserting arbitrary delays is an unreliable, terrible hack. When you have a system with lots of concurrency as a fundamental part of its operation, you need synchronization primitives available so you can write reliable code. Hacking a delay in there “fails to fail most of the time,” but that’s different from working. If the system happens to be heavily loaded and things are running slower than expected, your delay may not be long enough. And this works against sending a timely notification, like when someone has just rung the doorbell.


I was replying to @CancunManny and his explanation about the need for input booleans to stop an automation from firing twice.

Understood, sorry for the confusion.

I had hoped that with the recent enhancements to the automation/script machinery, there would have been something to address this lack of synchronization between steps. Perhaps some enhancement built on the script wait: action? But that probably requires a bunch of changes to the service calls, which just launch messages on the event bus; maybe some status response message back from the service when it completes could be waited for?
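
To illustrate the idea: if camera.record fired a completion event, a script could block on it with wait_for_trigger. The event name below does not exist today; it is purely a hypothetical to show what the synchronization could look like.

```yaml
- service: camera.record
  data:
    entity_id: camera.cam6
    filename: /config/www/snaps/cam6.mp4
    duration: 15
# hypothetical: no such event exists in Home Assistant today
- wait_for_trigger:
    - platform: event
      event_type: camera_record_finished
      event_data:
        entity_id: camera.cam6
  timeout: '00:00:30'
  continue_on_timeout: false
# ...then notify, knowing the file is complete
```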


I’ve been using timers to serialize recordings based on @AmazingGoose’s example for over a year now. I found that there are gaps between recordings because the timer must be longer than the recording duration (timer duration = 90 sec vs. camera.record duration = 80 sec in the example).

The advice from @petro helped me re-write my automation using the folder_watcher component and waiting for the closed event.

I have 2 cameras and use an input_text for each one to save the name of the recording once the file is closed. This allows me to trigger other notification and playback automations when these variables change.

configuration.yml

folder_watcher:
  - folder: /config/www/cameras
    patterns:
      - '*.mp4'

input_text:
  latest_front_door:
    name: latest front door recording
  latest_garage:
    name: latest garage recording

A script with variables is used to handle recordings from both cameras. The folder_watcher event allows the script to finish as soon as the recording is complete.
scripts.yml

camera_record:
  description: Record from camera.  Wait until finished.  Save file name in latest_<source>
  mode: parallel
  max: 2 # 1 thread per camera
  variables:
    source: garage
    timestamp: "{{ as_timestamp(now()) }}"
    file_name: "{{ source }}-{{ timestamp | timestamp_custom('%Y%m%d-%H%M%S') }}.mp4"
    duration: 10
  sequence:
    - service: camera.record
      data_template:
        entity_id: "camera.{{ source }}"
        filename: "/config/www/cameras/{{ file_name }}"
        duration: "{{ duration }}"
        lookback: 5
    - wait_for_trigger:
        - platform: event
          event_type: folder_watcher
          event_data:
            event_type: closed
            file: "{{ file_name }}"
      timeout:
        seconds: "{{ duration + 10 }}"
      continue_on_timeout: false
    - service: input_text.set_value
      data_template:
        entity_id: "input_text.latest_{{ source }}"
        value: "{{ file_name }}"

I use one automation per camera to call the script with the appropriate variables.
EDIT: Removed script.turn_on so that the automation waits for completion.
automations.yml

- id: record_front_door
  alias: Record Front Door
  mode: single
  trigger:
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        name: person
        entity_id: image_processing.coral_front_door
  variables:
    timestamp: "{{ as_timestamp(now() if trigger.platform is none else trigger.event.time_fired) }}"
  action:
    - service: script.camera_record
      data_template:
        source: front_door
        timestamp: "{{ timestamp }}"

- id: record_garage
  alias: Record Garage
  mode: single
  trigger:
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        name: person
        entity_id: image_processing.coral_garage
  variables:
    timestamp: "{{ as_timestamp(now() if trigger.platform is none else trigger.event.time_fired) }}"
  action:
    - service: script.camera_record
      data_template:
        source: garage
        timestamp: "{{ timestamp }}"

@bt04, thanks for tagging me with your better solution. Watching for files makes a lot of sense, and it’s event-based rather than relying on crudely fudged durations!


I know this topic is a couple of years old, but I just ran into this issue myself.

Interesting thing: apparently, the UI DOES know when the recording finishes.

I went into Developer Tools - Services, selected the Camera: Record service, filled out all the fields with a 10-second duration and 2-second lookback, and pressed the Call Service button. Around 10 seconds later, the Call Service button briefly changes to a green checkmark before changing back to the standard Call Service button.

So, if the UI can do it, why can’t we do it in an automation? And if we can, can someone tell us how???

Thanx.