I have an automation to send pictures from a camera when the door opens.
The door has two door sensors -> The automation has two triggers.
When I open the door I only want the automation to run once, but both triggers fire at roughly the same time, so I can't get the automation to trigger just once per door opening.
I tried a condition, but because the two triggers fire at the same time it has no effect.
Are there any other possibilities to avoid triggering it twice for each opening of the door?
That's my automation:
- alias: notify_1_dafang_1_send_picture_door_open
  initial_state: true
  hide_entity: false
  trigger:
    - platform: state
      entity_id: sensor.front_door_node_1_front_door_status
      from: 'closed'
      to: 'open'
    - platform: state
      entity_id: binary_sensor.aqara_door_window_sensor_1
      from: 'off'
      to: 'on'
  condition:
    condition: and
    conditions:
      - condition: template
        value_template: '{{ (as_timestamp(now()) | int - as_timestamp(states.automation.notify_1_dafang_1_send_picture_door_open.attributes.last_triggered) | default(0) | int) > 5 }}'
  action:
    - service: shell_command.dafang_1_get_snapshot_door_open
    - service: notify.notify_1
      data_template:
        message: "Front Door Snapshot 1 - 5 from Door open"
        data:
          photo:
            - file: '/config/www/dafang_1/dafang_1_snapshot_door_open_1.jpg'
              caption: "
                *Open* Front Door Snapshot 1
                {{now().strftime('%Y-%m-%d %H:%M:%S')}}"
I want to use both as triggers because if one of them doesn't report the state, I have a second device which can.
I want to avoid a template sensor if possible.
I had exactly the same situation. What I did was to use an input_boolean intermediary and two additional automations. Basically, whenever either of the sensors goes to “indicating” (e.g., open or on), that turns on the input_boolean. Then whenever either of the sensors goes to “not indicating” (e.g., closed or off), that turns off the input_boolean. Then I use the to: 'on' state of the input_boolean to trigger the original automation.
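In outline it looks something like this (the input_boolean name and the automation aliases below are just placeholders, not my actual config):
input_boolean:
  front_door_open:

automation:
  # turn the boolean on when either sensor indicates the door is open
  - alias: front_door_open_boolean_on
    trigger:
      - platform: state
        entity_id: sensor.front_door_node_1_front_door_status
        to: 'open'
      - platform: state
        entity_id: binary_sensor.aqara_door_window_sensor_1
        to: 'on'
    action:
      - service: input_boolean.turn_on
        entity_id: input_boolean.front_door_open
  # turn the boolean off when either sensor indicates the door is closed
  - alias: front_door_open_boolean_off
    trigger:
      - platform: state
        entity_id: sensor.front_door_node_1_front_door_status
        to: 'closed'
      - platform: state
        entity_id: binary_sensor.aqara_door_window_sensor_1
        to: 'off'
    action:
      - service: input_boolean.turn_off
        entity_id: input_boolean.front_door_open
Your notification automation would then use a single state trigger on input_boolean.front_door_open going to 'on'.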
If you think this will work for you and you need a concrete example, let me know and I can share exactly what I did.
Actually I did that because my situation was a bit more complicated. I won’t go into that, but I think you can solve your problem a simpler way by replacing the triggers in your current automation with a template trigger. Something like this:
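trigger:
  - platform: template
    value_template: >
      {{ is_state('sensor.front_door_node_1_front_door_status', 'open') or
         is_state('binary_sensor.aqara_door_window_sensor_1', 'on') }}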
A template trigger will trigger the first time it evaluates to true. After that it won’t trigger again until it first evaluates to false, and then to true again.
Unfortunately that won’t always work. It actually takes some time after an automation triggers before its state gets updated in the state machine. If the two triggers happen very close together, it’s very possible the second one will happen before that condition becomes false.
I have been using this for a few months but with “last_changed” and a group as the entity… to prevent double notifications… it hasn’t failed once yet, maybe I have been lucky…
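That is, the same sort of template condition as above, but using the group's last_changed; roughly like this (group.front_door and the 5-second window here are just placeholders):
condition:
  - condition: template
    value_template: >
      {{ (as_timestamp(now()) -
          as_timestamp(states.group.front_door.last_changed)) > 5 }}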
EDIT: it is a lock and a door sensor… so it gets unlocked then opened pretty much immediately. This prevents me from getting a notification that the door was opened when I just received a notification of who’s key unlocked it… because if someone unlocks the door with their code it’s pretty much a given that it’s about to open.
@pnbruckner - I will have to remember the input boolean thing as now that I talked about it not failing… I’m sure it will start to…
I use that kind of condition to keep an automation from firing again for a while, too. But it works in that scenario, and probably in yours, because of the minimum amount of time possible between triggers. If, due to the nature of the sensors, it's impossible to get multiple triggers within a few hundred milliseconds, say, then you're probably ok. But if you have two sensors that, by their nature, fire extremely closely together, then this condition technique won't work. Believe me, I'm telling you from first-hand experience. In my case it was a PIR sensor that indicated motion via both a binary_sensor entity and a sensor entity. They usually both come within milliseconds of each other. (Although sometimes only one comes, and sometimes one gets "stuck". But that's another story!)
Um, I hate to belabor the point, but this won’t solve the basic problem either, because it’s still vulnerable to the same problem – states in the state machine take a small, but not insignificant, amount of time to update, so if a second trigger comes in before the state of the timer is updated, …
This raises an interesting question. If a second trigger occurs while the automation is still processing the first trigger, what happens?
Which one is true:
1. The first execution of the automation runs until completion before the second execution commences. In other words, you cannot have multiple concurrent executions of the same automation.
2. The second execution of the automation can occur while the first one is still processing. In other words, you can have multiple concurrent executions of the same automation.
I assume the answer is 1, if only because it's my understanding that you can't execute a script while it's already executing (i.e., I'm assuming the same reasoning applies to automations).
So if it's 1, the first execution will run until completion before the second trigger causes it to be executed again. In other words, the automation's first execution processes its condition and action before the second execution of the same automation is allowed to start.
On the other hand, if it’s 2, then we have a free-for-all allowing for near-concurrent executions of the same automation. That seems like a fertile environment for race conditions.
Well, I won't say that I fully understand it to the point of being able to conclusively answer your questions, but from what I've read and experienced, I believe the answer is almost #1. I say almost because I think it depends on whether the automation's action contains any delays or wait_templates. If it doesn't, then I think #1 is correct. But if it does, then I think what happens on the second trigger is that the pending delay or wait_template step is effectively aborted and the action picks up at the next step. (I say this because I think events are processed in order, and all automations that are "triggered" by a particular event – and by that I mean it actually causes the automation action(s) to run – are also processed one by one. But if an automation's action contains a delay or wait_template that actually needs to wait, then that action "script" is effectively suspended until the delay completes, or another event causes the wait_template to be re-evaluated, and then the next triggered automation is processed. So when the second trigger comes in, the action "script" is resumed, effectively canceling the delay or wait_template.)
Anyway, all of this is somewhat besides the main point here. Even if the action is “atomic”, there still is a delay between the automation being triggered and its last_triggered attribute being updated in the state machine, or between a service being called, such as starting a timer, and the state of that entity being updated in the state machine. So if a second trigger comes in before that update in the state machine completes, then the automation’s condition still won’t see it because it gets evaluated too soon.
I believe I understand your point that initiating a service, or updating an attribute’s state, takes a finite amount of time. Your position is that the second execution of the same automation may begin before this finite time has passed. In other words, the second run is underway while the first run’s actions are still pending.
The crux of my point is that the first execution's actions are in the pipeline before the automation is allowed to be executed again. When the second execution occurs, its actions will also be enqueued in the pipeline, and they'll be behind any remaining from the first execution.
Unless the state machine has parallel pipelines, the second run’s actions will be behind the first run’s actions.
Anyway, it’s a theory based on my gut and no actual inspection of the state machine’s source code.
Well, not really. I’m not saying actions are still pending. I’m saying the effect of some actions can take a while. Specifically, changing an entity happens right away (e.g., starting a script makes the script start running), but reflecting that change in the state machine (e.g., changing the state of the script entity from ‘off’ to ‘on’) can take a while (because the update is scheduled to happen when it can, not immediately.)
So, e.g., if you have this in your automation action or in a script (say you want to run the script twice for some reason):
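# (sketch: script.my_script stands in for whatever script is being run)
- service: script.turn_on
  entity_id: script.my_script
# wait for the script to finish, i.e. for its state to go back to 'off'
- wait_template: "{{ is_state('script.my_script', 'off') }}"
# then start it a second time
- service: script.turn_on
  entity_id: script.my_script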
it may not work the way you expect because the first step starts the script running, so its internal state knows it’s running. But reflecting that state in the state machine takes some time, so when the next step (the wait_template) is executed, chances are that it will complete immediately (because it still sees the old information that says the script is not running.) Then the third step runs and tries to start the script again, but it’s already running, so you’ll get an error about not being able to start the script because it’s already running.
The point is that conditions don’t test entities directly. They test the representation of their state in the state machine, and it takes time after calling a service to change an entity before its state in the state machine is updated. Same thing with triggering an automation. Even though the actions start running right away, the last_triggered attribute of the representation of the automation’s state in the state machine takes a while to be updated. So getting back to what we’ve been discussing – i.e. how to prevent two triggers, coming very close in time, from causing the automation’s actions to run twice. You can’t count on anything you put in the condition to do that, because it, by design, only tests states as represented in the state machine, not the entities directly.
I don’t think that’s quite the correct way to look at it. When the automation’s actions run from the first trigger, they will complete (even if the effects of those actions are delayed) before they will run again from a second trigger event.
I grant you that the example you provided, a sequence within an action that is effectively self-referential, can be problematic (very narrow time frame). However, the original example was two consecutive executions of the same automation (comparatively wider time frame). The first run finishes before the second one begins. The contentious issue appears to be what actions (if any) get truly “finished” (committed by the state engine) before the second run begins.
Ideally, the automation’s first run sets the timer’s state to ‘active’, and the state machine commits this state-change, before the second run checks the timer’s state.
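In other words, a pattern roughly like this (timer.door_notify is a hypothetical timer helper whose configured duration covers the blanking window):
condition:
  # only proceed if the timer is not currently running
  - condition: state
    entity_id: timer.door_notify
    state: 'idle'
action:
  # start the timer first so a later trigger should see it as 'active'
  - service: timer.start
    entity_id: timer.door_notify
  - service: shell_command.dafang_1_get_snapshot_door_open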
Your position is that the state machine might not commit the timer’s state-change before the second run checks it. Therefore the second run would not see the new ‘active’ state but the existing ‘idle’ state.
I can see how this is possible. Yet I wonder if it’s probable given the orders of time involved. If the state-machine is executing in microseconds, two consecutive sensor readings, even as low as a millisecond apart, are still widely-spaced from the state-machine’s perspective.
I believe it’s improbable but not impossible. For example, your self-referential example substantially increases the probability of outpacing the state-machine.
I can tell you from direct experience that it is indeed not only possible, but in some cases (such as we have been discussing) probable. As I explained above, I had this exact same problem, where two triggers came so close together that the automation’s last_triggered attribute did not change fast enough to prevent the second trigger from running the actions.
Yes, the CPU is running very fast. But you have to consider the architecture of the HA Python code. The order in which things are done can cause hundreds of milliseconds to elapse between the time the automation's actions run and the time the automation's state is updated. That is more than enough time.
The other thing to consider is the hardware HA is running on. In my case, it’s a RPi3B. If I had it running on a fast Intel NUC, then the probability of seeing this behavior might have been low enough that I didn’t experience it.
Still, understanding how the software works, and designing your automations accordingly, can help prevent odd things happening that can be difficult to resolve.
I defer to your experience with Home Assistant and concede that an interpreted language might undermine my 'order of time' argument.
FWIW, the first open-source home automation software I used (back in 2006) was MisterHouse. Based on Perl, it did a serviceable job but was hardly a 'microsecond' beast, given the power of the PC I used at the time (an Intel Pentium III @733 MHz, rated at about 200 on the Passmark scale). I switched to Premise (written in C++) and it flew on the same hardware. I later moved it to a less power-hungry machine, running an even pokier Via C3 processor, and it remained serviceable.
I realize this is not exactly a fair comparison because their respective software architectures were dramatically different. Nevertheless I may have to recalibrate my expectations of what to expect from Home Assistant when external events come at it fast and furious.
That’s effectively what I suggested back in post #6, but without the need for the intermediate sensor. Of course, it might be nice to have the template binary sensor, too, for other purposes.
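For example, such a template binary sensor could look roughly like this (the front_door name and device_class are just placeholders):
binary_sensor:
  - platform: template
    sensors:
      front_door:
        friendly_name: Front Door
        device_class: door
        # 'on' (open) if either physical sensor reports the door open
        value_template: >
          {{ is_state('sensor.front_door_node_1_front_door_status', 'open') or
             is_state('binary_sensor.aqara_door_window_sensor_1', 'on') }}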
BTW, this:
value_template: >
  {% if is_state('sensor.front_door_node_1_front_door_status', 'open') or is_state('binary_sensor.aqara_door_window_sensor_1', 'on') %}
    true
  {% else %}
    false
  {% endif %}
can be simplified to this:
value_template: >
  {{ is_state('sensor.front_door_node_1_front_door_status', 'open') or
     is_state('binary_sensor.aqara_door_window_sensor_1', 'on') }}
This template already evaluates to true or false, so no need for the if-else statement that turns true into true and false into false.
Also, you don’t need to specify the entities via the entity_id parameter; they will be automatically discovered from the value_template.