Synchronize the on/off state of 2 entities

Also experiencing this issue after latest update

@Fotis_Kanellopoulos @smarthomelawyer Out of interest, and if you have the time, I would be grateful if you could take a look at this topic:

I have found that even if the automation does not trigger, I still get the Repair notification. I have raised this as an issue on GitHub (linked at the bottom of the topic I copied in), but I can’t seem to explain myself very well: every response has been that this is expected behaviour when an entity is ‘unavailable’, which is understandable if the automation is triggered. However, I am receiving the notification even when the automation has not been triggered (there’s no trace, and the logbook shows the automation last triggered well before the Repair notification). In that case, if the automation isn’t triggered, the state of the entity should never be passed through to the service call.

I’ve only posted the issue because it seems odd, is repeatable, and others have experienced the same problem. I rewrote the automation in the UI and haven’t had the same problem since, but that negates the point of using blueprints, so hopefully, if adchevrier adjusts the blueprint, it will fix things for you.

In the interim, if you look at the link I copied above, there are suggestions to adjust your blueprint (thanks Taras). It is possible that the adjustments to the trigger will work for you, and prohibiting a trigger from ‘unavailable’ might be the solution. Personally, I tried this and it did not make any difference. The original automation would trigger if the entity state was unavailable (to be expected), but sometimes the entity state was not ‘unavailable’, so the automation did not trigger, and yet the notification still persisted. After adjusting the trigger, the automation would never trigger when the entity was ‘unavailable’, but the Repair notification still popped up regardless (having cleared it every time).
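For anyone who wants to try the trigger adjustment, here is a rough sketch of what excluding ‘unavailable’ from the trigger might look like (the entity_id is a placeholder; the real blueprint uses its own `!input` names):

```yaml
trigger:
  - platform: state
    # placeholder entity; the blueprint derives this from its inputs
    entity_id: light.example_entity
    # skip any state change that starts from these states
    not_from:
      - unavailable
      - unknown
```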

Hopefully, just eliminating the state change from ‘unavailable’ will solve this problem for you. Whether you or adchevrier implement the change, it would be great if you could update the GitHub issue if you find a fix.

Thanks.

@AJStubbsy @smarthomelawyer
Someone suggested adding the “not_to” for the states “unavailable” and “unknown” to the triggers.
That seems right to me, so that those 2 states do not synchronize.
So I have gone ahead and added those lines to the 2 blueprints that I use.
So far the error has not come up again, but I will leave it there as a testing phase and wait to see if it returns.
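In case it helps others, the “not_to” addition described above would look roughly like this in the trigger (the entity_id is illustrative, not the blueprint’s actual input):

```yaml
trigger:
  - platform: state
    # illustrative entity; the blueprint uses its own !input references
    entity_id: switch.example_entity
    # ignore state changes *to* these values, so they are never synchronized
    not_to:
      - unavailable
      - unknown
```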

Maybe you could try that also. :slight_smile:


I completely changed the trigger so it should only have triggered if the state changed from/to on/off, but I continued to have the error, even though the automation trace suggested the automation had not triggered. Very strange, but thank you for confirming you were able to resolve the issue.

Thanks! I’ll test it out now and see how it goes

Edit: This was not the answer. It worked intermittently, but only because the “unavailable” status of the Z-Wave device cleared. This message comes from the state of the synced device being “unavailable”, which appears, in my case, to happen excessively since the recent update. It’s not the automations or blueprints failing; it’s the device’s lack of availability at the time of triggering.


Line 29 of “/config/blueprints/automation/adchevrier/synchronize-the-on-off-state-of-2-entities.yaml” should read

- service: 'homeassistant.turn_{{ trigger.to_state.state }}'

The blueprint doesn’t have the quotes, and while it worked anyway prior to the update, the update isn’t forgiving the mistake.
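For context, the quoted service line sits in an action block roughly like this (the `target` shown here is illustrative; the actual blueprint derives it from its `!input` references):

```yaml
action:
  - service: 'homeassistant.turn_{{ trigger.to_state.state }}'
    # illustrative target; the blueprint resolves this from its inputs
    target:
      entity_id: switch.example_entity_2
```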

I added the quotes and was back up in seconds (after restart).

Have you restarted HA?

I added the quotes but I still get a Repair notification/error pop up, even though the automation hasn’t triggered.

I thought I was maybe getting a historical error but this time I deleted the old automation, changed the code, as you suggested, created a new automation and changed the name but I’m still getting the error. :man_shrugging:t2:

I’ve released a new version of my original blueprint that now allows the selection of any number of entities from any domain.

Link On/Off State of Multiple Devices v1.0.0

With this blueprint you can now select any combination of lights, switches, or other devices which support the homeassistant.turn_on/homeassistant.turn_off service calls. This also addresses the repair errors on startup.

For whatever it’s worth, here’s how I’m handling the problems you’re seeing above:

- condition: template
  value_template: '{{ trigger.to_state.state != trigger.from_state.state }}'
- condition: template
  value_template: '{{ (trigger.to_state.state == "on") or (trigger.to_state.state == "off") }}'
- condition: template
  value_template: '{{ trigger.to_state.context.parent_id is none or (trigger.to_state.context.id != this.context.id and trigger.to_state.context.parent_id != this.context.id) }}'

The first condition is in the existing blueprint, no surprises there.

The second condition makes sure that the only states we’re dealing with are “on” or “off”. Because I’m allowing entities from any domain, those entities might have any variety of state, or they might be “unknown”, etc. Instead of trying to figure out every possible “bad” state using not_to, let’s just throw out anything that isn’t on or off. Nice and easy.

The third condition is taken straight from @hebus in this post. This condition prevents “self-triggering”: we’re making sure that the action which turned a given entity on or off wasn’t this automation itself. Say I select 5 devices and then turn one off. The automation will pick up that trigger, then turn the other 4 entities off. Doing so will then trigger this same automation 4 more times, as 4 of the entities have now changed state. This condition catches that and discards the duplicate triggers.

OK, I think I get what’s going on here.

So, we are getting the fixit that says “The automation ‘SB_Swt Set: GlryW_LRWscene Sync/Mirror’ (automation.new_automation_3) has an action that calls an unknown service: homeassistant.turn_unavailable.”

“Unavailable” is actually what is being returned to the template. Which means the issue isn’t a problem with the code or the blueprint; it’s that “unavailable”, AS A STATE, is being returned at the time it throws the error. When I looked at my logs for my Z-Wave devices, it became obvious that they were “dead” a lot more since the update. I think the issue isn’t with the automations or the blueprints; I think it’s something about the connectivity, and when the automation tries to do its thing to the device, the device is “unavailable”. That explains why a restart might help or not, and why it’s intermittent.
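To illustrate: the blueprint builds the service name from the trigger’s state, so an “unavailable” state produces a service that doesn’t exist. Roughly:

```yaml
# The blueprint's action template (simplified):
- service: 'homeassistant.turn_{{ trigger.to_state.state }}'
# When trigger.to_state.state is 'on'          -> homeassistant.turn_on  (valid)
# When trigger.to_state.state is 'unavailable' -> homeassistant.turn_unavailable  (unknown service)
```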

My Z-Wave network has been erratic since I updated. I soft reset, hard reset, re-interviewed… it works for a while, then starts throwing unavailable or dead logs.

Anyway, I am intermediate at best with this thing, but I thought I should throw it out there… you may be hunting for a fix that isn’t actually part of the problem.

@Samedarkclouds you are spot-on about what’s happening here, but I’ll add that all devices are “unknown” on startup. As a result you’ll have this firing during startup while devices are being loaded, some of them still “unknown”, and you’ll see the results you’ve been getting. This will be true even in a healthy Z-Wave environment, and for non-Z-Wave stuff too.

In my post above, the 2nd condition is the solution which handles this. tl;dr: ignore any state that isn’t on or off.

I’m getting this “error” also, but my devices are Tasmota Wi-Fi connected.

With this blueprint, everything now works as expected :slight_smile:

Great one, love it. I would also make it possible to expand it to more than just 2, and to sync brightness level and color. That would be a game changer, and not only for lights. :flushed: This is great

Luma,
this should not be needed, and in my case it generates faults: I switch one of the trigger entities but the others do not sync.
In reality, when the automation runs again from the “auto trigger”, it will find the entities already in the new state, so condition 1 will exit with no damage.
Am I wrong?

@luma I have an issue.
If I add the third condition, about 30% of the time the devices do not sync.
If I remove it, on BIG groups like 8 switches I get flip-flops… they continue to switch off and on :frowning:

— update —

this seems to be stable:

condition:
- condition: template
  value_template: '{{ trigger.to_state.state != trigger.from_state.state }}'
- condition: template
  value_template: '{{ trigger.to_state.state != "unknown" }}'
- condition: template
  value_template: '{{ trigger.to_state.state != "unavailable" }}'
- condition: template
  value_template: '{{ trigger.to_state.context.parent_id is none or (trigger.to_state.context.id != this.context.id and trigger.to_state.context.parent_id != this.context.id) }}'

@luma I’ve put back all the conditions as above, BUT sometimes I get an infinite sequence of on/off.

It’s a set of 8 Sonoff switches linked via the automation.
Sometimes they start switching on/off very quickly and I cannot stop it. I have to open Home Assistant, go to the automation’s edit page, and DISABLE the automation.
At that point, after a few secs, everything is stable and I can enable it again.

I cannot see the traces as there are too many in too little time.
Is there any way I can understand why this happens?

M

Yes, a small check that trigger.to_state.state is in ('on', 'off') would fix this.
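As a sketch, that check could be added as a template condition alongside the existing ones:

```yaml
- condition: template
  # only proceed when the new state is a value we can act on
  value_template: "{{ trigger.to_state.state in ['on', 'off'] }}"
```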

I’ve seen this too but I don’t know how to fix it yet. The problem is basically:

  • devices A and B are linked
  • turn A on and off quickly → sends B an on and an off command
  • B gets turned on - tells A to turn on
  • A is already turned off, so when it gets told to turn on it does - it also tells B to turn on
  • meanwhile, B is processing the command to turn off
  • hilarity ensues

The way we used to deal with this in the old days is that an event would come in with a timestamp that says when it was initiated. This is a little different from time_fired (which we do have), since an event that is triggered by another event would inherit the initiated time from the previous event. If a device already processed a later event it would ignore older ones. This means that the state object needs to store the initiated time, which is different from the last_updated and last_changed that we already have. To apply that to our example:

  • devices A and B are linked
  • turn A on and off quickly → sends B an On@t0 and Off@t1
  • B gets the On@t0 and turns itself on - sends A an On@t0
  • A is already turned off by this point with time of t1 (which > t0), so when it gets told to turn on with t0, it ignores it
  • meanwhile, B is processing the Off@t1, turns itself off - sends A an Off@t1
  • A sees t1 = t1 and either ignores it or checks if the states match to be safe, sees that they do, and then ignores it

Without that initiating timestamp in the states though, it’s much harder.

Extra credit: turn A on and B off at the same time.

I’m not sure this will handle all cases, but changing the mode from queued to restart seems to have fixed the problem for me. It helps that the event handling is centralized here, I suppose.
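If anyone wants to try the same thing, it’s a one-line change at the top level of the automation (or at the bottom of the blueprint’s YAML):

```yaml
# With 'restart', a new trigger cancels the currently running action instead of
# queuing behind it, which helps break the on/off feedback loop described above.
mode: restart
```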