I think I’ll completely delete that automation and blueprint and rewrite the lot, using the trigger you’ve suggested, just to be sure there’s not something being stored in memory somewhere.
Thank you for looking into this, I really appreciate it.
So after making the change I suggested, the Repair feature reported the same error after startup.
Go to the automation’s trace (the one that was produced on startup) and check two things:
1. Confirm the code that was executed contained the modified State Triggers (there’s a sketch of what I mean at the end of this post).
2. Check the value of trigger.to_state.state (i.e. is it unavailable?).
If there’s no trace produced for the automation on startup then I don’t know why the Repair feature is complaining (and I feel it would justify reporting it as a bug).
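For reference, this is roughly the shape of the modified State Trigger I mean - only a sketch, with a placeholder entity ID, using not_from so the trigger ignores an entity coming out of ‘unavailable’ or ‘unknown’ at startup: -

trigger:
  - platform: state
    entity_id: binary_sensor.example_sensor   # placeholder - use your own entity
    to:
      - "on"
      - "off"
    # don't fire when the entity is merely recovering after a restart
    not_from:
      - unavailable
      - unknown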
The very last trace for this automation was produced well before my last restart, and it didn’t get to the action because the two entities already had the same state, i.e. it was triggered because the input_boolean state had been changed to match the binary_sensor state.
Once again, thanks for your help. I’ll look at reporting it as a bug but I’ll see if I can stop the behaviour first by starting fresh and then I’ll be in a better position to see if it’s more related to the actual blueprint/automation or a quirk in my system.
If you create an issue on GitHub, can you post it here so people can subscribe?
I’m leaning toward “{{ trigger.to_state.state }}” not being defined because it’s not actually being triggered when it runs its check. I’m doubtful that any changes to the trigger will make a difference.
@cwhits @nullex Here’s the issue I have raised regarding the unexpected behaviour.
I can confirm that rewriting the automation in the UI, rather than using the blueprint, does stop the behaviour, but I appreciate that negates the point of having blueprints in the first place; I just wanted to check.
This was not the answer. If it worked intermittently, that’s only because the ‘unavailable’ status of the Z-Wave device cleared. This message comes from the state of the synced device being ‘unavailable’, which in my case appears to happen excessively since the recent update. It’s not the automation or blueprints failing, it’s the device’s lack of availability at the time of triggering. The Repair item is likely raised because the device’s unavailable status was in play at startup. I really think the update destabilized Z-Wave, or at least zwave2mqtt, in a notable way. I am getting far more dead or unavailable devices and it’s making it seem like automations are broken.
Have you tried changing the trigger for your automation, or including a condition, so it doesn’t trigger on a state change including ‘unavailable/unknown’, as Taras suggested earlier in the thread? My particular issue is that I’m getting the error when the automation has not triggered. In your case, if you have received the error once your automation has triggered, then that is the behaviour you should expect.
I’m experiencing the same error, also using an ESPhome device as well as an Alexa virtual switch. I tried a new version of the blueprint (different author - Link On/Off State of Multiple Devices) and that doesn’t work at all, so I’m just going to put up with the error since it doesn’t affect the operation of the blueprint.
I have found a solution; it’s not fancy, but it is working.
Two automations with a Home Assistant trigger: one turns the problem automation off when Home Assistant is shutting down, and the other turns it back on when Home Assistant has started.
For now it is working - no more repair warning.
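Something like this (just a sketch - automation.sync_example is a placeholder for your own sync automation, and the delay after startup is optional): -

- alias: Disable sync automation on shutdown
  trigger:
    - platform: homeassistant
      event: shutdown
  action:
    - service: automation.turn_off
      target:
        entity_id: automation.sync_example   # placeholder: the problem automation

- alias: Enable sync automation after startup
  trigger:
    - platform: homeassistant
      event: start
  action:
    - delay: "00:00:30"   # optional: give devices time to become available
    - service: automation.turn_on
      target:
        entity_id: automation.sync_example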
I have not changed these in months, but quite recently (some weeks ago?) HA started to complain about them after a restart.
I noticed this because suddenly these two lamps (and only these two) were turned on after a restart of HassIO (e.g. after an update), and this hasn’t happened before.
As @AJStubbsy suggested, it is as if the automation is being “verified” during startup, and the to_state.state is then “unavailable” - and is therefore reported as an error - without the automation being triggered (in my case, the automation is only triggered by a light switch or a remote control).
After startup the automations seem to work OK - so there is probably nothing wrong with the automation or action, but something has changed/been added that checks/verifies the automations during startup and reports any errors found.
I will try the suggested fix by @gepetto to turn off these two automations during shutdown and then on again with a short delay after the system is up and running again.
Have you checked the automation trace to verify that the automation has not triggered? If yes, there’s an open issue on GitHub that I posted above; it could help to add your experience to that.
If you do discover that the automation is triggering, these conditions that I’ve seen posted might help resolve your problem: -
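(What follows is only a sketch of the kind of condition I mean, not taken verbatim from the blueprint.)

condition:
  - condition: template
    # only continue when the triggering entity reported a real state
    value_template: >
      {{ trigger.to_state.state not in ['unavailable', 'unknown'] }}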
How to check the automation trace?
At least I can see that these two automations have not been triggered (last time was yesterday for one of them, and two hours ago for the other one) when I check them in the list of all automations - but the system was restarted just a minute ago and the error was again reported (other automations that are triggered by the restart are updated in the list, but the two “failing” automations are neither triggered nor updated).
I can update the issue on GitHub with my findings as well.
Edit: I remember that I have recently installed the Watchman integration and thought that it might cause the problem - but disabling it didn’t help - the issue is still reported at restart.
You are experiencing exactly the same problem I was (I’m not imagining it!!). I was also checking the last-triggered time from the automation list, but the trace is a great way to check variables and conditions through your automations. You can access it from the three-dots menu next to the automation: -
‘Traces’ stores information from the last few runs of an automation and is a great way to debug it. You can use the up and down arrows, next to a trace, to step from the Trigger, through the Conditions and finally the Actions, to see the flow of an automation. Obviously, you won’t see a trace for an automation that has not triggered, and so this was a means for me to be confident that my automation had not triggered and is, as you put it, being “verified” during startup, with the to_state.state then “unavailable” - and therefore reported as an error - without the automation being triggered.
Thank you for taking the time to include your experiences. Maybe someone will read this and/or the github issue and spot a solution. ATB
For anyone stumbling across this post. I’ve closed the github issue now. It seems that for this particular error, the automation does not log the fact that it has been triggered but instead only raises the error. Personally, I’d prefer the automation is shown as triggered because then you can see that the ‘unavailable’ state has been passed through to the action.
The ‘unavailable’ state is passed through if the desired trigger states are in single quotes: -
to:
  - 'off'
  - 'on'
but changing to double quotes seems to fix the problem: -
to:
  - "off"
  - "on"
This should stop the error from occurring.
Thanks to those who helped to find a solution and explain the behaviour.
So I have the exact same problem I’ve been trying to fix for a while now, using the ‘sync states’ blueprint. Obviously, when HA restarts, the automation tries to run before everything is in a ready state, which causes this error. Is there a way to delay this blueprint from running at startup?
I see the post above on how to “fix it”, but I can’t find anything in the blueprint that has an “on” or “off” in it. Can someone post a corrected blueprint that fixes this issue? I’m a bit lost when it gets this deep into the woods.
One of the devices is an ESPhome device. I see someone else above is also using ESPhome and getting the error, so maybe related.