After 30 seconds, the states of the entities that got messed up during start-up should have ‘stabilised’, and when you activate the automations again they will no longer trigger unwanted runs (because an automation only fires at the moment of a state change). You can play with the 30 seconds of course if you need it to be longer or if it can be shorter. Also, you could specify the exact automations you want to exclude…
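A minimal sketch of that idea, with placeholder automation ids; note that the start-up automation must not turn itself off, because automation.turn_off also stops any actions that are still running:

- alias: "Delay automations at startup"
  trigger:
    - platform: homeassistant
      event: start
  action:
    # turn off the automations that misbehave at start-up (placeholder ids)
    - service: automation.turn_off
      entity_id:
        - automation.notify_device_offline
        - automation.motion_lights
    # give the entity states time to stabilise
    - delay: "00:00:30"
    - service: automation.turn_on
      entity_id:
        - automation.notify_device_offline
        - automation.motion_lights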
Hey - thanks for the suggestion @liquidox. This looks like it should solve my problem (the same one others have described). But in practice, I am finding that Home Assistant loads this automation quite a bit later than the ones I want to inhibit at restart, so the problem remains unsolved.
I can’t find anything about how to determine which automations load first. Looking at the logbook, it doesn’t appear to be alphabetical (either by automation name or by YAML file name).
Any thoughts? Any way to force the delayautomationrestart automation to load before all the others?
And I agree - this should be the default. Don’t allow automations to fire until all the platforms have loaded / settled / whatever. There is too much unpredictable behaviour as it is.
Personally I have “fixed” it by moving all my automations over to AppDaemon. That is obviously not a solution for most people, but it worked out for me, and now I have all the power of AppDaemon, which is amazing.
I only have one automation that goes nuts when restarting Home Assistant: an automation that sends me a notification whenever certain devices in my network go offline or come back online. Whenever I restart Home Assistant, that automation sends me a notification for every device stating it came back online (and sometimes even that it had gone offline first). So with six devices being monitored by that automation, I get either 6 or 12 notifications on my phone.
I work around that by checking how long Home Assistant has been up and running: if it’s less than 5 minutes, then don’t trigger the automation. For this, I use a sensor:
- platform: uptime
  name: "HA runtime in minutes"
  unit_of_measurement: minutes
and then the following condition in said automation:
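Presumably a numeric_state check along these lines, with the entity id derived from the sensor name above:

condition:
  - condition: numeric_state
    entity_id: sensor.ha_runtime_in_minutes
    above: 5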
My problem isn’t quite the same, but maybe you have figured this out while working on the problem in this topic. So I’ll borrow the topic a bit - sorry about that.
I have an automation that sends me a notification (iOS) if a device becomes unavailable. This is mainly to monitor whether any of my many wireless Zigbee sensors run out of battery. Now every time I restart HA, I get a zillion notifications, because apparently HA considers the sensors unavailable right before it shuts itself down. I tested this: it happens at shutdown, not at start.
I doubt there is a way (or at least one quick enough) to, for example, turn the automation off when HA receives a restart command, and then set its initial_state to ‘on’ so that it comes back on after the start?
My suggestion would be to create a script that first disables the notification automation and shuts down Home Assistant afterwards. Do bear in mind that it needs to be a script, as a script can run consecutive actions.
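A minimal sketch of such a script, assuming the automation is called automation.notify_device_offline (a placeholder id):

script:
  safe_shutdown:
    alias: "Safe shutdown"
    sequence:
      # first disable the noisy automation so it cannot fire during shutdown
      - service: automation.turn_off
        entity_id: automation.notify_device_offline
      # then stop Home Assistant (homeassistant.restart works the same way)
      - service: homeassistant.stop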
If the problem is that you get a lot of reminders, then you can put a timer in place and use it as a condition (do not fire the notification if the previous notification was sent less than 5 minutes ago); see the sketch below.
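A rough sketch of that cooldown idea, assuming a timer helper named timer.notification_cooldown and a notify service notify.mobile_app_phone (both placeholders):

# configuration.yaml: the cooldown timer helper
timer:
  notification_cooldown:
    duration: "00:05:00"

# in the notification automation:
condition:
  # only notify when the cooldown timer is not running
  - condition: state
    entity_id: timer.notification_cooldown
    state: "idle"
action:
  - service: notify.mobile_app_phone
    data:
      message: "Device went offline"
  # (re)start the cooldown so the next notification is suppressed for 5 minutes
  - service: timer.start
    entity_id: timer.notification_cooldown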
The main drawback of the script approach is that you always need to shut down via that script.
Thanks! Actually this was so stupidly simple that I almost feel embarrassed: I just added for: 00:04:00 to the original automation. This solves the problem completely. Something below 4 minutes would probably also be more than fine, but this works for my purpose.
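For reference, a state trigger with for: looks roughly like this (the entity id is a placeholder):

trigger:
  - platform: state
    entity_id: binary_sensor.zigbee_door_sensor
    to: "unavailable"
    # only fire if the device stays unavailable for 4 minutes,
    # which outlasts the brief unavailability around a restart
    for: "00:04:00"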
I’m running into this issue as well. The automation triggers immediately, including as soon as it is turned on when using initial_state: false.
Hi all, sorry for digging this up, but I tried to use this solution and I get errors like this: In ‘numeric_state’: In ‘numeric_state’ condition: entity sensor.uptime state ‘2022-07-18T10:25:40+00:00’ cannot be processed as a number
The sensor was set up as described. Anyone else having this problem?
The uptime sensor back then reported the number of minutes the server had been alive. The uptime sensor now is a timestamp of when the server started, so his post won’t work with the current sensor.
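An equivalent check with the current timestamp sensor could be a template condition along these lines (an untested sketch):

condition:
  # true once Home Assistant has been up for more than 5 minutes
  - condition: template
    value_template: >
      {{ (now() - (states('sensor.uptime') | as_datetime)).total_seconds() > 300 }}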