Avoid the use of long-running timers. Restoring timers is something that (in my opinion) should not be done, because: what if a timer ends while the system is offline? How long was it offline? Does the timer continue where it was, or does it take the offline time into account?
There are many variables that are hard to resolve, or at least hard to resolve in a way that pleases everybody.
So, how to solve this… right?
A better solution is to use a date and time helper (input_datetime in YAML) when a “longer-running timer” is needed.
The beauty of that helper is that you can set a date+time in it and then automate with that. For example, this will trigger a persistent notification when the set date+time is reached:
automation:
  - alias: Trigger when next is reached
    trigger:
      - platform: time
        at: input_datetime.next
    action:
      - service: persistent_notification.create
        data:
          message: "Time's up!"
To start this “timer” programmatically, call the input_datetime.set_datetime service to set the next trigger time on the input_datetime.next entity.
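As a minimal sketch of that (the 30-minute offset and the script name are my own example, and I'm assuming input_datetime.next was created with both date and time enabled), a script could set the helper to a point in the future via the service's timestamp field:

```yaml
script:
  start_next_timer:
    sequence:
      # Set input_datetime.next to 30 minutes from now;
      # the time trigger above then fires at that moment.
      - service: input_datetime.set_datetime
        target:
          entity_id: input_datetime.next
        data:
          timestamp: "{{ now().timestamp() + 30 * 60 }}"
```

Because the target time is stored as state, it survives restarts for free, which is the whole point of preferring this over a long-running timer.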
After managing a fleet of a couple hundred iOS devices over the last few years, I can tell you we had fewer dramas from users running Chrome than we did from Safari, regardless of the underlying code.
I am getting the same issue, and reported it (and see the reports) in the Jandy iAqualink pages of the HA community. It worked perfectly before the upgrade to 2021.6. What is dismaying is that I restored a snapshot of 2021.5.4 and still it is broken. I tried removing then reinstalling the integration, but it won’t let me do that.
An active timer, interrupted during its countdown by restarting Home Assistant, is restored when Home Assistant starts. It is restored with a new duration representing the remaining time left of its original duration minus however long Home Assistant was offline. If the timer’s duration expired while Home Assistant was offline, the timer is not restored on startup.
NOTE
Timers, and restoring them, should be used judiciously. I agree with frenck that they’re not a panacea, especially not for long intervals when better techniques are available. FWIW, I don’t use any long-duration timers.
But the answer was incomplete (it’s better to use maintained software than to patch your own non-maintained local version…unless you want to of course) and I didn’t know if you knew there was an alternative. So I pointed it out.
I did try adding a version number with no luck and had to roll back HA, but looking back I realized that I may have missed a comma. I’ll give it another try this weekend when I have some free time.
You mean major versions. Minor versions/patches come more often than in the past (subjective feeling).
I’m the kind of guy who likes to keep software up to date. But since January, I’m really struggling to choose a reasonably stable version to jump to. May’s version, advertised as being about “stability…”, seemed to be full of issues, making the update less stable/reliable. This release obviously contains “a little bit of bugs” too.
I mean serious issues like the one reported by Kenneth. I also saw a serious issue with Node-RED.
Also, I read there are a magnitude of issues with custom integrations that core development doesn’t care about. Haven’t you guys figured out yet that much of HA’s functionality and popularity comes from third-party add-ons? Those should be taken into consideration when releasing a new version. Saying to the user “we disabled part of your home automation, go find the original author to fix this” doesn’t sound friendly.
I admit that bugs are an integral part of development. But the number exposed to regular users should be far, far lower. I’m just wondering whether there is someone up there who evaluates HA development quality and steers the developers somehow, or whether it’s just a wild, uncontrolled approach.
BTW, being buggy is not inherent to free open-source projects. A lot of such projects are run in a way that makes them rock stable. Are we heading there, or does nobody care?
It would help, though, if there were more beta testers during the beta cycle. It’s the same handful of people every month, and it’s hard to catch everything with limited resources.
Regarding custom integrations: the changes in this release were announced back in January in a post on the developers blog, and for the last three months users’ systems have shown warnings that this would become a requirement in a future release and that they should let the custom integration developers know.
Maybe third-party add-ons should be included in internal testing and then marked as not working for a new HA version? I know it’s again more work for the core devs, but it would help users avoid breaking their installations unknowingly.
I would ask another question. In the past (before the new versioning pattern), the most recent patch version could be treated as the most stable. Does the same apply to today’s development? That is, does the most recent May version contain fixes for all the issues that release introduced?
The problem with that is that most integrations require hardware, and beta testers won’t know about breakage unless they have that hardware. I am not sure how it could be done correctly with so many integration/hardware combinations.
Though some responsibility also falls on custom integration devs to maintain their integrations, and on the people who use them. If you see a warning in your logs telling you to let the custom integration dev know, don’t ignore it.