Hey guys. I have been running ESPHome for some time now and I love it. I find updating entities tedious, however (now that I have multiple devices it feels like a chore). I usually use the update-all button, but is there any official way to auto-update all entities? A YAML config perhaps?
I found this, but I am not sure if this is even it, or how to set it up properly.
Auto-update entities? What do you mean? Are you speaking about ESPHome firmware? If so, it's completely useless to update ESPs that are working fine; you just risk destroying some of them, as their flash doesn't have unlimited rewrite cycles.
Hmm, never thought about that. Thank you. I'll look into how much of an impact this is in practice.
I have a few reasons though:
I feel like if I keep updating, the YAML stays up to date in small increments. I flashed a new ESP recently and multiple things were different (like the recent changes to OTA).
I like to keep the YAML configs more or less similar between devices. If I didn't update, I would already have multiple diverging YAML configs and would get lost maintaining different layouts.
Also, HA is pushing updates all the time. It defeats the purpose if I just start skipping them.
I see your “don’t fix if not broken” point though.
Provided you use the add-on, you will have update entities. You can check their states to see if there's an update available for them. I would use either labels or a group to mark the ones you want to include in the call, to avoid installing updates you did not intend to.
My guess, though, is that the update.install service will check that as well, so you don't need to.
If you do decide to continue: you're not grouping the ESP devices themselves, but their update entities.
But the simplest is just to list them in the service call, like in the example in the docs. Many roads lead to Rome…
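Something along these lines, as an untested sketch; the entity IDs below are placeholders for whatever ESPHome update entities you actually have:

```yaml
# Sketch: a script that calls update.install on an explicit list of
# ESPHome update entities. Replace the entity IDs with your own.
script:
  update_selected_esphome:
    alias: "Update selected ESPHome devices"
    sequence:
      - service: update.install
        target:
          entity_id:
            - update.living_room_sensor_firmware
            - update.garage_controller_firmware
```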
Good discussion. I never thought about the limited number of lifetime writes. I wonder what it would take to reach that limit.
I already skip over the .0 versions, and generally only update no more than once a month. I wouldn’t want to push the update to all my devices, since I want to test each one. If I had more I’d probably only test one of each type, and assume any similar devices would be OK.
But I also wouldn’t want any of my devices to be left behind for too long. That seems like a setup for a future headache when (not if) I miss the significance of some change. And it just seems wrong to have the YAML source code not match the installed firmware.
Back to the original question though, I do my ESPHome updates from the command line on my laptop. This avoids the load on HA and, if I wanted to, I could create a batch job or script to update them all, or in groups.
One final word: Cloudflare. Do you really want to batch-update everything at once?
I do not decide that programmatically.
I read the release notes and decide if only some, all or none should be updated.
My flows just make the "none" and "all" options easy to do.
I did run TrueNAS on an SD card once. It lasted about two years before it died. But TrueNAS was reading from and writing to the card constantly (on the order of every second). Not sure how resilient the ESP flash is.
That's interesting, thank you!
What do you mean by Cloudflare? Is there a limit? I actually have a FQDN via Cloudflare for my HA. I push the updates via OTA though.
Gotcha. This seems like a good idea, might fool around some with that.
Also, I might stop updating all devices all the time if there are no pending updates showing in HA. That way I could just manually revisit and update if needed once every few months. Thanks!
The numbers for this are supposedly in the 100,000-cycle minimum range. Even a weekly update gets you over 1,800 years (100,000 ÷ 52 ≈ 1,900). This is likely not an issue.
I would not update weekly. I also would not update unless some added function is needed. I was once told: "We make updates to fix problems, but they always affect other areas and may cause different problems."
Be careful with the numbers here.
The flash memory used on many of these boards states an endurance figure only for the first sector of the flash, and that figure comes with a huge spread.
Since only a single sector needs to fail for a flash operation to become impossible, the effective endurance of the entire flash drops dramatically, and ESP chips do not have wear levelling or bad-block correction like many other flash products, so a single failed sector can be a fatal one.
The lower end is often said to be somewhere around 1,000 erase operations, still with a lot of spread, so some might die after only 100 writes and others only after maybe 2,000 writes.
And because there is always a chance that one of your devices might die on the next flash, it is, in some people's opinion, an unnecessary chance to take.
If it is just a standard ESP board, then it is easily replaceable, but if it is a specific vendor device, like a smart plug or a water sensor, then it might not be possible to reacquire, because it has gone out of production or perhaps later models use non-ESP chips.
If you do decide to update, having to do many, often identical, devices one by one is tedious.
So creating an automation to update several similar devices in one go is useful, as long as you do not treat it as something to run for every single ESPHome update that comes along.
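As a rough, untested sketch of such an automation: it installs whatever is pending for a chosen set of devices overnight, but only when you have armed it explicitly. The label and the input_boolean are placeholders you would create yourself, and targeting by label assumes a reasonably recent HA version.

```yaml
# Sketch of a batch-update automation. "esphome_batch" is a placeholder
# label applied to the update entities you want included, and the
# input_boolean is a manual "arm" switch you would create yourself.
automation:
  - alias: "Install pending ESPHome updates overnight"
    trigger:
      - platform: time
        at: "03:00:00"          # run while nothing depends on the devices
    condition:
      - condition: state        # only when explicitly armed
        entity_id: input_boolean.esphome_batch_update_armed
        state: "on"
    action:
      - service: update.install
        target:
          label_id: esphome_batch
```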
Same goes for skipping. I have two Zigbee devices that report an update every couple of days. I ignore them every time because, due to a versioning bug, it is the firmware I already have and it refuses to update. But it keeps popping up. Automating the skip is a blessing, because the recurring notification hurts my OCD.
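For that case, something like this untested sketch does the trick; the entity ID is a placeholder for the affected Zigbee update entity:

```yaml
# Sketch: auto-skip a recurring bogus update so it stops showing up.
automation:
  - alias: "Skip recurring bogus Zigbee firmware update"
    trigger:
      - platform: state
        entity_id: update.zigbee_plug_firmware   # placeholder entity ID
        to: "on"
    action:
      - service: update.skip
        target:
          entity_id: update.zigbee_plug_firmware
```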
EDIT: Yes, I meant CrowdStrike. Sorry for the confusion!
Sorry, that was a (somewhat snarky) reference to the massive problems caused by IT departments all over the world which let Cloudstrike auto-update their Windows devices without in-house testing and verification first.
I’m a bit old-school, having worked in environments where such a thing would never have been considered, much less allowed. In my career, updates went through rigorous testing, and not all of them were applied. It boggles my mind how anyone supporting critical infrastructure allowed those auto-updates.
The analogy here is updating our ESPHome (and HA) devices without a rigorous research and testing process. To me, HA is sort of "critical infrastructure." Maybe not as critical as airline or hospital surgery scheduling, but I still wouldn't let someone else decide what and when to update.
I guess it is CrowdStrike he is referring to, and it was not a Windows update that caused the outage, but a CrowdStrike update of their malware-protection software.
It was only the Windows version of the CrowdStrike software that had the bug, and Microsoft stepped in to try to help.
The heavy reliance on a single software package across many critical digital services has since been discussed, but CrowdStrike is currently delivering the best service of its kind, so if society should be less reliant on a single software package, then who are the ones that have to settle for an inferior product?
I guess for me the issue wasn’t the dependency on this one service, but the way IT departments all over the world (including one where people I know work) allowed the vendor to make an automatic update - to all machines at once - without adequate testing. And it sort of ties in to the idea of updating all of our ESPHome devices at once. Hopefully with less severe consequences, but still.
Well, with CrowdStrike it might be beneficial to apply the updates right away and take the chance.
If the updates fail, then you have some machines to reinstall.
If the update is delayed and the security then fails, you might have a security breach of unimaginable dimensions.