Did it work to restore with 2023.01 or did you have to go back to 2022.12.08?
Same for me as well - I have been trying to hold out for another update to potentially fix it, but my whole Z-Wave network is so delayed it might as well be down. I'm also seeing 10-15 seconds now to load the Z-Wave JS UI logs in the add-on, where it was almost instantaneous previously, running HA Blue.
Looks like I’m going to have to go the restore route.
All new integrations start with a single platform; right now only the camera platform is implemented.
I had to go back to 2022.12.08.
I have zero errors/warnings showing in 2022.12.08.
Yes, not implemented currently. Will be added in a future release, see the description of this PR: https://github.com/home-assistant/core/pull/84081
Just tried to update to 2023.1.0 and now HA won't start.
pi@PiHome:/etc/systemd/system $ sudo systemctl status [email protected]
● [email protected] - Home Assistant
Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Thu 2023-01-05 18:20:44 GMT; 3s ago
Process: 26951 ExecStart=/srv/homeassistant/bin/hass -c /home/homeassistant/.homeassistant (code=exited, status=1/FAILURE)
Main PID: 26951 (code=exited, status=1/FAILURE)
CPU: 2.321s
Searches online return nothing related to a status=1 failure with Home Assistant. Downgrading to 2022.12.8, which is what I was on before, and it starts fine.
Edit: Running on an RPi4, Bullseye & Python 3.10 in a Virtual Environment
Very cool to see the new PurpleAir integration!
I don’t see sensors for Air Quality Index (AQI) and AQI risk level (e.g. Moderate, Unhealthy), which are the primary data points I consume using my REST-fed template sensors today. I think I am not alone?
Yes, we could keep doing the template sensors for AQI, but it seems obvious to just have them built into the integration. Should I file a feature request?
Hi, same issue for me. Supervised install on raspi 4
Nothing in the logs, but there is latency from time to time. It can take 5-6 seconds to turn on a light. Commands seem to stay in a queue, so if I push a button 3 times, it can turn on, off, and on again 4-5 seconds after the first push.
I have the issue with both Zigbee and KNX devices.
I checked, and processor load looks high (40-50%) all the time.
Not many add-ons, and they are all below 1% of CPU usage except Node-RED, which is at 10%.
I am trying to figure it out, but cannot find a reason.
The speedtest integration creates 3 entities.
Does the linked automation need to run on every one of the included entities to get them to update, or does calling the update_entity service on one of them trigger the integration to update all of them?
Do you happen to know the answer or do I need to ask CentralCommand in the linked thread?
Right now I have the integration running every 4 hours and would like to keep it that way.
Same for me. Supervised install on raspi 4. The logs show tons of warnings about integrations taking a long time to start. Tons of lag. It took forever to bring up the backups page but I was able to restore to 2022.12.7. Everything is fine again…
I assume that is a custom integration. Ask the author on their GitHub.
It should update all 3 in this case.
That being said, that is not a one-size-fits-all answer; each integration handles how it updates entities differently. For example, calling update_entity on a template sensor does not cause the template integration to update every template entity; just that one is recalculated.
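For the speedtest case above, a minimal sketch of the check looks like this. It uses the standard homeassistant.update_entity service; the entity ID is an assumption for illustration, so substitute one of your own speedtest entities:

```yaml
# Call from Developer Tools → Services, or use as an automation action.
# Because the speedtest integration makes one API call and sets all of
# its entities from that response, targeting a single entity should
# refresh all three.
service: homeassistant.update_entity
target:
  entity_id: sensor.speedtest_download  # hypothetical entity ID
```

If the other two entities change state at the same moment in history after this call, their updates are linked, as described below.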
I noticed that too and agree. AQI should be included. There are multiple standards as to how that is defined which will need to be dealt with.
Device integrations aren’t allowed to generate entities from data that doesn’t come directly from the device. In PurpleAir’s case, the API only returns raw particulate values.
That said, there’s a way to get what you want; check out my post here: PurpleAir Air Quality Sensor - #189 by bachya
Thanks for the info.
That definitely seems to make this more complicated. It seems we now need to dig into each integration's source code and figure out how it handles updates?
Or do you know of a better way to know what is needed to be done?
I mean tbh the easiest way is to simply try it. Use dev tools and call update entity on one of them and see if the others update or not.
You can also generally see how integrations work in this regard from history. If a clump of entities from a single integration all change state at the exact same time or not at all in history then their update mechanism is linked. The service is almost certainly making one API call and setting the state of those entities from the response.
But if it is critical that you know exactly what updates when for a particular integration then yes, looking at the source code is most likely the only option right now. The docs are open source with an edit button at the bottom though so feel free to add this info if you think people will find it useful.
I would say as a user you should not be concerned about which entities get updated alongside others. Just use the service call provided and create your own scan interval using the time_pattern trigger. That will literally do the same thing as scan_interval; what's nice about this is that you as a user get more control over how often things can update, which can indeed vary from user to user. Me, for example: I disabled Speedtest's hourly updates and I only update 4 times a day, and only when nobody is streaming via Plex and the downloader is idle; otherwise I don't need it hogging up the bandwidth every hour. To me that's the beauty of this update feature: better control of when updates happen, to my own preferences.
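As a sketch, an automation along those lines might look like the following, assuming polling has been disabled in the integration's system options. The Plex and downloader condition entities are made-up names purely for illustration; swap in whatever sensors you actually have:

```yaml
automation:
  - alias: "Refresh speedtest 4x a day when network is idle"
    trigger:
      # time_pattern acts as a do-it-yourself scan interval
      - platform: time_pattern
        hours: "/6"
    condition:
      # Hypothetical entities: only run when nobody is streaming
      # and the downloader is idle
      - condition: state
        entity_id: sensor.plex_watching
        state: "0"
      - condition: state
        entity_id: sensor.downloader_status
        state: "idle"
    action:
      - service: homeassistant.update_entity
        target:
          entity_id: sensor.speedtest_download  # hypothetical entity ID
```

If the trigger fires while a condition fails, that run is simply skipped; the next refresh happens at the following 6-hour mark that passes the conditions.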
@zapp7 @Frederic_ESH @WoolCardigan @NateMathisen
I made a GitHub issue. If you have additional logs or comments, please share them in a comment there.
Right but you could already do that the way that it was before.
Now, instead of just changing a simple entry in an integration's configuration options for how often you want it to update, you have to completely disable polling on the integration and create an automation to do it, even if you literally only want to change how often it updates (based on time only) and have no other specialized requirements. That's on top of investigating how the current polling is done (by entity or by integration) so you know what the automation to update things entails.
and if you do have those special requirements then you already had the ability to do that even before the latest update.
The change just seemed to make things harder with no clear benefit.
As far as not being concerned about which entities get updated alongside others…
then how will you know which entities to update?
Should we just assume that all of them need forced updates and enter all of the entities into the automation?
That sounds like a horrible misuse of resources.