7.1 broke the energy panel; I can't see history.
7.2: the GUI loads in the app, but the web GUI is unusably slow to load.
radio-browser seems to be broken. It was stuttering on release 2024.7.0, then got worse with 7.1 and started throwing an error about quick_play. Now the log fires the quick_play error on every call. The DLNA server works, but none of my radio-browser buttons work any longer. I tried to roll back to 6.4, but lost too much history, so I'm moving back to 7.1 as soon as the tables are rebuilt. I will post a sample log after.
Logger: homeassistant.components.websocket_api.http.connection
Source: components/websocket_api/commands.py:241
integration: Home Assistant WebSocket API (documentation, issues)
First occurred: 6:03:57 PM (2 occurrences)
Last logged: 6:04:00 PM
[547716138176] Unexpected exception
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/cast/media_player.py", line 100, in wrapper
    return_value = func(self, *args, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/cast/media_player.py", line 508, in _quick_play
    quick_play(self._get_chromecast(), app_name, data)
  File "/usr/local/lib/python3.12/site-packages/pychromecast/quick_play.py", line 97, in quick_play
    controller.quick_play(**data, timeout=timeout)
  File "/usr/local/lib/python3.12/site-packages/pychromecast/controllers/media.py", line 559, in quick_play
    response_handler.wait_response()
  File "/usr/local/lib/python3.12/site-packages/pychromecast/response_handler.py", line 54, in wait_response
    raise RequestTimeout(self._request, self._timeout)
pychromecast.error.RequestTimeout: Execution of quick play http://playerservices.streamtheworld.com/api/livestream-redirect/KKJZFMAAC.aac timed out after 30.0 s.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/websocket_api/commands.py", line 241, in handle_call_service
    response = await hass.services.async_call(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/core.py", line 2731, in async_call
    response_data = await coro
                    ^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/core.py", line 2774, in _execute_service
    return await target(service_call)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 999, in entity_service_call
    single_response = await _handle_entity_call(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 1071, in _handle_entity_call
    result = await task
             ^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/cast/media_player.py", line 777, in async_play_media
    await self.hass.async_add_executor_job(
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/cast/media_player.py", line 102, in wrapper
    raise HomeAssistantError(
homeassistant.exceptions.HomeAssistantError: CastMediaPlayerEntity._quick_play Failed: Execution of quick play http://playerservices.streamtheworld.com/api/livestream-redirect/KKJZFMAAC.aac timed out after 30.0 s.
It's starting to look like something between radio-browser and the cast integration… I can listen to radio-browser from the media navigation item in the browser, but as soon as I change the cast target to my AVR it errors and states “nothing playing.”
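For anyone trying to narrow this down before posting logs, here is a minimal sketch for pulling the affected stream URLs and timeout values out of a home-assistant.log dump. The regex is an assumption based only on the RequestTimeout message shown in the traceback above:

```python
import re

# Matches the pychromecast RequestTimeout message seen in the log excerpt above;
# the exact wording is assumed from that one sample.
TIMEOUT_RE = re.compile(
    r"Execution of quick play (?P<url>\S+) timed out after (?P<secs>[\d.]+) s"
)

def find_quick_play_timeouts(log_text: str) -> list[tuple[str, float]]:
    """Return (stream_url, timeout_seconds) for every quick_play timeout in the log."""
    return [
        (m.group("url"), float(m.group("secs")))
        for m in TIMEOUT_RE.finditer(log_text)
    ]

# Sample line taken from the traceback above.
sample = (
    "pychromecast.error.RequestTimeout: Execution of quick play "
    "http://playerservices.streamtheworld.com/api/livestream-redirect/KKJZFMAAC.aac "
    "timed out after 30.0 s."
)
print(find_quick_play_timeouts(sample))
```

If every failing entry points at the same stream host, that would suggest the stream redirect (rather than the cast device) is the slow side.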
So happy to see home area in the UI.
It will be even better when we can draw a box for the boundaries. In the US, anyway, our properties are usually a rectangle or at least a polygon. In my case it is 4 times as long on two sides as it is on the other two. I enjoy walking around and …
Anyway - July is the best update in a long while, in my opinion. Great stuff.
Now, apart from the issues I have had with the recorder since 2024.7, I am also having issues with my Places integration since 2024.7.2.
This is getting really dramatic if you ask me… I have never had big issues like this in years and now multiple things are breaking down with this latest release (including HA stuff itself like the recorder…)
Both those issues are related. If you’d read the rest of the thread, you would have known that places is one of three third party integrations which caused the recorder issue. Let me make it easy for you: this thread explains everything, including a post by frenck where he submitted a fix for places which is waiting to be released.
To be fair, I do also share the opinion that this release has been particularly problematic - qBittorrent has an issue open (opened by me and now confirmed by another), for example. I'm trying each new version (I'm doing my part!) of course, but I'm not yet willing to move to .7, so I revert once I've tested to my satisfaction.
Glad to see the recorder issue has been moved up to this cycle and not just for .8.
It does seem that way. I have yet to try it, as I'm very wary of doing so. HA is at the core of managing my home. I hope the developers are learning from this and are putting processes in place to avoid a repeat. It's hard to imagine how this was tested sufficiently.
Did it work ok for you in beta?
In my old programming days, beta testing was done in house. We’d have various setups representing likely configurations. We never asked our clients to do beta testing for us. My HA system is very much a live system. If I were to do some beta testing on my live system, I could suffer from it, potentially costing me money and serious inconvenience. I’m afraid I’m not prepared to risk that.
Ok, sorry to suggest you might be inconvenienced by development, or by helping. Silly me.
Yadiyadiyadiya! Comments like yours are about as useful as an ashtray on a motorbike! I have made clear why being a beta tester is, in my opinion and for my situation, not appropriate. A bit of understanding of my situation, which I imagine is far from unique, might actually help to find a better way of testing releases.
Remember many of the developers are full time paid employees. The number of employees seems to have increased recently. Maybe having a dedicated test team would be a way forward.
If HA is not to be relied on as a production-level product, we should be told so we can go and find a more suitable alternative.
Play nice please.
One tester should be sufficient for a start. Setting up a representative test lab at that person's home would be the real challenge. At a minimum it should cover the cross product of all supported installation types on the most-used hardware platforms, each
- using the most popular integrations + hardware required to actually use the integrations
- and for the sake of it also including HACS and its most popular custom integrations (it's common in SQA to test unsupported but heavily used configurations). You may not support them, but you definitely want to know if things break, to prepare for the upcoming support effort (people WILL come up with that, and telling them "no" is support effort as well; see this thread)
Designing proper automated and manual test cases and keeping them fast and stable to execute is also a big challenge whenever real hardware is involved.
Test effectiveness (finding real-world issues that would affect real-world use cases if not fixed before release) would, however, drastically improve if you had one person per configuration (by that I mean installation type, on hardware platform, on typically mutually exclusive integrations [zigbee2mqtt vs. ZHA, for example]) and had them use the development branch (beta versions during release phases) as a daily driver. If you go for one person, that person should run at least one of these configurations as a daily driver to uncover this class of bugs.
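To put a rough number on that cross product, a back-of-the-envelope sketch (the lists below are illustrative placeholders, not an actual inventory of supported configurations):

```python
from itertools import product

# Illustrative placeholders only; not the real Home Assistant support matrix.
install_types = ["HA OS", "Container", "Core", "Supervised"]
platforms = ["Raspberry Pi 4", "x86-64", "ODROID"]
radio_stacks = ["zigbee2mqtt", "ZHA"]  # typically mutually exclusive

# Every combination that would ideally have a daily-driver tester.
configurations = list(product(install_types, platforms, radio_stacks))
print(len(configurations))  # 4 * 3 * 2 = 24 setups, before counting integrations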
Trying to achieve that with the open source community only is very unlikely to happen.
The only realistic way I see is Nabu Casa hiring such a person (or people). If you decide to do so: please hire people with experience. Most people who don't actually do software testing think it's something everyone can do. It's not. It requires the same amount of practical experience and the same level of background knowledge as any other profession.
If you need help finding someone and setting this up, please don‘t hesitate to ask.
It's not mutually exclusive: you can do both in-house testing and public testing. It is also very common for open source projects to release public betas. At a company where I worked (and where I was part of this setup), we made nightly beta releases and had a public, opt-in beta group that tested them voluntarily (just via normal usage). Those users thought it was quite fancy to have this privilege. We still did tons of in-house QA regardless.
Also, excellent summary, @nohn.
General comments: What happened here was a rare occurrence for HA and it’s not like there weren’t options: users could roll back. The HA team was quick to respond. I’ve been following the GitHub issue. I don’t think it could’ve gone quicker, as there was a risk of making things even worse. Even the biggest of corporations with much larger test teams and rigs have made bigger screw-ups.
It’s certainly not nice to be affected by an issue like this, but just remember that it’s not intentional.
Some sensible comments above. I'll make one final comment: I think the monthly release cycle, where there seems to be an attempt to include new functionality every time, is not conducive to recruiting beta testers. If we had instead, say, monthly security and bug-fix releases (with very limited and carefully reviewed changes, no breaking changes, etc.) and then bigger six-monthly releases with all the new functionality and a longer testing cycle, I think that would encourage more beta testers. With such an approach I would be more likely to consider becoming a beta tester.
Some really valuable comments here!
First things first: thanks to everybody involved for quickly providing a fix for the DB issue, without panicking and making things worse.
I think everybody would agree that the ultimate goal is to frequently improve and add features while never breaking anything.
Personally, being a user, I would always put the latter first.
HA is clearly developing towards mass-market appeal, which I think is the good and right decision (UI over YAML, simplification).
But IMHO it will never really reach the masses (or, even worse, they will turn away from Home Assistant after their first "after the update, nothing works").
For a long time I have not been really happy with the strategy of HA informing me about an update, where users can easily hit update, while it takes some effort to read the changelogs / breaking changes and then decide whether it might/will/probably won't affect the system.
It might be a valid decision to say "custom add-ons are not in scope".
I wouldn't agree, as I guess >90% of Home Assistant installs use them, since they provide functionality not natively included in HA.
But I think we would all benefit from being informed about possible breaking changes first. And probably from modified update/upgrade scopes (fewer and bigger feature additions, more fixes).
And if I could dream, I would just love to have an "update checker" component that goes through all configs, add-ons, and devices and checks them against all changes. Most probably a really big project and not easily doable.
Let's think about it from the end: no more hitting update and then "I can't control my TV any longer, the card stopped working, the forecast is gone, I can't make updates".
As of today, I can't recommend that my mate's girlfriend start with Home Assistant.
All developers of Home Assistant, of course mainly from Nabu Casa, do an awesome job they can be proud of. And I really appreciate it!
Still: the less breaking, the more mass appeal it will gain.
Just having something that filters for changes related to the integrations you run (or its dependencies) would already be tremendously helpful. I don’t mind reading. It’s the filtering that’s the hard, unnecessary part. GitHub issues are already labelled with the integration affected, so the underlying data exists.
EDIT: I know there’s a section in the release notes that’s grouped by integration, but that’s typically for breaking changes (aka backwards incompatible changes). I’m saying this for all changes (the full changelog).
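That per-installation filter doesn't seem hard to prototype. A minimal sketch, where the changelog entries and integration list are made up for illustration (real entries would come from the release notes or the integration labels on GitHub issues):

```python
# Filter full-changelog entries down to the integrations you actually run.
# Naive substring matching: "cast" also matches "pychromecast" (useful here),
# but it could over-match in general; a real tool would use the GitHub labels.
my_integrations = {"recorder", "cast", "radio_browser"}

changelog = [
    "Fix recorder migration order",
    "Bump pychromecast to 14.0.1 (cast)",
    "Add new weather card",
    "radio_browser: handle station redirects",
]

def relevant(entries: list[str], integrations: set[str]) -> list[str]:
    """Keep only changelog lines mentioning one of the given integration names."""
    return [e for e in entries if any(name in e.lower() for name in integrations)]

print(relevant(changelog, my_integrations))
```

With the sample data this drops the unrelated weather-card entry and keeps the three lines touching recorder, cast, and radio_browser, which is exactly the reading-list reduction being asked for.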
I've had related conversations in another channel recently. I don't believe there need to be monthly feature releases, and I think there could be more of a balance between features, bug fixes, and longer-term stability.
That said, I’m not implying at all that the project is suffering. I think it’s being managed pretty well. It’s an enormous project, both in size and complexity. My comments are to improve things on all fronts.
Back on 2024.7.2; it fixed a few of my issues, but I still cannot figure out the Chromecast issue. Is anyone else having issues with radio-browser casting to devices?
The Onkyo integration does not have a unique ID, so I cannot get logs or do much troubleshooting from the UI. My HA is not that complex, and I have never had to dig very deep to find log info. Any recommendations on where to start looking?
These discussions about update philosophy are good to have. And the fact that they’ve been staying (mostly) polite is great.
There’s always going to be a tension between developers and users. Developers want to run on the latest and greatest hardware. They can imagine all kinds of cool features and want to add them all. They love to tinker and tweak, so when things break it’s a fun challenge, not something that ruins their day. They like speaking a secret language of jargon which excludes outsiders. Some even like obfuscating code to make themselves seem smarter.
Users just want things to work, quietly and in the background, without a lot of effort, without having to learn a bunch of jargon, without having to buy new hardware and without having to research how every “bump” change really impacts them.
Each group needs to understand - and listen to - the other if HA is to grow beyond a niche market of tinkerers.