Are there too many new features being introduced, too quickly?

I am going to say, no.

HOWEVER… HA has many classes of users, using many devices, some of which are an order of magnitude apart from each other in performance, using a vast and varied combination of integrations, and their experiences are bound to differ. Should something be done to make this easier on less technical users, or those running HAOS on low-end hardware, where updates are time consuming?

I am going to say, yes.

The rate of development is high, and I do not think that should change, but eventually it will slow down as platforms mature… maybe… there will always be new platforms, new integrations, new devices, and companies changing their APIs with no warning.

I am not sure if the HAOS users would agree or not, but my suggestion would be to stagger major new features so they only arrive every other month, as a “preview” release, then have the next month’s release be the “stable” release. This could potentially reduce the burden for system updates, though testers and more technical users would easily be able to install every update if they wished.

There are obvious drawbacks to that type of release cadence; the main one is that the stable release would have a BIG changelog, and more breaking changes than previous releases. It also would not directly fix issues like the Honeywell one, and it may very well prevent certain rare issues from coming to light until a large number of people install the latest version.

I am actually quite curious what the opinions of both the dev team and the user base would be regarding an every other month stable cadence.

I did not find it to be negative. The forum topics for new versions are full of posts about things going wrong, and in some cases it is due to a dependency change required to add a new feature, or the company updated the firmware on someone’s device the same day, or whatever, and of course HA will take the blame until it is otherwise placed. The smart home ecosystem is full of products aimed at non-technical users that purport to simplify or improve their lives, so many smart home users, even highly technical ones, will simply not grasp the scale of work that needs to be done in a project like HA to support the things it does for a global userbase, unless they start looking at the code and its rate of change.

In the last year I have never had a DB issue, and at 7GB I do not think mine is “out of control”, though it is more than double what it used to be after adding just a few sensors with high-frequency update intervals. I am fully aware of many users that needed to delete the DB, so I agree that something should be done, such as splitting the long-term data into its own file. Corruption issues at the filesystem level will still be a problem, though, especially for people running on low-end storage or SD cards.

It might not be something that is easy to fix without fixes at the dependency level that are out of the HA devs’ hands.
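If you want to see which entities are responsible for the growth before deleting anything, a simple GROUP BY against the states table will show the top offenders. A minimal sketch, assuming the legacy recorder schema where `states` carries `entity_id` as a plain varchar column; the sample data here is fabricated for illustration, and on a real install you would point `sqlite3.connect()` at `home-assistant_v2.db` instead of `:memory:`:

```python
import sqlite3

# Sketch: find which entities are filling up the recorder database.
# Assumes the legacy schema where `states` stores entity_id as a
# plain varchar column on every row. Uses an in-memory sample DB
# so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE states (entity_id TEXT, state TEXT, attributes TEXT)")

# Fabricated sample data: one chatty sensor dominates the table.
sample = (
    [("sensor.power", "123", "{}")] * 500
    + [("sensor.temp", "21.5", "{}")] * 50
    + [("light.kitchen", "on", "{}")] * 5
)
conn.executemany("INSERT INTO states VALUES (?, ?, ?)", sample)

# The actual diagnostic query: row count per entity, biggest first.
top = conn.execute(
    "SELECT entity_id, COUNT(*) AS cnt FROM states "
    "GROUP BY entity_id ORDER BY cnt DESC LIMIT 10"
).fetchall()
for entity_id, cnt in top:
    print(f"{entity_id}: {cnt} rows")
conn.close()
```

Anything that tops that list with tens of thousands of rows is a candidate for a recorder exclusion or a longer update interval.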

I moved to MySQL in 2017 after experiencing extreme performance issues with the database on SQLite, which I found odd at the time because SQLite is very capable and, for single-connection access, quite often more efficient than MySQL. But then I looked at the database structure and immediately found the issue (below).

It’s not so much about having a DB issue, though the possibility of DB issues increases as the size does.

The issue is that a relational DB engine is treated as a flat file.

For every state change, the states table gets a row added with the state, the full varchar name of the entity, a timestamp, and a big varchar lump of JSON containing all entity attributes.

It’s an abuse of varchar and databases.

Entity names should be broken out to a table and linked to the state change record.
Attributes should be their own table, again linked.

It’s the basics of relational DB design.

It’s also how you end up using 8GB to store 80MB worth of data. :slight_smile:
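A toy sketch of the normalization being described, with hypothetical table and column names (not the actual recorder schema): each state row stores two small integer keys instead of repeating the entity name and the attributes JSON varchar on every state change.

```python
import sqlite3

# Hypothetical comparison of the flat design vs. a normalized one.
# Table/column names are made up for illustration.
conn = sqlite3.connect(":memory:")

# Flat design: entity name and attributes JSON duplicated per row.
conn.execute("""CREATE TABLE states_flat (
    entity_id TEXT, state TEXT, attributes TEXT, last_updated TEXT)""")

# Normalized design: strings stored once, referenced by integer key.
conn.execute("CREATE TABLE entities (id INTEGER PRIMARY KEY, entity_id TEXT UNIQUE)")
conn.execute("CREATE TABLE attrs (id INTEGER PRIMARY KEY, shared_attrs TEXT UNIQUE)")
conn.execute("""CREATE TABLE states_norm (
    entity_ref INTEGER REFERENCES entities(id),
    attrs_ref INTEGER REFERENCES attrs(id),
    state TEXT, last_updated TEXT)""")

attrs_json = '{"unit_of_measurement": "W", "friendly_name": "Power"}'
conn.execute("INSERT INTO entities (entity_id) VALUES ('sensor.power')")
conn.execute("INSERT INTO attrs (shared_attrs) VALUES (?)", (attrs_json,))

for i in range(1000):
    # Flat: the same two varchars written again on every state change.
    conn.execute("INSERT INTO states_flat VALUES (?, ?, ?, ?)",
                 ("sensor.power", str(i), attrs_json, "2024-01-01"))
    # Normalized: just two integer foreign keys per state change.
    conn.execute("INSERT INTO states_norm VALUES (1, 1, ?, ?)",
                 (str(i), "2024-01-01"))

flat = conn.execute(
    "SELECT SUM(LENGTH(entity_id) + LENGTH(attributes)) FROM states_flat"
).fetchone()[0]
norm = (conn.execute("SELECT SUM(LENGTH(entity_id)) FROM entities").fetchone()[0]
        + conn.execute("SELECT SUM(LENGTH(shared_attrs)) FROM attrs").fetchone()[0])
print(f"repeated varchar bytes — flat: {flat}, normalized: {norm}")
conn.close()
```

Over 1,000 state changes the flat table stores the repeated varchar data a thousand times over, while the normalized one stores each string exactly once; scale that up to millions of rows and you get the 8GB-versus-80MB kind of gap.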

2 Likes

I am sure the developers would welcome a PR that does things properly, preferably without breaking existing users. :wink:

1 Like

I’d be happy to design the database, but I basically have no Python skills, so I don’t think I’d be much help there. At some point, though, someone is going to have to fix it if Home Assistant is ever going to be “set it and forget it” reliable and performant.

1 Like

This thread keeps pushing my HA hot buttons. I really am a big fan of HA, but…

Yeah, the DB implementation is a set-up for failure. The DB design sucks, and if SQLite is so awful, why is it the default? And since it is the default, why is EVERY SINGLE state change and event logged automatically? And if the user has to go in and exclude some of these, why isn’t there a way to do that from the GUI? And if every user has to learn SQL and learn how to analyze the database to find what to exclude, why isn’t that in the installation instructions?
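For what it’s worth, on the “learn SQL” point: you don’t need SQL to trim what gets recorded. The recorder integration accepts include/exclude filters in `configuration.yaml` — not the GUI, admittedly, but no database surgery either. A minimal sketch (the entity and glob names below are placeholders, not recommendations):

```yaml
# configuration.yaml — limit what the recorder writes to the DB.
recorder:
  purge_keep_days: 7          # keep short-term history for a week
  exclude:
    domains:
      - automation            # skip whole domains...
      - updater
    entities:
      - sensor.noisy_power_meter   # ...or individual chatty entities
    entity_globs:
      - sensor.weather_*           # ...or wildcard patterns
```

Whether this should be discoverable from the UI rather than YAML is a fair question, but the filtering mechanism itself does exist and is documented.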

Honestly, sometimes I think these decisions are a deliberate attempt to keep the uninitiated from using HA. You’d think developers would be proud of this fantastic project, and want mere mortals to enjoy it.

1 Like

Based on some of the comments, it appears people forget that the software is built by volunteers. If no one is interested in fixing or enhancing something, it doesn’t get done. It’s as simple as that.

Like it or not, the same is true for the paid employees of Nabu Casa. What they fix or create next depends on what interests them.

Feel free to submit PRs or Feature Requests because complaining about what developers do or don’t do is not just unproductive, it can be counter-productive.

6 Likes

At least here there are some paid employees who work to keep things moving forward.

openHAB is strictly volunteer developers and is a REAL mess even though features are not introduced as quickly.

I have to say that while there may be improvements that can be made, and no doubt lots of them, this is still a largely volunteer project, and the priorities set by those who contribute will prevail. My original question was whether the introduction of new features needed to be slowed down so they could be properly tested before release, so I think we are getting a little off topic when discussing basic architectural decisions.

They are tested with the betas; however, people using the beta are a minority, so the tests are limited to a small set of hardware/software combinations. The possible combinations of hardware and software are almost endless — if they waited to introduce new features until everything had been tested on every possible combination, there would never be any new features.

This will only change if more people start using the beta, but the tendency is more towards people waiting for patch releases until everything has been fixed. Some people here even suggest that users not update to .0 releases, so you have a vicious cycle.

1 Like

I don’t recall saying “tested on every possible hardware/software combination”, but it’s clear from the number of minor releases each month that the software is not getting tested as much as it should be. I recognise there is a tradeoff between testing and speed of deployment; it’s not a black and white decision. I am not sure how much time there is between a beta release and a full release — maybe this needs to be longer, or more people need to be encouraged to use and report on betas.

So join the beta team volunteers and help out.

As has already been explained, more than once, it is not a matter of time. By the end of the beta pretty much all the testers are happy with their systems.

3 Likes

As tom stated, they only release it when the beta testers are satisfied and don’t have any issues anymore.

So the only option is the one suggested above — get more people onto the beta.

However, good luck with that. The majority of users just want the system to run and don’t want to fiddle around, report issues, etc. If they don’t even update to a .0 release, do you really think they’ll use betas?

I agree that getting more beta testers is a problem. I’d love to help out, but like most people my HA system is “production.” Any down time results in lost data. And like most, I use a variety of hardware and integrations which would be very expensive to duplicate to create an independent “test” environment.

I try to stay up to date, even with .0 releases. But if just one key integration fails, I’m forced to restore to a working version.

Maybe one solution would be to make HA more modular. One integration I depend on received a rewrite in version 8.0, and has experienced at least three, maybe four, critical bugs since then. As of 9.7, it’s still not fixed. No one who depends on that integration can upgrade beyond 7.4, because that integration is part of the core. If I could update just one component at a time, I’d be able to test a lot more, be it a beta or a production release.

I will say that there is apparently a way to replace a core component with a custom version, but it’s not well documented, requires a bit of a learning curve and isn’t likely to be done by the average user.

2 Likes

It might also help with beta testing if the developers gave a heads-up about some of the larger changes and asked for specific testing of those changes. For example, they could post saying “Hey, we’re making some big changes to integration XYZ and could use some more thorough testing”. If I happened to use integration XYZ, I’d be much more inclined to do some beta testing for that particular update rather than running the beta all the time.

Obviously this would only cover some updates, but I think it might help to alleviate some of the problems when a larger change occurs or a specific integration is reworked.

5 Likes

I believe people are not understanding how truly complex HA really is: integrations, add-ons, HACS, etc. It is not possible to test everything. I believe the developers truly try to capture breaking changes, but — and it is a big but — things are missed all the time due to vendor changes, API changes, library changes, etc. The best way to make sure your “custom system” works is to become a beta tester. The team is very responsive to the beta forum on Discord.

Another way is to not use cloud based devices. I understand that, for example, it is very unwelcome for Honeywell users to have a broken integration. But it is cloud based and if you lie down with dogs you get up with fleas.

2 Likes

I did that a couple of times by asking on pretty active threads here for changes to features, or even complete rewrites of integrations with ~400 active users. I get that this isn’t many, but there is next to zero interest in testing — whether due to the complexity of setting up a test environment, or because only a few people even use that specific feature or own the hardware needed to use it.
On the other hand, if one of these changes doesn’t fully work as expected, it can always be fixed in a dot release; that’s what they are there for. There are lots of such small integrations that will never find a beta-testing circle, but this doesn’t affect the majority of HA users in any way.
Extending the beta phase from the current one week to whatever timespan wouldn’t change anything about that.

Implementing the ability to update only specific integrations would require them to be compatible with earlier core versions — which, with the current development pace of core, is infeasible for volunteer developers.

3 Likes

I like the flea analogy, and tend to agree about cloud based services.

But in this case, it wasn’t Honeywell who broke the integration. The integration update introduced all the problems. The old version of the integration still works with Honeywell’s cloud API exactly as it did before.

So, while the advice to avoid cloud-dependent devices is sound, it’s not relevant to this conversation. To be honest, with something as critical as heating systems, in my climate it’s reassuring to have an alternative way to access my thermostats if HA should fail. And frankly, Honeywell’s cloud service has proven far more reliable than HA. That’s sort of what this thread is all about.

I have that. The physical controls work fine on my programmable “dumb” thermostat :wink:

2 Likes

Agreed, I was just reading about some of the newest “connected” systems that basically have their own wired network (thermostat, heat pump, air handler) in addition to cloud access. The wired network can sometimes fail, and the system is either stuck on or off until there is manual intervention to re-establish that network.

1 Like