WTH is HA maintained as a monolith?

In particular, why are integrations bundled with the core and dependent on its release cycle?

This leads to the following:

  1. Major releases may break integrations.
  2. Changes to integrations (including fixes for the bugs mentioned in #1) have to wait to be merged into core. That process is time consuming, given the number of changes being processed at once.
  3. With a smarter design, updating a single integration wouldn’t require restarting the whole of HA.
  4. Bundled integrations cannot be removed: the system must handle all of them even when unused, which annoys users who simply don’t want them interacting with their environments.

In short, maintaining integrations separately would give both users and developers a lot more flexibility.

Why are integrations that aren’t necessary for the core and initial system setup part of the core at all?
Wouldn’t it make HA smaller to download (both the initial download and core updates)? I’m considering an HA setup for my boat, so a slimmed-down core with only the integrations I actually need (and add) would be excellent. It would also make updates smaller over 3G/4G/…

3 Likes

Good one. I submitted a Feature Request last year saying basically the same thing.

Please up-vote here and the FR below if you agree!

1 Like

While I may like the idea at first glance, I think that if the integrations were uncoupled from the core release, you would end up with far more broken integrations, since they would no longer be developed in a coordinated way.

At least now we have a development TEAM with at least some top-down coordination, versus a bunch of individuals each trying to keep track of the changes needed when core changes.

How many custom integrations break after a release, forcing the custom developer into damage-control mode because they didn’t know about certain changes?

Not to mention the documentation control that would be lost if everything were considered “custom”.

I think the ultimate issue is that control over integration updates/repairs is possibly too tight, and none of the core developers want to, or have the time to, approve the updates.

Fixing the underlying process of updates would be time better spent than uncoupling the integrations from core.

12 Likes

I understand your worries, but making the system modular doesn’t have to mean getting rid of QA pipelines.

It means that modules wouldn’t have to be merged into the monolith’s code in order to be released, and could (but wouldn’t have to) be deployed separately.
It’s more of an architectural change that opens up more possibilities.

I also suppose this change would lead to establishing a stricter, better-defined API. In fact, that’s a mandatory move for modularity.

I’m not saying it should all turn into HACS. Rather, into something like the official add-on store (except that these modules wouldn’t have to be deployed as Docker images).

Anyway, I can see that the current approach is starting to become a maintenance bottleneck. In the end, it’s not me who invented modularity and decoupling; obviously they exist for a reason.
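
To make “deployed separately” a bit more concrete, here is a minimal sketch, assuming integrations shipped as ordinary Python packages that the core discovers at runtime through standard entry points. The group name and layout are purely hypothetical; nothing like this exists in HA today:

```python
from importlib.metadata import entry_points


def discover_integrations():
    """Find integrations installed as separate packages (hypothetical mechanism).

    A separately released package would declare something like
        [project.entry-points."homeassistant.integrations"]
        my_boat_sensor = "my_boat_sensor:setup"
    in its pyproject.toml, and the core would import it on demand instead of
    carrying the code inside the monolith.
    """
    found = {}
    for ep in entry_points(group="homeassistant.integrations"):
        found[ep.name] = ep.load()  # the import happens here, per installed package
    return found
```

Updating or removing one such package would then be an ordinary pip operation, independent of a core release.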

5 Likes

Thinking about this from the other angle: what are the benefits of the current architecture?
I think you’re conflating two separate concepts.

A monolith is a large collection of software which is built, tested, run and distributed as a single unit, accomplishing many separate tasks.

A monorepo is a large git repository which contains code for many separate units, which may or may not have some level of integration with each other.

Home Assistant is (kind of) both. I don’t actually understand the details of the build/deployment process with respect to integrations. I’m assuming that if I’m not using the integration for some random IoT device, it doesn’t bother loading the PyPI packages necessary to run it, and that the code for that integration doesn’t get loaded into memory or cause much in the way of performance problems. But this is a minimally supported assumption that may well be wrong; it’s been a long time since I’ve done serious Python development.
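
For what it’s worth, Python itself only pulls a module (and the packages it imports) into memory when something actually imports it, so the assumption is at least plausible. A generic illustration of that mechanism, not Home Assistant’s actual loader, with a package path that is only a guess at the layout:

```python
import importlib
import sys


def load_integration(domain: str):
    """Import an integration module only when it is actually needed."""
    name = f"homeassistant.components.{domain}"  # assumed layout, for illustration
    if name in sys.modules:  # already imported earlier in this process
        return sys.modules[name]
    return importlib.import_module(name)  # unused domains are never imported
```

If that is roughly how it works, unused integrations mostly cost download size and disk space rather than RAM or CPU.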

  • I see it as helpful to have an easy-to-access, enumerated “universe” of “all the integrations that must not break” when a change is made. This is especially valuable when removing deprecated code/features, which is important to keeping development fast and painless: you can validate that the deprecated feature is no longer used, and make updates if necessary.

  • I also find the “curation” provided by the home assistant core developers to be one of the strongest forces promoting high code quality in this open source project. It is really quite remarkable how standardized they have managed to make this codebase, considering the diversity of contributors, which makes it much easier to read.

Could both of those things be accomplished with a different distribution strategy? Sure. Would that significantly help with performance issues? Perhaps, depending on details I don’t know.

But this is far from a slam-dunk.

3 Likes

Yes, it can work. That’s how my cell phone (and everyone else’s on the planet) works. I get prompted to update the OS when a new version is available. I get prompted to update apps when they’re ready, or can choose to have them update automatically. Apps depend on the resources exposed by the OS (APIs, etc.). Just as in HA, the OS developers are not necessarily the app developers, but they all have to use the same framework and follow established procedures.

The goal isn’t to improve performance. The goal is to allow change while limiting disruption. It would allow me to update the OS even if the new version of an integration I rely on has a bug. I’ve been burned by that before. Now I hold off updating to a new version for a few weeks until most of the bugs have been identified and fixed.

I’d much rather keep everything up to date, and only regress those integrations which caused a problem.

1 Like

It sounds like I misunderstood your main point. You aren’t concerned about what gets loaded at runtime, you want the ability to self-manage which versions of which integrations you’re running, independent of the version of the core. Is that basically what you’re saying?

I don’t expect any impact on performance, but integrations (some of them, at least) are loaded with the system so they can scan the network for newly available devices.

Yes. The OP lists four good points in the first post. My interest is in point #1. I’ve had to keep HA (and all the other integrations) at an older version because there were flaws in one integration that was updated along with the current version. That one integration happened to be critical for my implementation of HA.

Other times, an integration may have breaking changes. In HA jargon, breaking changes are not bugs but intentional changes which will require the user to make some change to their configuration for the integration to continue working. Again, this may force us to remain at an older version of HA, and all other integrations, until we have time to test and implement the required changes for that one affected integration.

Double that! I’m still sitting at 2022.6.7 and waiting for an important integration to fix a few things. On the other hand, looking at all those new features… stability or great new features, that’s unfortunately a typical HA trade-off these days, in my experience.

Man… catching up with those updates will cost me a weekend… the only reason to look forward to bad weather or winter: time for HA updates :slight_smile:

1 Like

My take on a lot of this: there would be a number of benefits to having a way to implement out-of-process device integrations. Many or most of those have already been mentioned: decoupling core releases from device updates, long-term maintainability, etc. Others:

  • It may make it easier to implement a new device integration, requiring less involvement from HA core developers.
  • Right now an integration is either considered an approved part of Home Assistant or a suspect custom integration that will possibly cause problems. If a custom integration is out-of-process, there is less chance that a misbehaving device integration could break Home Assistant.

There are also some downsides:

  • It would require a mature definition and implementation of the interface between HA core and devices.
  • Once that interface was established, it might slow down innovation.
  • Potential latency.
  • Increased coordination overhead between different code bases.
  • Vendors with closed APIs/protocols would be able to offer a Home Assistant integration without opening their APIs/protocols, which would go against the stated goals of the Open Home.

Today there is a class of HA users who cobble together something that resembles a device integration using MQTT and a bunch of configuration/templating. It mostly works, though it is difficult to share and support, but those things can be prototyped and developed at their own pace without waiting for code reviews by HA core developers.
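
For context, that cobbled-together approach usually leans on MQTT discovery: publish a retained config message under the discovery prefix and a sensor simply appears, with no core code review involved. A minimal sketch with a made-up boat battery sensor (the topic follows the documented homeassistant/<component>/<object_id>/config convention; everything about the device itself is invented):

```python
import json

# Hypothetical device: a boat battery voltage sensor announced via MQTT discovery.
DISCOVERY_TOPIC = "homeassistant/sensor/boat_battery/config"  # <prefix>/<component>/<object_id>/config
STATE_TOPIC = "boat/battery/voltage"

config = {
    "name": "Boat battery voltage",
    "unique_id": "boat_battery_voltage",  # lets HA manage the entity in the UI
    "state_topic": STATE_TOPIC,           # where the device publishes readings
    "unit_of_measurement": "V",
    "device_class": "voltage",
}

# Publish json.dumps(config), retained, to DISCOVERY_TOPIC with any MQTT client,
# then publish plain readings such as "12.6" to STATE_TOPIC.
print(DISCOVERY_TOPIC, json.dumps(config))
```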

So I hope some way of decoupling some (but not necessarily all) device integrations winds up on the roadmap. These might be considered 2nd-class integrations compared to those in core, but I think that in general it will be worth it, and it may be necessary for continued growth and health.

An open source project that went through something similar to this is Ansible. At some point “batteries included” gets really unwieldy and onerous to maintain.

Interesting idea. There are probably some things which justify core integrations. Like, for anyone with an RPi, the ability to use the GPIO pins is kinda fundamental. But that was pulled out of the core. Go figure!

I’d say anything which uses a third-party cloud or hardware should be a “2nd class” integration, not bundled with the core and on an independent update cycle.

I was thinking more along the lines that first-class integrations are commonly used, have active maintainers, and are considered tested/supported with each major release of HA core. In other words, these are the marquee devices that Home Assistant lists as supported out of the box.

Although at a high level this sounds like a nice idea, it raises a fundamental question about the architecture of Home Assistant. At the moment there are over 1,700 integrations available out of the box, and even a simple change to start maintaining them separately would require a huge effort to implement.

Ideally, with such an idea, you would want to change the architecture from a monolith to microservices (or some hybrid), and again, switching to that from today’s architecture would be extremely costly.

I’m afraid we’ll have to live with it for a while longer; I’m pretty sure it’s far from any of the HA team’s priorities…

1 Like

You can do that step by step, integration by integration.

Let’s wait another 5 or 10 years. That will make the architecture change easier :wink:

Yes, refactoring is painful. But that is the cost of past decisions made in favor of “simpler development at the start”. At some point, without refactoring, every project starts to eat its own tail. I’m not sure that isn’t already the case today, seeing how many resources are tied up in fixing issues.

2 Likes

The key question is: what is the right architecture? If you look at the way HA is being used and deployed (e.g. on RPi devices), I’m not 100% sure we can just arbitrarily say that microservices are the right architecture. They come at a cost as well and are not always simply better than a monolith. A deeper evaluation would be required :slight_smile:

2 Likes

I’m not sure I understand. I just added an integration I hadn’t known about previously. So imagine whichever developer works on that integration makes a change tomorrow. What would they have to do differently if it weren’t linked to the core HA update cycle? It seems like it would be less work, not having to coordinate their change with all the others.

1 Like

It would be a rigid API. Any change would force ALL integrations to update, so this whole “independent, don’t need to update” idea is just a pipe dream. Sure, you might be able to get to that point if you limit the API, but then you lose out on functionality. TBH this whole WTH is a pipe dream and I doubt it will ever happen. You can essentially do the same thing with custom_components right now, or even with built-in ones (by making them custom). This would just be a waste of time and resources for almost no net gain, and mostly drawbacks.

1 Like

Nobody said there must be only one API version available. If needed, a new API version can inherit from the old one while modifying it, and over time old API versions can be deprecated.

BTW, stability of the contract is the main point of creating an API (an interface between subsystems). If the API were expected to change every release, it wouldn’t make sense to think about an API seriously at all.
But we know the API is relatively stable. Otherwise, all custom components (and most of the 1300 core integrations) would fail every release.

BTW, someone did mention microservices; it doesn’t have to be done that way. It could be dynamically (at runtime) loaded modules, and the API doesn’t have to be an HTTP endpoint: it could be a certain set of methods exposed through an interfacing class.
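
Purely as an illustration of that last point, and assuming none of these names exist in HA, the “API” could simply be an in-process base class, with a new version inheriting from the old one so that modules written against v1 keep working:

```python
from abc import ABC, abstractmethod


class IntegrationApiV1(ABC):
    """Original contract between the core and a dynamically loaded module."""

    @abstractmethod
    def setup(self) -> bool: ...

    @abstractmethod
    def entities(self) -> list[str]: ...


class IntegrationApiV2(IntegrationApiV1):
    """A newer contract only adds to V1, so existing modules are untouched."""

    def reload(self) -> bool:
        # Default implementation, so a module moving from V1 to V2 needs no new code.
        return self.setup()


def newest_supported_api(module: object) -> str:
    """Core-side negotiation: use the newest contract a loaded module implements."""
    if isinstance(module, IntegrationApiV2):
        return "v2"
    if isinstance(module, IntegrationApiV1):
        return "v1"
    raise TypeError("module does not implement any known API version")
```

Old versions could then be deprecated over time, exactly as described above, without every module having to move at once.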