I imagine this would be horrible to support. New users often don't report the version of HA or the supervisor when they ask here for help; this will be even worse when you need the exact version of each module to get help. It's also doomed to lead to compatibility/dependency issues, as integration x in version 5 may require library xy.py version 0.2 while integration y in version 6 requires library xy.py version 0.4. Also, I don't want to update/maintain 50 different integrations myself, and to be honest, since version 0.6 or so there has never been a breaking change that took me more than 1-2 hours to fix.
Looks like every Linux OS.
Or (more or less) 90% of all internet servers and IoT devices in the world.
So "module XY depends on KKR version >= XYX" is no issue (solved since 1980? Linux - Wikipedia).
I'm sorry, I don't understand how this solves the issue of having different integrations depending on different versions of the same library?
I already see how this ends:
A made-up example: "I updated my sonos integration to version 0.5.1 and now my chromecast integration stopped working", and the issue is the underlying library magicmodule, for which sonos needs version 0.1 and chromecast needs version 0.2. What advice would you give such a person?
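To make the made-up example concrete, here's a quick check with Python's `packaging` library showing that no single version of magicmodule can satisfy both pins:

```python
from packaging.specifiers import SpecifierSet

# the hypothetical pins from the example above
sonos_needs = SpecifierSet("==0.1")
chromecast_needs = SpecifierSet("==0.2")

# intersect both constraints; no version can satisfy the result
combined = sonos_needs & chromecast_needs
print("0.1" in combined)  # False
print("0.2" in combined)  # False, so one shared install can never fix both
```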
You also always have the option of running it in a venv and installing and changing every library or whatever.
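A rough sketch of that venv route (POSIX paths; `xy==0.4` is a made-up pin standing in for whatever library you want to override):

```python
import subprocess
import sys

# create an isolated environment and install Home Assistant into it
subprocess.run([sys.executable, "-m", "venv", "ha-venv"], check=True)
subprocess.run(["ha-venv/bin/pip", "install", "homeassistant"], check=True)

# manually override a single dependency, at your own risk
subprocess.run(["ha-venv/bin/pip", "install", "xy==0.4"], check=True)
```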
Also, what's the issue with the current approach?
The only benefit of smaller modules over a monolithic approach is download size.
I don't know about you, but my 56k modem ended up in a skip long ago.
Having a single codebase is inherently easier to test and debug.
This is akin to talking at dinner parties about which back-roads you take to avoid a particular traffic junction. No one is interested because no one cares. Follow the sat-nav avoiding point X. Done.
Often a smaller package/module is easier to maintain, but maybe not here.
Many big companies move to this approach when the monolithic approach becomes complicated to maintain, like Google with Android.
I see many advantages, like modules being updated more regularly, as with HACS, which works like a charm.
I wrote that post because of this:
When a module no longer works, we have to wait for the next release for it to be fixed.
Indeed, I didn't see the library issue, and I understand it.
Sometimes we don't upgrade because of a regression, or because there are so many breaking changes that we don't have time to fix them all, so maybe smaller packages could help keep modules more up to date.
A solution could be a "common dependencies file", packaged and delivered by the Home Assistant team, that all modules use.
Modules could also have a specific dependencies file, like on HACS.
With that solution:
The Home Assistant team maintains only the core and specific modules.
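For what it's worth, a minimal sketch of how a conflict between such files could be detected, using Python's `packaging` library (the file roles, names, and versions are made up for illustration):

```python
from packaging.requirements import Requirement
from packaging.version import Version

shared = Requirement("xy==0.2")   # pin from the team-maintained common file
module = Requirement("xy>=0.4")   # pin from one module's own dependencies file

resolved = Version("0.2")         # the version the common file would install
if resolved not in module.specifier:
    print(f"conflict: module wants {module}, common file pins {shared}")
```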
I think in this case you depend on the availability and motivation of developers to maintain their own code. Generally nobody else has the right to merge PRs.
If you have an architecture update with the current design, it's easy for the HA devs to deploy it to all integrations. With a modular approach, how long before every module you are using is compatible?
Both have pros and cons, but I'm convinced the current all-in-one approach is better.
Thanks, but what's the difference between the modular and the current approach in terms of maintainability?
custom components: depend on external developers
internal components: depend on merge requests from external developers
core system: depends on the HA team
Either way, we depend on external developers to maintain components, with a monolithic or a modular approach. When a component no longer works and is no longer maintainable, it currently gets deleted.
"How long?" It depends, but it can be done one by one. If the HA core had an internal system to "git pull" only one folder (the component), it wouldn't be an issue.
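For illustration only: git can already restrict a working tree to one folder via sparse-checkout (git 2.25+), so pulling a single component from the real HA repo layout could look roughly like this; whether a running core would tolerate files being swapped underneath it is a separate question.

```python
import subprocess

# "ha-core" is assumed to be a local clone of the Home Assistant repository
subprocess.run(["git", "-C", "ha-core", "sparse-checkout", "set",
                "homeassistant/components/sonos"], check=True)

# a pull now only updates that one component folder in the working tree
subprocess.run(["git", "-C", "ha-core", "pull"], check=True)
```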
I'm sorry, but that's definitely not true.
The monolithic app as it stands is quite vulnerable. A custom integration going nuts, or an official integration that is buggy, and boom, HA can (and occasionally does) get unstable.
This is especially true for some of the legacy integrations, which are not always of good quality. (Homematic Classic, anybody?)
For custom integrations this may be true, but there's always a warning that custom components may not work or may make your system unstable, so it's your own fault if HA is unstable due to a custom integration. I mean, you also can't blame Windows when your PC crashes due to a third-party app you installed.
I never observed a buggy official component that completely broke Home Assistant. If there is one, you still have the ability to disable it and wait until it is fixed. You also need to be aware that lots of breaking changes come from manufacturers changing their API or other things that are out of the devs' hands.
The question is about reliability. A third-party component shouldn't be able to break the core, but that's very easily doable.
Also, the third-party libraries used in some official plugins simply aren't always updated as quickly as one would expect. (Talking about Home Assistant: the hardware that causes the exceptions, even though no crash, has been on the market for 3 years without any API change. It's just rare.)
Yes. I could sit down and fix it. It was easier to use MQTT instead and bridge it. Unfortunately, not everything is available via MQTT.
The concept of microservices (or, not to stress buzzwords, monolithic apps vs. modular ones) has been around for quite a while, and the advantages aren't really debated in the dev community.
I understand this suggestion is a huge step and would likely be the biggest refactoring of HA. However, it would definitely boost reliability and, if done right, would decrease memory usage. (Let's not discuss download sizes; that's silly, given the present size.)
Eh?
Yes, it is. (I also fully agree with @Burningstone (I usually do, but when we don't ... ...))
They assemble the whole of a vanilla installation into one 'testable' image.
The image then undergoes a standard suite of tests (the tests get modified over time as the features mature).
But this is still largely tentative, as it hasn't been tested in real-world situations where you have a Modbus installation, he has some X10, I have Z-Wave, etc. That's why it's released as a Beta (Developers) release.
Everything internal to it has been tested to co-exist.
It is then available for people to test (I have a test installation for such, but have only used it once like this, to help Phil out; usually I use it to test my differing approaches to configuration, just in case I break stuff).
These Beta tests are concentrated on (but not exclusive to) the main Add-ons available from the Add-on Store. People/developers (of the 'other' add-ons) get an opportunity to test their apps against the new 'core' (not referring to the installation option here). That developer may be on holiday, work may have become stressful that week, he may be renovating his house (any number of other legitimate excuses, AND they are volunteers too, so ANY time they give is gratefully received). Also, even if they are super diligent, they may not be able to test against, say, X10 because they don't have any.
Then the Beta release is made an 'official' release.
Meanwhile 'John' has made a custom add-on, and it happens to rely on 'X' component that is now deprecated, so that won't work out of the box.
So running as vanilla as possible is the best way to have a stable system.
Say HA was supplied as a core and 1 module; that's 2 tests (one with, one without).
2 modules is 4 tests: none added, one added, the other added, and both added.
3 modules is 8 tests.
4 modules is 16 (this is where my counting skills get fuzzy and I'm not that invested in the answer, but each module is either included or not, so it's 2^(No. of Modules): 10 modules requires 1,024 tests and 20 modules 1,048,576 tests; quick check below). How many modules would you think you need?
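A quick check of that count, since each module is either included or excluded:

```python
# number of vanilla-image test configurations for n optional modules
for n in (1, 2, 3, 4, 10, 20):
    print(f"{n} modules -> {2 ** n} tests")
```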
I think HA's resources are stretched pretty thin as it is, so doing this is just counterproductive.
And besides, do Microsoft test each release of (say) Norton Antivirus?
No, Norton do.
And if it doesn't work with Windows...?
It's Norton's fault.
I laughed long and hard at this. It references my comment that was used as a possible justification for the modular approach (which I don't support); it was meant to be sarcastic. Maybe I need to wrap such things in a 'sarcasm' wrapper???
I understand every point of view.
What I need is only more isolation between components.
In the source code, HA is already modular; only the packaging is not (really) modular.
I do not want to move every component to an external GitHub repo and custom_components.
A system to partially update only 1 or 2 components would be enough for me.
When a release tells me "you have 24 breaking changes", I have to wait until I have 4 or 5 hours of free time to upgrade (it's really rare).
If a partial update were possible, I could fix things component by component.
It's been under consideration for quite some time. It requires a good deal of planning and caution to do without constantly breaking things, but it is certainly possible.
In a perfect world, yes, but are you going to tell me that there are no 3rd-party apps for Android/iOS/Windows/macOS/Linux (insert any other OS) that can make your system crash??
Come on, please leave memory usage out of the discussion completely; we're talking about a few hundred MBytes, and that's completely irrelevant.
I don't really understand: do you mean you rarely have 4-5 hours of free time for fixes, or that there are rarely updates with so many breaking changes?
I've been using HA for around 3 years, I think, and I cannot remember an update that took me 4-5 hours to fix breaking changes.
And with your suggestion, how would you do release notes? And would you expect people to update each component individually? I mean, I use maybe 50+ different integrations; I don't want to maintain each of them individually.
Dependency management and updating of multiple integrations have solutions that already exist. The Linux example was relevant, and Python dependencies can be handled very similarly; see pip. Ideally, you would have an option to update all integrations, freeze certain ones to a specific version, or pick and choose which to update and when. This has all already been solved elsewhere.
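As a sketch of the "freeze some, upgrade the rest" workflow pip already supports (constraints.txt is a hypothetical file you would maintain yourself, holding lines like xy==0.2 for the libraries you want frozen):

```python
import subprocess
import sys

# upgrade Home Assistant while pip's -c flag keeps the pinned libraries put
subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade",
                "homeassistant", "-c", "constraints.txt"], check=True)
```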
The real problem, I think, is the amount of effort required to build and maintain this for Home Assistant, compared to the advantages/disadvantages that come along with it. So far, I don't think there's been a strong enough motivator to justify getting it done.
Comparison is difficult, but let's put it the other way round: due to some choices made earlier, the shielding and protection of HA components from each other is actually the equivalent of a teenager hacking POKEs into his C-64 back in 1985, and here's why.
Monolithic applications share a single memory space. In any halfway modern system (beyond microcontrollers), no process may just (willingly or unwillingly) access, let alone modify, the memory space of another process. Yes, there are semaphores, and yes, the concept is not 100% (but mostly) bulletproof, up to the point that it's actually implemented in hardware these days.
Python makes this particularly worse, because Python (by design; this is not Python bashing) doesn't implement concepts to shield modules from each other. So a bug (or a cowboy developer) can very easily trash another module's structures.
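A two-line demonstration of that missing isolation: inside one process, any component can replace a function in a library that every other component shares.

```python
import json

# one "component" monkey-patches a shared library...
json.loads = lambda s: "oops"

# ...and every other component in the process now gets garbage back
print(json.loads('{"a": 1}'))  # prints: oops
```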
As for the perfect world: that has been a standard principle in system design for "quite a while". (Look up the infamous Sendmail vs. qmail debate... or Torvalds vs. Tanenbaum, if you've got popcorn around.)