System Architecture Advice

Now that my HA system is getting more and more complete, I have learned that a Raspberry Pi is excellent as a start, but that I should upgrade my hardware.

Ideally I would like to upgrade in a future-proof manner, and would rather take a few additional hassles on board than make the move too rapidly and have to rebuild the system again in the future.

As there are many possibilities to configure HA, I would appreciate some advice on the best configuration/architecture from some experts, or a reference to a good read on this subject.

This quest started out by wondering about my performance degradation and bumping into this best hardware list for Home Assistant.

Reading through the latter article I realized I had an old Surface 2 Pro lying around on which I could install Ubuntu Linux, providing me the option to have a display to interact with the system directly if I ever need it at all.

I could of course just do a quick read on how to install HA on Linux and off we go, but so many questions have arisen that a little help would be appreciated.

  1. Should I use Docker(s) or should I just do a plain install?
  2. Is it wise to distribute different system components across different hardware? Is there a need performance-wise at a certain point? Or is a more advanced PC capable of doing the job for a more advanced HA system? If not, when do you run into limitations?
  3. Should I upgrade from the default database and if so which option should I choose?
  4. Should I start to dig into which entities to exclude or is it no longer an issue when I upgrade my hardware?
  5. Should I dig into Node-RED or is the standard automations environment good enough? Where are the limitations, other than that Node-RED makes automations graphical?
  6. Is there a good read for those who leave newbie status behind and want to get to the next level?
  7. What else is out there that I did not read and really should, since we don’t know what we don’t know?

Use Docker, 100%. It is so much easier to manage, upgrade, back up, and revert.
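For reference, a minimal Docker Compose sketch for running Home Assistant Container could look like this (the image is the official one; the config path and container name are placeholders you would adapt to your system):

```yaml
# docker-compose.yml - minimal Home Assistant Container setup (sketch)
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    container_name: homeassistant
    volumes:
      - /opt/homeassistant/config:/config   # persistent config, survives upgrades
      - /etc/localtime:/etc/localtime:ro    # share the host's time zone
    network_mode: host                      # needed for discovery (mDNS etc.)
    restart: unless-stopped
```

Upgrading then becomes pulling the new image and recreating the container, and reverting is just pinning a previous image tag.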

If you have a fast enough system, you do not need to distribute. HOWEVER, if there is a component which requires regular hardware maintenance, upgrades, or a lot of reboots, that can impact everything else, so you would need to do an analysis for your specific use case.

I have seen no need to change the DB for performance reasons (my DB is at half a gig). Even though I have MySQL running on the server, I have not migrated; however, I may do it simply for backup reasons.

Some entities record data in very small increments, and display in the browser becomes the performance limit. I have had a new 8-core laptop choke displaying the history graph of certain items, while the server was practically idle.

Node-RED offers tangible advantages when some complex automations become unmaintainable or simply impossible. I am not using it because I eventually did get my complex automations to function in HA, but I was looking at it for a while.

The “best hardware” is NOT a NUC; it is something with an order of magnitude better reliability and scalability: a dedicated server with server-grade hardware and power regulation. I am using a storage server I built 6 years ago that was way overpowered for that use, but I wanted room to upgrade its role over time, and now it has. It is old, but still runs HA like it’s nothing, detailed here; other than upgrades it has been running 24 hours a day for 2200 days.

If you want low cost, place anywhere, does the job for HA… get a NUC. If you want more, get/build a server; in the long run the benefits outweigh the costs, if you do it right and make use of its capabilities.


Here are my thoughts, though I can’t answer all of your questions:

  1. going the Docker route provides you with a supervised installation that is easier to maintain… but perhaps less flexible, I think…
  2. sometimes keeping all system components closer to the bare-metal HW helps. A good example is deCONZ, which runs way more stable if installed directly on Ubuntu with direct access to the ConBee stick. I run HA as a VM on ESX, which adds several layers of virtualization that need to be traversed before these components can talk to each other, and that was not stable. After moving deCONZ to a separate Ubuntu machine it finally started to work (not perfect, but waaaay more stable). So depending on the components you use, it might make sense to distribute across different HW, but it might not be required.
  3. going for a separate DB (I use MariaDB on a Synology NAS) allows you to keep history separated from the HA installation, so if anything crashes and you need to reinstall or restore HA, you do not lose your history. So it depends on how attached you are to it. If you decide to migrate from RPi/SD to ‘proper’ hardware, all the disadvantages of frequent writes to SD (wear risk) or performance (slow SD) go away. An additional advantage of separation: the DB is not included in snapshots, so these are performed quicker and take way less space.
  4. It is always a good idea to keep control over what you write to the history DB. For some time I let my system write everything and kept 30 days of history, which caused my MariaDB to grow to 50+ GB and it stopped being maintainable. Every daily purge was virtually slowing my HA for several hours to the level of barely being usable. Afterwards I started with a fresh DB, limited recording to only those entities that I use in any history graphs, and limited history to 14 days. Now the DB is ~3 GB in size and there are absolutely no performance issues.
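As a sketch of points 3 and 4, a `recorder:` section in `configuration.yaml` along these lines points HA at an external MariaDB and trims what gets recorded (the host, credentials, and entity names below are placeholders, not a working setup):

```yaml
# configuration.yaml - recorder pointed at an external MariaDB (sketch)
recorder:
  # external DB keeps history out of HA's own storage and snapshots
  db_url: mysql://hauser:hapassword@192.168.1.10:3306/homeassistant?charset=utf8mb4
  purge_keep_days: 14            # keep two weeks of history
  include:
    entities:                    # record only what you actually graph
      - sensor.living_room_temperature
      - sensor.power_consumption
```

An `exclude:` block works the same way if it is easier to list what you do not want than what you do.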

My experience after using HA for around 4 years:

1.) This is a good overview of the install methods and respective skill levels etc.
I would not suggest using Home Assistant Core (in a virtual env) unless you really know your Linux, virtual environments, Python etc. Personally I use Home Assistant Container, as it gives me the flexibility to run my “own” Docker containers for which no add-on exists or the add-on is not up to date. In addition I can install my own stuff on the host system, which is not possible/limited with Home Assistant OS.

2.) Depends on what you are doing; NVR or AI stuff may sometimes be better off on separate hardware (like a Google Coral stick for AI).
I put my Z-Wave and ZigBee stick on a separate Pi, because the NUC running HA etc. is inside a rack where the range is limited.
3.) Depends on what you want to do with the data. I use Postgres and keep data there for 7 days for the logbook and history, and for long-term data I use a separate InfluxDB with Grafana to make some nice graphs etc.
4.) This you should do anyway, even if your hardware is powerful enough. Why would you want to keep history of the sun elevation or the time?
5.) The native automations are enough; I haven’t found anything that’s not possible with native automations yet. In my opinion Node-RED is just an unnecessary additional layer, however if you are a graphical guy Node-RED is a good fit.
6.) What’s next level for you? What do you want to know? In general I just read up on individual topics that I’m interested in, then I start to implement it in my system and then read more and more.
7.) I like to take a look at github repos from other experienced people to get new ideas for automations, hardware and software to use.
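To illustrate point 5, even something like “porch light on at sunset, off at 23:00” is only a few lines as a native automation (the entity ID here is made up for the example):

```yaml
# automations.yaml - two simple native automations (sketch, hypothetical entity)
- alias: "Porch light on at sunset"
  trigger:
    - platform: sun
      event: sunset
  action:
    - service: light.turn_on
      target:
        entity_id: light.porch

- alias: "Porch light off at night"
  trigger:
    - platform: time
      at: "23:00:00"
  action:
    - service: light.turn_off
      target:
        entity_id: light.porch
```

Node-RED would express the same thing as a flow of nodes, but the YAML version is just as capable and lives in version-controllable text.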


Thanks for sharing your insights, guys. It helps to get my head around what would be a good next step.