So, I’m just starting the planning process for the next iteration of my Home Assistant server. At the moment I’m using a Raspberry Pi 4 with 8GB of RAM, with the data directory for HA (and other services) on an SSD. (The OS, Ubuntu, is still on an SD card.)
I currently use a container installation, with HA, Node Red and a couple of other services managed by docker compose. I started with the container approach because I was also teaching myself Docker at the time, years ago, when I first installed HA, and I wasn’t sure whether this experiment with HA would continue.
However, I know there are also limitations and trade-offs in using docker containers. I’ve not run into too many walls (HACS is as good as add-ons, for what I need), but I’m sure I’m missing something. HAOS is strongly recommended when trying local voice assistants or Matter/Thread, for example, so there must be reasons.
So, why should I use HAOS in my next server? What reasons might I be missing?
You are missing very little. A container install is a solid choice if you are familiar with Docker.
Pretty much all you are missing compared to HAOS is managed OS and container (add-on) updates. Yet you gain quite a bit of flexibility with a container install.
That is a very confusing statement. HACS supplies integrations and frontend resources, not add-ons. Add-ons are synonymous with additional containerised applications, e.g. Node Red. If you can manage your own containers you are not missing anything by not having access to add-ons.
It’s mostly because I don’t understand add-ons, given that I don’t use them. So add-ons are equivalent to the other bits I’m mixing into my docker compose. Got it!
Add-ons actually are docker containers; the difference is that they are curated and updated from the GUI.
I started with RPi and learning docker; then changed to HAOS. I think that was the right journey, obviously.
I think there are two main reasons to choose HAOS over container.
1. Simpler upgrades for add-ons. Having HA prompt you to upgrade with one click is easier than managing it yourself (although Watchtower can help). I still have MQTT and Node Red running outside of HA (because five years ago I thought HA might be unstable), but that’s just an extra pain and brings no benefit I can see.
2. It’s the most common installation. If you post about a problem with HAOS, there’s more chance of getting help and/or finding somebody who’s had the same problem.
Possibly the biggest negative is that HAOS “takes over” the whole machine, which is why I run it in a VM. Then I can choose to use HA add-ons or a docker container in another VM if I need extra flexibility (which I haven’t needed yet).
I wonder if the real question is “why would you install Container rather than HAOS?”.
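For anyone wanting to go the VM route, spinning HAOS up under KVM/libvirt, for example, is roughly this. The image version, sizes and bridge name below are placeholders; check the current HAOS release assets for the real filename.

```bash
# Grab the HAOS KVM image (version/filename are placeholders; use the current release)
wget https://github.com/home-assistant/operating-system/releases/download/<version>/haos_ova-<version>.qcow2.xz
unxz haos_ova-<version>.qcow2.xz

# Import it as a VM; HAOS expects UEFI boot and works well with virtio disk/network
virt-install \
  --name haos \
  --memory 4096 \
  --vcpus 2 \
  --disk path=haos_ova-<version>.qcow2,format=qcow2,bus=virtio \
  --import \
  --os-variant generic \
  --network bridge=br0,model=virtio \
  --boot uefi \
  --graphics none \
  --noautoconsole
```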
Agree. From the get-go I chose HAOS in a VM and it was the right choice, for the reason you mention. Also, a VM gives you unmatched recovery capabilities.
Literally the only things you are missing are the add-ons and the complete system backup and restore options.
The first you don’t need if you know how to use Docker.
The second is also no problem if you just save everything in your config directory to a safe remote place, away from the HA machine.
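For what it’s worth, that can be as simple as a little cron-able script along these lines (the paths and destination host are just examples):

```bash
#!/usr/bin/env bash
# Archive the HA config directory and copy it off the HA machine
# (CONFIG_DIR and DEST are examples; point them at your own setup)
set -euo pipefail

CONFIG_DIR=/opt/homeassistant/config
DEST=backup@nas.local:/backups/homeassistant
STAMP=$(date +%Y-%m-%d)

tar -czf "/tmp/ha-config-${STAMP}.tar.gz" -C "$(dirname "$CONFIG_DIR")" "$(basename "$CONFIG_DIR")"
scp "/tmp/ha-config-${STAMP}.tar.gz" "$DEST/"
rm "/tmp/ha-config-${STAMP}.tar.gz"
```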
HAOS is the most recommended install method because the developers don’t necessarily trust users to be able to manage their own environments, and the HAOS install takes all of that out of the user’s hands. And since most users aren’t very tech-savvy, they are happy to let the OS run things in the background.
I’ve been running an HA Container install since I moved away from a venv install of Core 5 years ago, and I don’t feel I’ve missed a thing. I even ran a Supervised install as a test alongside my Container install for about a year, and I saw no advantage to it, so I got rid of it.
I now run three installs of HA Container simultaneously, two of which are test platforms for development. I don’t think that would be even remotely possible with a HAOS install without spinning up a new VM for each of them.
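If anyone is curious, running a second (or third) instance next to the main one is just a matter of separate config directories, roughly like this with the official image (names, paths, timezone and port are examples):

```bash
# Main instance: host networking (needed for discovery), its own config dir
docker run -d --name ha-main --restart unless-stopped \
  -e TZ=America/Chicago \
  -v /srv/ha-main/config:/config \
  --network host \
  ghcr.io/home-assistant/home-assistant:stable

# Test instance: separate config dir, published on a different port
# (bridge networking is fine here since it's only for development)
docker run -d --name ha-test --restart unless-stopped \
  -e TZ=America/Chicago \
  -v /srv/ha-test/config:/config \
  -p 8124:8123 \
  ghcr.io/home-assistant/home-assistant:stable
```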
I too migrated from a venv install about 5 years ago. I got my hands on an x86 i7 server that I now run many containers on. I have never missed the add-ons, as everything I have needed I have found other containers for. I was running Watchtower and it would automatically update my containers; today I run What’s Up Docker to do the same. I never have to worry about updating my system, as this happens automatically. I get a notification when it updates and I peruse the changelog so I have a heads-up on any breaking changes. I am very happy with this approach and will likely stay this way for a while. I have found the x86 platform to be much faster than my original RPi install.
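For anyone wanting to try the auto-update approach, Watchtower is basically a one-liner; the schedule and flags below are just examples.

```bash
# Watchtower watches running containers and pulls/recreates them when new images appear
docker run -d --name watchtower --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --cleanup \
  --schedule "0 0 4 * * *"   # 6-field cron: every day at 04:00
```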
For me, the main reason is that the container needs to be run as root with privileges (at least, it was still like that the last time I checked). Otherwise, I’ve seen people struggle with things like shares and setting up paths to peripherals, but part of that depends on your skill level. I still run Core and I’ve been pretty happy with my choice: a Python and OS upgrade no more than once a year. It’s definitely not for everyone though.
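To make the peripherals point concrete: for a container install that part usually comes down to a --device mapping, something like the sketch below. The by-id path is made up; check /dev/serial/by-id on your own host.

```bash
# Pass a USB Z-Wave/Zigbee stick through to the container
# (the by-id path below is a placeholder; use the one from your own host)
docker run -d --name homeassistant --restart unless-stopped \
  -e TZ=Europe/Amsterdam \
  -v /opt/homeassistant/config:/config \
  --device /dev/serial/by-id/usb-EXAMPLE_STICK-if00-port0:/dev/ttyUSB0 \
  --network host \
  ghcr.io/home-assistant/home-assistant:stable
```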
I develop some integrations and add-ons and would therefore be considered advanced or expert, and that usually sounds like I’d want to run some kind of heavily hands-on setup. But the reality is that when it comes to the HA system my family and I use for daily driving the house, I absolutely do not want to mess around with maintaining anything about it, so I use HAOS in a KVM VM. I just want it to work, like an end user would want. That includes installing/updating my own integrations and add-ons through HACS or the add-on store.
Even just last week I started to get into water meter reading with an SDR and found two integrations that sort of do what I want, but each is missing something I like that the other has. So I’m probably going to sit down and make a combo version that has the things I like best about each, then publish that add-on and manage it through the HA UI. Even though that sounds like a lot of work that could be avoided by just running something on a general Debian VM (which I already have for other things), I really prefer how reliable, self-contained, and easy to manage HAOS is.
I was running HA on RPi with docker.
I changed to HAOS because that is the most supported version, but ran it on Hyper-V on Windows 11 on a NUC. I recently changed to running HAOS directly on the NUC, because Hyper-V doesn’t expose USB ports, but also because, as an IT professional, my experience is that you always get the best outcome from using the defaults that the supplier has tested the most.
With HA you have the choice, but you can also change your mind. On my journey I moved platform twice but kept all the config using backup/restore. No-one else in the house knows that I reimplemented.
I run 2 instances of HA Container on DietPi (a lightweight Debian-based OS).
I use docker compose so updating HA is a breeze.
DietPi also makes updating Debian easy.
I don’t miss the add-on library (I have the ones I want in docker containers).
I use HA’s built-in backup service and have a Duplicati container to copy that off-site.
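In case it’s useful to anyone, the Duplicati side is just another container pointed at the folder HA writes its backups to, roughly like this (the paths are examples, and the linuxserver.io image is just one option):

```bash
# Duplicati with read-only access to HA's backup folder;
# configure the off-site job in its web UI (port 8200).
# Adjust the source path to wherever your HA instance writes its backups.
docker run -d --name duplicati --restart unless-stopped \
  -e PUID=1000 -e PGID=1000 -e TZ=Europe/London \
  -v /srv/duplicati/config:/config \
  -v /srv/homeassistant/config/backups:/source:ro \
  -p 8200:8200 \
  lscr.io/linuxserver/duplicati
```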
Anecdotally, I feel I see far more HAOS “I upgraded and now nothing works” type posts, which makes me wary of HAOS. This may be confirmation bias, though.
I can’t think of any reason why you shouldn’t use a container install. I never used HAOS but was using a Supervised install. I think that a container install is far superior to any other in terms of scalability and configuration. You can fine-tune everything you want.
For me the turning point was when my server died due to system overload.
That was it for me. Back then I was using an i3. I switched to an i7, packed it with as much RAM as I could stuff into the box, and bought a decent GPU and a Google Coral. Now the system is running with an average CPU usage of 10-11% and I’m running 26 containers.
This is awesome info! Thank you, everyone! I have a much better understanding of the differences now.
To be honest, containers (or, more specifically, the syntax of the docker compose YAML) can get hairy sometimes. It would be kinda nice for something else to handle that; although I touch it fairly rarely, it’s still a nice plus.
At the moment, I have Home Assistant, Node Red, ring-mqtt, zwavejs2mqtt and openwakeword (experimenting with voice assistant) in there, and I’ve asked Father Christmas/Santa for a couple of Matter/Thread devices, so I’ll probably be delving in there yet again. I keep them updated with a script that I run (manually) whenever HA updates, removing the containers and pulling the images again (roughly the sketch at the end of this post). That’s frequent enough that I’m rarely far behind latest on the others.
But I’ve also moved many things off this particular Pi to other servers, such as MQTT and my own backup system. This Pi has become (over the last three years) a dedicated HA+add-ons box anyway, so I could leave management of the OS to HA as well. It’s worth considering.
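The script is nothing clever; it’s roughly this shape (the compose directory is a placeholder):

```bash
#!/usr/bin/env bash
# Tear down the stack, pull fresh images, bring everything back up, tidy old images
set -euo pipefail
cd /opt/homeassistant    # wherever the compose file lives

docker compose down
docker compose pull
docker compose up -d
docker image prune -f
```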
I’m interested in your script, as I don’t remove any containers until after the new one is up, so the HA container downtime is ~10 secs and HA restarts in ~1 min.
My update sequence is:
docker compose pull
docker compose up -d
docker image prune -a
docker container prune