The main benefit I can think of is that you keep your HA VM clean, with only the services needed for HA. That way, on a reboot of your VM or a restart of your HA core, all other services (for example MariaDB, InfluxDB, NodeRED etc.) will not be impacted and keep running in their own LXC containers. It’s also easier to back up and restore separate add-ons/modules. For example, if you have been playing around with some NodeRED flows and messed them up, you can just restore your NodeRED LXC container and the rest of HA will not be impacted.
I have all my HA stuff in LXCs, with the exception of deCONZ, which is in a KVM for the USB passthrough (it would work in an LXC, but I found it a load of hassle).
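For anyone who wants to try the LXC route for the USB stick anyway, a rough sketch of what the passthrough looks like on the Proxmox side. The container ID (101) and the device path (`/dev/ttyACM0`, typical for a ConBee-style stick) are assumptions; check yours with `ls -l /dev/serial/by-id/`:

```shell
# Lines added to /etc/pve/lxc/101.conf on the Proxmox host
# (101 and /dev/ttyACM0 are assumptions for this example)

# Allow the container to access ttyACM character devices (major 166)
lxc.cgroup2.devices.allow: c 166:* rwm
# Bind-mount the stick into the container
lxc.mount.entry: /dev/ttyACM0 dev/ttyACM0 none bind,optional,create=file
```

You may also need to sort out permissions inside the container for the user running deCONZ, which is part of the hassle mentioned above.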
HA in a Python venv; Node-RED, MariaDB, Homebridge, NGINX, Pi-hole and ZoneMinder all in LXCs. I have deCONZ in a KVM and a few things in Docker on an Ubuntu KVM.
The benefit I find is that although there are more moving parts, one system breaking doesn’t take down everything, nor does an upgrade spell disaster. LXC backups are fast, and restoring is fast too.
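To give an idea of how quick those backups are, here is a minimal sketch using Proxmox’s own tooling, run on the host. The container ID (101), storage name (`local`) and dump path are assumptions for the example:

```shell
# Snapshot-mode backup of container 101 to the 'local' storage
# (VMID and storage name are assumptions)
vzdump 101 --mode snapshot --compress zstd --storage local

# Restore it later, to the same or a new container ID
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-*.tar.zst
```

Snapshot mode means the container keeps running while it is backed up, which is why it feels so painless compared to a full VM backup.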
I find the NUC runs a lot better and uses a lot fewer resources than putting it all in Docker or a KVM. My ‘prod’ LXC HA ticks over on about 500-700MB, whereas my Dev KVM HA uses almost 2GB RAM constantly, for example.
Everyone has different use cases though, and different levels of knowledge. I implemented and manage two (now six-node) enterprise Proxmox clusters for work, based over 1000 miles apart, containing many LXCs, KVMs and Docker containers, so I am quite comfortable doing this.
Each to their own though, and what I love about having options is that we can all choose what is best for our scenario.
I have installed HA in an LXC using the venv method. It is basically the same as installing it on bare metal with Python.
The add-ons in HassOS/Supervised (or whatever it is called, I get so confused lol) are not available, but you can add HACS. I wouldn’t recommend running the Supervisor in an LXC (though in theory it is possible).
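For reference, a simplified sketch of what that venv install looks like inside a Debian/Ubuntu LXC. The paths are assumptions for this example, and the official docs do a bit more (a dedicated `homeassistant` user and a longer build-dependency list), so treat this as the shape of it rather than the exact recipe:

```shell
# Inside the LXC: minimal HA Core install in a Python venv
# (/srv/homeassistant is an assumed path; the official method
# also creates a dedicated user and installs more build deps)
apt update
apt install -y python3 python3-venv python3-pip python3-dev build-essential
python3 -m venv /srv/homeassistant
/srv/homeassistant/bin/pip install --upgrade pip wheel
/srv/homeassistant/bin/pip install homeassistant
/srv/homeassistant/bin/hass    # first start generates the config
```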
I do think, though, that we get too hung up here on ‘supported’ and ‘unsupported’. The bottom line is that all support is voluntary anyway and HA isn’t a commercial product. It’s not like you invalidate a warranty.
The beauty I find with an LXC is that you normally save about 1-2GB of memory, as you don’t have the KVM layer and a full OS to contend with. They function as Ubuntu or whatever flavour of Linux you choose. As I said, it’s not for everyone and there are a few ‘gotchas’, but in the main I prefer running them as I have more control over the install and behaviour. Reboot times are literally seconds to get to the OS, and you have full control over the machine rather than it being ‘supervised’. Some prefer that, others may not. As I always say, there is no right or wrong way; personal experience and use case determine which way we go.
I have not looked into motionEye, but I am sure it can run in an LXC if it runs on normal Linux.
Do you run those packages (NodeRED, MariaDB etc.) in Docker inside your LXC containers? Or do you create an LXC container, a Debian 10 for example, and install the packages directly in the LXC?
I’ve started separating packages after seeing your post. (curious)
Home Assistant in a VM (having trouble with Network Manager in an LXC).
Node-RED and Mosquitto in LXC containers.
Restarts are insanely fast. Seems very stable.
If you are putting HA in an LXC, I would install Core in a venv; it works really well.
Other installs are tricky because of the nested virtualisation, though for ‘fun’ I did install Docker in an LXC and then HA in that, and it worked.
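If you want to try the Docker-in-LXC experiment, the main prerequisite on the Proxmox side is enabling the nesting feature for the container. A minimal sketch, with the container ID (101) as an assumption:

```shell
# On the Proxmox host: enable the features Docker typically needs
# inside an LXC (container ID 101 is an assumption)
pct set 101 --features nesting=1,keyctl=1
pct reboot 101
```

The same can be done from the GUI under the container’s Options > Features.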
I find the beauty of having MQTT, Node-RED etc. in separate LXCs is that rebooting HA won’t kill all the other services. Also, if HA breaks for any reason, you don’t have to worry about the config for the other services while you are fixing or rebuilding.
Rebooting HA doesn’t kill my other services, they are in separate docker containers, if I restart the HA container, the others will not be affected at all.
I think that’s actually pretty small if you take into consideration that it is a complete VM backup including databases and base OS.
And all of my almost 20 docker containers (I’m doing something wrong, I need more containers hahahahaha)
I agree. I think if you use Proxmox, it makes sense to have separate containers/LXCs for everything rather than add-ons (MQTT for example), as they are independent for backups/restarts/upgrades and because… you can.
I’ve said it a few times, no right way or wrong way really, just what suits you.
I’ll mention the ‘unsupported’ word again, though, which I see people get hung up on here on the forum (not aimed at this reply to @Burningstone BTW).
To me HA is personal - my HA is different from everyone on here with use/integrations etc. It is not like I have bought an iPhone and I flash some firmware on it so Apple refuse to help. This is an open source product, that has no cost to you and there is no warranty given with it. Install it however you want. I have learned loads this way.
I’m considering switching to an LXC setup too; this whole Docker thing is starting to feel like an additional unnecessary burden and overhead. I might get back to you with some questions if I decide to go this way.
Joking aside, that is why I have steered away from Docker for this. I find that on the Proxmox host, if you have a Docker KVM, I allocate say 8GB RAM to it to run everything, and it uses it regardless of the number of containers. With multiple LXCs I allocate 2GB each and they use about 200-400MB each. My MQTT LXC barely gets above 100MB.
My NUC has 16GB RAM so I can take the hit, but a previous Proxmox host I had (and still use, actually) was an HP MicroServer Gen 10 and it only has 8GB RAM. I would struggle on that.
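To make the allocation point concrete, this is roughly how one of those small LXCs gets created with a 2GB RAM cap on the host. The VMID (110), template filename and storage names are assumptions; list your downloaded templates with `pveam list local`:

```shell
# On the Proxmox host: create a small Debian LXC with a 2GB memory cap
# (VMID, template name and storage names are assumptions)
pct create 110 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz \
  --hostname mqtt --memory 2048 --swap 512 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 110
```

The 2GB is a ceiling, not a reservation, which is why each container can idle at a couple of hundred MB.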
That said, I do use Docker quite a lot at work, both on Physical hosts and a couple of KVMs, though the Proxmox host we use for the KVM has 256GB RAM so it can take it.
You can treat each LXC as a standalone machine: you can set it to a static IP, or leave it on DHCP and it will pick one up from your DHCP server.
You can even assign a VLAN tag to it and, if your network is set up for it, it will get the VLAN’s address from DHCP etc. Really flexible, and that is what I like about it. There is no messing around with Docker-style networking on an LXC.
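Both of those (static IP and VLAN tag) are one-liners on the host. A sketch, where the container ID, addresses, bridge name and VLAN tag are all assumptions for the example:

```shell
# On the Proxmox host: give container 101 a static IP on VLAN 20
# (VMID, addresses, bridge name and tag are assumptions)
pct set 101 --net0 name=eth0,bridge=vmbr0,ip=192.168.20.50/24,gw=192.168.20.1,tag=20
```

Drop the `tag=` part for an untagged port, or use `ip=dhcp` to go back to DHCP.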
mb
EDIT: you can SSH into the LXC like you would a KVM. The nice part, I find, is that if you mess up the network, you can jump onto the Proxmox host and just do an ‘lxc-attach <LXC_ID>’ and you are on the console (you can also open a console in the GUI, but you can copy and paste to this one).
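For completeness, both ways of getting a console from the host, with the container ID (101) as an assumption:

```shell
# On the Proxmox host, even when the container's network is broken:
lxc-attach -n 101    # plain LXC tooling: attach to container 101's console
pct enter 101        # the Proxmox-native equivalent
```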
Yep exactly that - my NUC has 16GB RAM and a quick total of my running machines shows that I have allocated 24GB of RAM towards my running machines, but only 7GB is being used.
As the LXCs all have completely separate IP addresses on your LAN, you can have a web page on LXC1 and another on LXC2, both on port 80; to reach one you just go to http://<LXC1_IP_Address> or http://<LXC2_IP_Address>, for example. Just think of the LXC as a separate Ubuntu machine.
Basically, an LXC is a containerised OS and a Docker container is a containerised application, is the way I see them.
In the LXC you can install software and you need to run apt update etc.; in Docker you have an additional virtualisation layer that bundles all the dependencies and components for the app and runs just the app, if you get what I mean.
mb
EDIT: I am not sure if I have mentioned it, but to install HA in an LXC, use the venv install method here: