Moving to Proxmox

I loved my Yellow, but since I’m now more experienced with HA, I decided to scale up.

I will install these VMs:

  • HA
  • MQTT
  • Zigbee2MQTT
  • Z-Wave JS

My idea is to keep the Zigbee and Z-Wave networks outside of the HA VM so that they can work independently and in parallel.

I would also install other VMs such as AdGuard, nginx, pfSense, cloudflared and Frigate.

What is the expert opinion on that?

Go for it!

1 Like

You are on the right track. Just size the system HW appropriately, with enough RAM, cores and NVMe SSD, so you do not run out of resources. Enable backups of all VMs and prepare a plan for when the system HW fails.
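For the backup part, a one-off vzdump from the Proxmox shell looks roughly like this (the VM ID and storage name are just examples, adjust to your setup):

```sh
# Snapshot-mode backup of VM 100 to the "local" storage, zstd-compressed.
# 100 and "local" are placeholders for your own VM ID and storage.
vzdump 100 --storage local --mode snapshot --compress zstd
```

For recurring backups of all guests, the scheduled job under Datacenter → Backup in the GUI is the easier route.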

1 Like

I moved an HA instance 2 months ago from Supervised on an old Intel Atom to Proxmox / HAOS on an Intel N100. I just made a backup on the Atom and restored it on HAOS. Easy. No problems whatsoever.
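If it helps anyone: the backup can be made from the UI, or (if I recall the CLI correctly) something like:

```sh
# Make a full backup on the old machine, copy the resulting .tar to
# the new HAOS instance, then restore from onboarding or via the CLI.
ha backups new --name "pre-proxmox-migration"
ha backups restore <slug>    # <slug> is printed by the command above
```

The name and slug are placeholders, of course.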

1 Like

Go for it. I’d suggest you follow this great guide by kanga, which uses tteck’s amazingly easy scripts
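If memory serves, the guide boils down to running tteck’s HAOS VM script from the Proxmox host shell, something like the line below; check the guide for the current URL, as the scripts have moved around:

```sh
# Interactive HAOS VM installer from tteck's Proxmox helper scripts.
# Verify the URL against the guide before piping anything into bash.
bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/vm/haos-vm.sh)"
```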

2 Likes

In my opinion you are creating a lot of overhead and unnecessary maintenance work if you plan on having a separate virtual machine for every single thing, when you could consolidate most of the stuff into one or a few machines.

3 Likes

fleskefjes is correct. Each time you add a VM you also spend resources on just running the OS in that VM.
You have to judge each application and see how often it requires an OS restart or other interruptions that can affect the other services on that VM.

I understand the idea of spreading out the services, though, and I have moved some of my services from my HA VM to another VM; that way, when HA restarts, those services are instantly ready by the time HA tries to contact them.
You may need to delay the start of the HA VM in Proxmox, so that when the Proxmox server is restarted the extra VMs can be ready before HA; otherwise you will get all kinds of warnings and issues in HA about failed connections.
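Proxmox exposes this as a per-VM start order and delay; a sketch with made-up VM IDs:

```sh
# Bring up the MQTT/Zigbee2MQTT VM first on host boot...
qm set 101 --startup order=1
# ...and start the HA VM second, after a 60 second delay.
# IDs 101/102 and the delay are illustrative; tune to your guests.
qm set 102 --startup order=2,up=60
```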

1 Like

Learn docker :wink:

2 Likes

That is on my to-do list; however, I still don’t understand why people want to use Docker for almost everything. Understandable for HA and its Python dependencies, but for simple utilities?
apt-get install mosquitto is so much easier than finding and configuring a Mosquitto Docker image.
./upgrade.sh in my Zigbee2MQTT directory and Zigbee2MQTT is updated, instead of docker pull etc…

1 Like

The advantage of Docker vs “plain” apt-get is that all the dependencies are built into the Docker image, i.e. you could run Ubuntu 16.04 and still be able to run the most recent version of all software without being blocked by dependencies (or having to rely on unofficial repositories).

All-in-all, docker is actually easier to manage than apt, really, imho :slight_smile:
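For comparison, a minimal compose file for Mosquitto is about this much (the paths follow the usual conventions, and you still drop a mosquitto.conf into the config directory):

```yaml
# docker-compose.yml - minimal Mosquitto broker, illustrative paths
services:
  mosquitto:
    image: eclipse-mosquitto:2
    restart: unless-stopped
    ports:
      - "1883:1883"                             # plain MQTT
    volumes:
      - ./mosquitto/config:/mosquitto/config    # holds mosquitto.conf
      - ./mosquitto/data:/mosquitto/data
```

And the update is docker compose pull && docker compose up -d, instead of apt-get upgrade.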

3 Likes

Docker provides most of the advantage of keeping applications separate, in their own separate environments, but with much less overhead than a full Virtual Machine.

My understanding is that HAOS uses Docker internally, running add-ons in separate containers … but HAOS hides all the Docker complexity, so one less thing for me to learn :wink:
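You can see this for yourself if you’re curious; assuming console access to the HAOS host (e.g. the developer SSH console), something like:

```sh
# List the container names HAOS manages under the hood.
# On my install these look like homeassistant, hassio_supervisor and
# addon_* entries - indicative only; your list depends on your add-ons.
docker ps --format '{{.Names}}'
```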

I personally upgraded from a RasPi running HAOS to a used Dell OptiPlex mini-PC running standard HAOS in a Proxmox VM. It was pretty easy to do: install Proxmox per the guide, install the x86 version of HAOS, and restore from my HA backup.

I chose to run my HA add-ons (MQTT, Rhasspy, ESPHome, Node-RED, Samba) within HAOS because (a) my system is pretty small, and (b) HAOS does a good job of managing everything. However, it is nice to know that as my system grows I can choose to set up any of these or other services in their own VM. For example, I anticipate running two Voice Assist streams, for Italian as well as English, and this may require a separate VM for the second language.

What I’m saying is that I agree with fleskefjes and Wally: unless you have a pretty big home automation system, or have special needs, I would only set up VMs for those services which are worth the effort, and let HAOS manage Docker containers transparently for the lesser services.

1 Like

I would not consider Docker a full replacement for a VM.
I moved to Proxmox some months ago; I have a mixed solution of VMs, LXCs and Docker (inside a VM or LXC). Not the same thing.
It’s clearly, as someone said, “a lot of overhead”, but at the end of the day that’s the fun of being a nerd :slight_smile:
However, I fully agree (fully) that having Proxmox does not mean you have to make a VM (or an LXC) for everything. A lot of things can be kept together. I basically followed the path of network segregation to decide how many VMs/LXCs to create.

(I have a VM for OpenWrt, managing routing and VLANs; one for HA, which sits in the IoT VLAN; and one for OpenMediaVault, which sits in the LAN.
I have an LXC for download tools, in a segregated VLAN, and an LXC for media tools (like Jellyfin) that sits in the IoT VLAN.)

1 Like

Docker (especially docker-compose) allows you to set up “stacks” of grouped applications that you can tear down/set up with a single command. You don’t have to rely on remembering whether you installed something through apt or yarn or npm. One command like docker-compose pull && docker-compose up -d --remove-orphans pulls the latest images and restarts the upgraded containers automatically. Plus, you can use tools like Portainer to monitor stats easily. You can also have multiple docker-compose files that extend each other to form one fully fleshed-out “smart home” stack.

My current setup has z2m, zwjs and govee2mqtt in a single stack; mariadb, influxdb, dozzle and some DB tools in another stack; and HA, an emqmqtt cluster and frigate in another stack. My last stack, my “management” stack, has portainer, watchtower and crashplan pro images. Each stack can be torn down, moved and relaunched with a single command. They all extend each other so that when I run my docker-compose up -d command, it starts everything needed in order and with dependencies (HA starts after mariadb and influxdb have started, etc.).
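For anyone who hasn’t seen the ordering bit: that’s just depends_on in the compose file. A trimmed sketch of the pattern (service and image names are illustrative, not my exact stack):

```yaml
# Fragment of a compose stack: HA is started after its databases.
# Note depends_on only orders startup; it does not wait for readiness.
services:
  mariadb:
    image: mariadb:11
    restart: unless-stopped
  influxdb:
    image: influxdb:2
    restart: unless-stopped
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    restart: unless-stopped
    network_mode: host        # HA generally wants host networking
    depends_on:
      - mariadb
      - influxdb
```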

Should my server die, all I need to do is grab the latest backup of my /srv directory (where all the configurations live), restore it and run my docker-compose up -d command and I’m back up and running.

1 Like

I would also disregard Proxmox and go with containers.

Proxmox has advantages which make it the right solution for some, but for most people Docker containers are a more efficient setup.

1 Like

Definitely not, but the only “proper” use case for me is if a different OS than Linux + GNU is needed (or a specific kernel).
Running multiple Debian-based VMs, for instance, is hard to argue for vs. multiple Docker containers.

1 Like

Docker is application containerisation; LXC is system containerisation. We are not talking about KVM, which is what most people associate with VM tech. An LXC is as resource-efficient as Docker.

From a sysadmin perspective managing/troubleshooting LXCs is vastly more efficient than managing/troubleshooting docker containers.

With Docker you have all eggs in one basket (the Docker basket). Scaling LXCs is also much easier.

I understand the Docker argument if you have a resource constrained system which cannot run Proxmox. But if the HW capability is there for Proxmox, Docker does not make too much sense, unless you have to use Docker or you worship Docker.

Going the Proxmox route is going the Debian route, so tooling is unified and should be familiar to more people compared to Docker.

Plus, networking under LXC is a breeze. Docker under Proxmox just adds an unnecessary layer of complexity and overhead.
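As a concrete example, dropping an LXC onto a tagged VLAN is one line on the host (container ID, bridge and tag are illustrative):

```sh
# Attach container 105 to bridge vmbr0, VLAN 20, with DHCP.
pct set 105 --net0 name=eth0,bridge=vmbr0,tag=20,ip=dhcp
```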

2 Likes

“Docker” is kind of a generic name for containerization nowadays. Actually, usage of Docker proper is probably shrinking vs. other containerization solutions like Podman, or whatever Kubernetes is using today. I’m not familiar with LXC, but what I read makes me think it is a similar containerization solution.

The argument is really VMs (as in multiple kernels, multiple OSes, separate storage) vs. containers (as in same kernel, multiple OSes, shared storage).

1 Like

All HA add-ons can run in an LXC. The question is: are LXCs better than Docker?

In most cases LXC is superior to Docker, as it allows granular deployment of whatever Linux system you like without the overhead of a separate kernel per guest. For example you can run InfluxDB in one LXC and Grafana in another, or run both in a single LXC, without too much fuss. It takes minutes to do, with not much to think about. With Docker you need to put in effort to get this going.
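To give an idea of what “minutes” means, a sketch assuming a Debian template is already downloaded (IDs, resources and the template file name are illustrative):

```sh
# Create and start a small unprivileged Debian container for InfluxDB.
pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname influxdb --unprivileged 1 \
  --memory 1024 --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 110
# Then install InfluxDB inside it, and repeat with a second
# container for Grafana if you want them separated.
```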

From a sysadmin perspective the only drawback of LXCs is the maintenance overhead: updating each LXC every now and then in order to update the underlying OS (Alpine/Debian/Ubuntu/etc.). This can be automated, but that is usually beyond the ability or patience of most people.

It is not valid to compare KVM with Docker, as you do not need to run KVM except if you want to run a Debian/Ubuntu/etc. desktop, Windows, HAOS, pfSense/OPNsense and the like. At least the flexibility is there, which does not exist under Docker.

And LXC networking is a systems task that is well understood and easy to approach for everyone. For most people Docker networking is like black magic.

To summarise: Docker is a limited platform, applicable primarily to development and resource-constrained environments, but not much else today. Kubernetes is vastly superior to Docker but not applicable to the HA crowd.

For anyone in the HA community paying attention to this conversation: it is vastly easier to manage LXCs than Docker containers.

As I said, no clue about LXC, but for docker (-compose) it’s pretty much “docker compose pull && docker compose up -d”

1 Like

Agreed, if not for the fact that getting HA to run in an LXC is a nightmare (I ended up with a proper VM when I tried to get Bluetooth working in HA in an LXC).
For the rest, I second every single word you wrote :slight_smile: (even if I have some Docker containers here and there; for example, I have Nextcloud as a dockerised container inside OpenMediaVault, which is a VM)