To Proxmox or not to Proxmox

It’s multiple LXCs, which use far fewer resources than a VM

A few reasons I like to decouple (pros):
No supervisor
An HA update gone wrong won’t bring everything crashing down
Uses fewer resources
Critical automations continue to function without HA running (MQTT, Node-RED and Z2M)
Each LXC container has its own IP address
Individual backups of each application
Easier troubleshooting

That’s just naming a few off the top of my head :wink:
And for me, it has no cons

1 Like

Thanks for the comparison. Some good info for me to consider. thx

^^

This, this, this.

IMO worth the extra effort. Been running all my services in separate LXCs for about two years now - it really leverages Proxmox.

I am currently experimenting with running some of the supporting services in Kubernetes before looking into Kubernetes for Home Assistant. I have two physical Proxmox servers at home, so I could spread the pods across the two for real fault tolerance. My HA doesn’t have any USB devices, so I think this would work, but I need to look into the deCONZ setup more, as that passes USB through to an LXC at the moment.

@tteck do you have much experience with K8s?

1 Like

No, but I do run Home Assistant in a pod with Podman. I’m going to add (just playing around) all application containers to that pod, then export the pod.yaml file for Kubernetes. Podman and Kubernetes use the same pod format.
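For anyone curious, the export step can be sketched roughly like this (the pod name, image, and port here are placeholders, not my real setup; `podman generate kube` is the relevant subcommand):

```shell
# Rough sketch: build a pod with Podman, then export it for Kubernetes.
# Pod name, image and published port are illustrative placeholders.
POD=ha-pod
if command -v podman >/dev/null 2>&1; then
    podman pod create --name "$POD" -p 8123:8123
    podman run -d --pod "$POD" --name homeassistant \
        ghcr.io/home-assistant/home-assistant:stable
    # Export the whole pod as a Kubernetes-compatible manifest
    podman generate kube "$POD" > pod.yaml
else
    echo "podman not installed; skipping"
fi
```

The generated pod.yaml can then be applied with `kubectl apply -f pod.yaml`, or replayed on another Podman host with `podman play kube pod.yaml`.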

1 Like

I have all this running on an i3 NUC with 4 GB of RAM. Before the migration to containers, all the system memory was in use…

Now it has been cut in half and I don’t have the urge to buy more RAM.
I thought an example with an image would be worth a thousand words.

1 Like

@tteck @markbajaj K8s instead of Proxmox HA?
I understand you mean two or more Proxmox servers, not necessarily in a cluster, each running a complete VM with K8s?

Yes, normally K8s runs in the cloud on AWS, Azure, etc., but you can run it on ‘bare metal’ on premises. At home I have 2 separate Proxmox servers that are not clustered, but by having a K8s master and 3 nodes spread across the two servers (2 on one and 1 on the other), and requiring 3 pods up for each service, the pods get split across the hardware (or all moved to one node in the event of a hardware failure), if you see what I mean.
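As a rough illustration of that “require 3 pods and spread them” idea (the service name and image are placeholders, not my actual manifests), a Deployment with pod anti-affinity does it:

```yaml
# Hypothetical sketch: 3 replicas of an MQTT broker, preferably on different hosts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mosquitto
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      # Prefer scheduling replicas on different nodes so one host failure
      # never takes out all pods at once; if a node dies, the scheduler
      # still places all pods on the surviving node.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: mosquitto
              topologyKey: kubernetes.io/hostname
      containers:
      - name: mosquitto
        image: eclipse-mosquitto:2
        ports:
        - containerPort: 1883
```

Using `preferred` rather than `required` anti-affinity is what lets all pods fall back onto one server when the other is down.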

For myself, the reliability and flexibility of Proxmox is enough. Each of my LXC containers is backed up daily to a USB SSD (7-day prune), backed up weekly to Google Drive (1-backup prune), and my app data is synced hourly with Google Drive. All done automatically with rclone.
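Roughly, the schedule looks like this as crontab entries (paths, the `gdrive` remote name, and the retention flags are illustrative, not my exact setup):

```
# Illustrative /etc/cron.d entries; paths and remote names are placeholders.
# Daily 02:00: vzdump every container to the USB SSD, keep 7 days
0 2 * * * root vzdump --all --compress zstd --dumpdir /mnt/usb-ssd/dump --prune-backups keep-daily=7
# Weekly Sunday 04:00: sync the dumps to Google Drive with rclone
0 4 * * 0 root rclone sync /mnt/usb-ssd/dump gdrive:proxmox-backups
# Hourly: sync application data to Google Drive
0 * * * * root rclone sync /opt/appdata gdrive:appdata
```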

One disadvantage of using several LXC containers is that you have to update all containers separately. If you don’t automate this it’s a whole bunch of work.

I don’t know if I’d say a “whole bunch” of work, but some, yes.
I recommend installing Webmin on LXC containers for the GUI and automatic daily security updates.

Very interesting discussion, which led me to rethink my current classic design (HA OS with everything managed by the supervisor). I don’t like the fact that when I restart HA, basic “services” like MQTT, Node-RED and InfluxDB stop too… so I’m almost convinced to isolate them. I have Proxmox 7 installed and I migrated my bare-metal Intel NUC HA OS to a VM in Proxmox. Everything is working fine except for GPU passthrough; I couldn’t make it work, so I had to stop Frigate for now.

Aside from that, I’m trying to decide on the best way to start separating the main services (Influx, Node-RED, MQTT…): is there a way to easily migrate the data? Also, I heavily use the Node-RED companion to create/update sensors in HA: would that still work if I separate Node-RED into its own LXC container? Will I lose these kinds of “deep integrations” with HA by separating these services?

Also, protection of data is important. HA snapshots are really easy and they work well… separation will require attention to the backup strategy…

There are pros and cons…but redesigning these things is the fun part of this hobby…

Thank you @tteck for all those scripts… great work. I noticed Nginx, but I use Traefik as a reverse proxy, currently running in Docker, and I’d love to have that as an LXC container too. Would it be hard to adapt the nginx script to Traefik?

1 Like

This might be a bit off topic but…

Can anyone explain why I can log into my Ubuntu VMs using a name and password using the Proxmox console but if I try to SSH using the same user/password from Windows Powershell I get ‘permission denied’?

Am I missing something fundamental?

Thanks.

Enter the VM using the Proxmox console and ensure the sshd service is present and running.

systemctl status sshd

I discovered today that the service completely disappeared from 4 of my containers without explanation.

@tteck Just one question on your zigbee2mqtt container:
I’ve raised that container some days ago and today built a sonoff zbbridge for it.

Does it need anything apart from its own yaml configuration file set up for it to work with the HA LXC?

Nothing has appeared in HA, but it’s true that I still don’t have any Zigbee devices. Should they appear only then, or is an add-on needed?

On the other hand, I don’t know if I’m comfortable with the device first “talking” to the z2m container and then z2m “talking” to HA through MQTT.
Wouldn’t it be better to have the bridge talk to HA directly? ZHA?

Of course, you also need an MQTT broker and the MQTT integration set up in HA.

When you add a Zigbee device, HA will discover it via MQTT auto-discovery.
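For reference, a minimal zigbee2mqtt configuration.yaml for that kind of setup might look like this (a sketch: the IPs and port are examples, and the TCP/`ezsp` settings assume a Sonoff ZBBridge flashed with Tasmota):

```yaml
# Illustrative settings only; replace IPs with your own.
homeassistant: true            # publish MQTT discovery messages for HA
permit_join: false             # switch on temporarily when pairing devices
mqtt:
  base_topic: zigbee2mqtt
  server: mqtt://192.168.1.10:1883   # your MQTT broker LXC
serial:
  port: tcp://192.168.1.20:8888      # ZBBridge reached over the network
  adapter: ezsp                      # EFR32 radio in the ZBBridge
```

With `homeassistant: true`, each paired device shows up in HA automatically through the MQTT integration.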

Thanks for the response but SSH is running.

image

I still can’t login via Windows Terminal…

PS C:\Users\Me> ssh [email protected]
[email protected]'s password:
Permission denied, please try again.
[email protected]'s password:

I’d be grateful for any other suggestions from anyone?

Yes, then edit the /etc/ssh/sshd_config file and activate/uncomment user login with password.
Just take a look at it to see what could be inactive. (I’m not at my computer ATM)
Then systemctl restart sshd
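For reference, these are the directives that typically matter (a sketch; your distro’s defaults may differ):

```
# /etc/ssh/sshd_config — directives that commonly block password logins
PasswordAuthentication yes   # must not be "no" for password auth
PermitRootLogin yes          # only needed if logging in as root; keys are safer
UsePAM yes
```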

1 Like

Already set up and running. I just thought something would appear even with no Zigbee devices present. I’ll wait for them, then.

1 Like

Look at what you guys have done… :laughing: (@tteck @markbajaj)

4 days of P2V consolidation, migrating all data from scattered DBs and scattered Docker containers into one central “server” (1 MariaDB, 1 Postgres, 1 Influx, etc.). It took a while to migrate all the data and test everything, but it feels amazing having everything in one bucket. Easier to manage and protect.

And all this on a reconditioned €250 mini-PC from 2016, with a modest CPU, 16 GB of RAM and a 1 TB M.2 NVMe drive.

image

I’m still on HA OS in a VM: I’d love to switch to LXC for HA and get rid of the supervisor, but there are some add-ons that I can’t move out yet. We’ll see in the future…

Thank you so much for inspiring me; I’ve learned a zillion things in the last week, while having a lot of fun… and liters of coffee… :slight_smile:

Ciao,

Alessandro

2 Likes

You have surpassed me in everything except Kernel Version :joy: