Kubernetes vs. Supervisor

The easiest one out there now is https://k3sup.dev
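Bootstrapping a small cluster with it is roughly this (the IPs and SSH user are placeholders, and key-based SSH access to the nodes is assumed):

```bash
# Minimal sketch: install a k3s server on one machine, then join a second node
# (IP addresses and the SSH user are placeholders; key-based SSH access is assumed)
k3sup install --ip 192.168.1.10 --user pi
k3sup join --ip 192.168.1.11 --server-ip 192.168.1.10 --user pi
```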

3 Likes

I think there are good points on both sides, yet whether or not it is a good idea, it is worth at least trying, even as a separate project unrelated to the main Hass project.
Is there someone trying to make that happen? If so, please let me know, as I would like to contribute.

1 Like

It is a pleasure to read this thread :grinning:

The technical arguments for k8s ring true for me. As a developer, I think a proprietary solution is harder to evolve than a “standardized” one; I prefer to focus my energy on added value rather than reinventing the wheel, even if it is a little more complicated at the beginning.
But I would add one more argument: usability.

I deployed a cluster of Raspberry Pis with k3s, Rancher and Longhorn (try that combination, it is great :heart_eyes:). I installed a lot of stuff on it for development, but for my own use too, like a media center, a mail server, file sharing, …
I tried HA on it, and indeed, the add-ons are missing.

I didn’t want to deploy another cluster specifically for HA (it’s time-consuming and it costs money), so I use a simple Raspberry Pi with a classic installation (without Docker).

So the question should be: who needs a dedicated cluster installation for a home automation application?
If it can’t be integrated into your existing cluster, use a standalone device.

However, I really like HA. Guys, you did a great job, and I would really love to integrate it into my k3s cluster.

1 Like

At least in spirit, I agree with this. I’ve never run any of the “supervised” flavors of Home Assistant for this very reason. Sure, I “miss out” on add-ons, but it’s worth it. k3s is SO easy these days.

And yeah, any path the developers take could result in a dead-end road in the future. Docker is dying, despite that being the current chosen path. But, at the very least, k3s does almost everything Home Assistant needs a “supervisor” to do, and it does so in an open, standards-compliant way.

My only uncertainty is whether, now that Supervisor is developed and presumably does what it needs to do, all the effort to change over to a k8s-based architecture is worth it. There are a lot of pieces to the puzzle that have to be just right to land on a stable, easy-to-use platform. Anyone who is capable of making the conversion to k8s likely doesn’t really NEED Supervised Home Assistant or Home Assistant add-ons anyway, as they are skilled enough to spin it up the “old fashioned” way (as both you and I have done). So the real benefits are to the maintainer of the Supervisor, as well as to the community of potential add-on developers, since they’d have a standard platform to work with instead of Supervisor.

It would be nice if HA supported a containerd (k3s) install without Docker.

1 Like

I’ve recently moved my Home Assistant from Docker to Kubernetes. I’m now running Home Assistant (and other apps) on a two-node, bare-metal k3s High Availability cluster with no major problems. k3s High Availability requires an external datastore; mine is a MySQL database running on a VM. I use HAProxy on a pfSense firewall to load balance between the two k3s nodes.
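For anyone curious, each k3s server is pointed at that external datastore at install time, roughly like this (the host, credentials and database name are placeholders):

```bash
# Sketch: join each server node to the external MySQL datastore
# (host, credentials and database name below are placeholders)
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://k3s:changeme@tcp(192.168.1.50:3306)/k3s"
```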
My HA config directory is bind-mounted from an NFS share, and I have a separate Longhorn persistent volume for the HA recorder/history SQLite database.
The Rancher and Longhorn UIs make deploying and managing application workloads extremely easy. And high availability works great: if one of the nodes is unreachable for 5 minutes, the applications on the failed node spin up automatically on the surviving node.
Here’s a screenshot of the Rancher UI showing the workloads as well as the HA workload configuration:

3 Likes

Hey @taylormia, this is exactly the setup I am currently planning for! Can you share some details on the hardware you are using?

Also, are you using Zigbee or Z-Wave? If so, how are you getting the bridge connected to your cluster? I am currently using zigbee2mqtt (no Z-Wave), so I should be fine if I were to run this on a Pi instead of running it inside the cluster, but I would prefer to include zigbee2mqtt in the cluster. Maybe this would be an option? https://community.arm.com/developer/research/b/articles/posts/a-smarter-device-manager-for-kubernetes-on-the-edge
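If I understand that approach correctly, the pod would then just request the coordinator stick as an extended resource, something like this (the device name, node label and image are assumptions on my part):

```yaml
# Sketch of a zigbee2mqtt Deployment requesting the USB stick via
# smarter-device-manager (device name ttyACM0 and the node label are assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zigbee2mqtt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zigbee2mqtt
  template:
    metadata:
      labels:
        app: zigbee2mqtt
    spec:
      nodeSelector:
        zigbee: "true"                     # pin the pod to the node with the stick
      containers:
        - name: zigbee2mqtt
          image: koenkk/zigbee2mqtt
          resources:
            limits:
              smarter-devices/ttyACM0: 1   # resource exposed by smarter-device-manager
```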

@davosian My cluster nodes are eight-year-old Xeon-based servers running Ubuntu 20.04. I have two Z-Wave networks connected to my HA. One, with about 60 devices, is on a HomeSeer ZNET (which was the home automation solution I used before migrating to HA). I still use the ZNET and use a custom HS3 component to bridge the Z-Wave events into HA. I also have a 7-device ZWave2MQTT network on an Aeotec Z-Wave stick that is connected to a Pi clone running Docker. I have no plans to use Z-Wave directly on my k3s cluster. I have heard that you could attach Z-Wave hardware to a k3s HA workload if you exposed it to the host network, but I have not tried that.
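Untested, but I assume the relevant part of the pod spec would look roughly like this (the device path, node label and privileged mode are placeholders/assumptions, not something I have verified):

```yaml
# Untested sketch: expose the host network and pass a Z-Wave stick through
# to the Home Assistant container (device path and node label are placeholders)
spec:
  hostNetwork: true
  nodeSelector:
    zwave: "true"              # keep the pod on the node the stick is plugged into
  containers:
    - name: home-assistant
      image: homeassistant/home-assistant:stable
      securityContext:
        privileged: true       # blunt approach to allow access to the device node
      volumeMounts:
        - name: zwave-stick
          mountPath: /dev/ttyACM0
  volumes:
    - name: zwave-stick
      hostPath:
        path: /dev/ttyACM0
```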

Interesting setup, thanks for sharing @taylormia! Mine will be similar, but instead of zwave2mqtt it will be based on zigbee2mqtt. I will most likely start with a Raspberry Pi for running zigbee2mqtt, but I am not too crazy about this idea since it will pretty much be the backbone of my setup and at the same time my single point of failure. I guess I have to think it through some more…

I’ve also been considering switching to Kubernetes, so this is very encouraging.

Was there some reason you didn’t use the load balancing built into k3s, or have I got that wrong?

Did you consider an external database for HA rather than Longhorn/SQLite? MariaDB seems to be supported, but I’m sure MySQL would work as well. Or was there another reason that Longhorn was more suitable?

The built-in Klipper LB exposes the host IP and ports for the pods and services on each host. You still need an external load balancer to balance traffic to the exposed IP on each node. In a cloud environment the provider will provide (and charge for) an LB. In an on-premises environment, an option is an external bare-metal LB like an F5. In a home lab, a software LB like HAProxy or Nginx will suffice. Another great option is to deploy MetalLB on the cluster, which exposes an IP address within a configured range. That solution uses gratuitous ARP to advertise the IP address externally, but in a failover scenario it can be much slower than a solution like HAProxy.
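For reference, in older MetalLB releases a layer 2 address pool is just a small ConfigMap like the one below (the address range is a placeholder; newer MetalLB versions configure this through CRDs instead):

```yaml
# Sketch of a MetalLB layer 2 pool (pre-0.13 ConfigMap style; range is a placeholder)
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
```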
I prefer using Longhorn because it is easy to set up, it is redundant since it replicates the volume, and failover works very well. My local storage on each node using Longhorn is a fast 2TB SSD. I don’t need anything more than SQLite as a DB. I’m using a Longhorn persistent volume because the HA recorder/history database can get corrupted if located on the bind-mounted NFS share, where the rest of my config files are located.
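In the HA config that just means pointing the recorder at a path mounted from the Longhorn volume, roughly like this (the /recorder mount path is only an example, not my exact layout):

```yaml
# configuration.yaml - keep the recorder DB off the NFS-backed /config share
# (the /recorder mount path is an example; it should be backed by the Longhorn PV)
recorder:
  db_url: sqlite:////recorder/home-assistant_v2.db
```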

Kubernetes adoption would be an easy sell if it weren’t such a resource hog. Even the edge solutions (k3s, microk8s) take up half a GB of RAM and 10-20% of CPU before you run a single service. That’s just plain ugly if one is on a 1GB or 2GB SoC (which describes a ton of the Home Assistant install base).

I’m inclined to agree; it’s pushing it on a low-end Pi.

That said, perhaps HA has outgrown the smaller Pi being the baseline? The HA Blue has four cores and 4GB of RAM much like a Pi 4. I think (without any numbers to back it up) that a lot of people who get serious with HA end up having to move to a gruntier machine once they start stacking on features. Now that HA is maturing a bit, this migration seems inevitable given all the new juicy addons.

Perhaps the lower-end devices should be recommended for a basic introductory setup, but once you commit, you need to move to the Serious Level: more RAM, more cores, enough to run Kubernetes.

@ianjs Well said. I agree 100%. There should be a choice of platforms for different types of HA users, from the low end to the high end. Also, Kubernetes is becoming the de facto container orchestration standard; it would be good for HA development to get ahead of the curve. It shouldn’t be a choice of Supervisor vs Kubernetes, but rather adding Kubernetes as a supported platform.

2 Likes

I might have another poke at Rancher. I kept hearing that was the smoothest way to go, but I got stuck somewhere and went back to hacking on Proxmox to get some of my containers running. Your setup certainly looks like where I wanted to be.

I definitely need some “orchestration” now that HA has become an essential service - downtime is not acceptable and impinges on the WAF for future projects :smile:

I use microk8s for my cluster with 3 arm64 master nodes (ODROID N2) and 2 x86_64 workers (Intel NUC clones). With version 1.19 they added high availability as the default (once 3 nodes are available it activates automatically), so I do not have to do any special setup for it. They support MetalLB as a load balancer and Multus to access the host network without conflicting with the host ports.
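For reference, enabling those pieces is just a couple of commands (the MetalLB address range is a placeholder; the high-availability mode itself needs no command once three nodes have joined):

```bash
# Sketch: enable the add-ons mentioned above (MetalLB range is a placeholder)
microk8s enable metallb:192.168.1.200-192.168.1.220
microk8s enable multus
microk8s status   # should report high-availability: yes once three nodes have joined
```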

It is running quite stably for all my needs, but it does not support ARM 32-bit OSes. My test cluster is on ODROID-HC1s, so I am contributing armhf support to microk8s if someone is interested: https://github.com/ubuntu/microk8s/issues/719 (pull request being worked out with Canonical)

Is anyone willing to share their Kubernetes manifests / configuration? I am mainly interested in the base setup, with things like load balancing, ingress and certificate handling.

Btw, if someone wants to experiment with Kubernetes, I can highly recommend Civo with its #kube100 project: it offers k3s for free ($80 monthly credit, enough for a 3-node medium-size cluster) while in public beta: civo.com (referral link). It is their vision to offer a developer-friendly Kubernetes service and ecosystem.

1 Like

Hi @davosian, I’ve been running home-assistant (along with node-red, an MQTT server, zwave2mqtt, etc.) in a Kubernetes cluster for about 1.5 years with great success. Most of those components are deployed as Helm charts from the k8s-at-home charts repo, using the GitOps approach via flux2.

If you’re interested, my home-assistant kubernetes configuration is located: https://github.com/billimek/k8s-gitops/tree/master/default/home-assistant.
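The general shape of one of those deployments is a flux HelmRelease pointing at the k8s-at-home chart, roughly like this (the namespace, repository name and values shown are illustrative rather than copied straight from my repo):

```yaml
# Rough shape of a flux2 HelmRelease for the home-assistant chart
# (namespace, repository name and values are illustrative)
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: home-assistant
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: home-assistant
      sourceRef:
        kind: HelmRepository
        name: k8s-at-home
        namespace: flux-system
  values:
    persistence:
      config:
        enabled: true
        size: 5Gi
```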

There’s a fairly active Discord community I belong to dealing with all things Kubernetes at home, but I’m not sure about the rules on promotion in this forum, so I’ll avoid linking it unless that’s OK to do.

5 Likes

I’d recommend a couple of good YouTube videos and the accompanying GitHub documentation on installing k3s in High Availability mode and installing Rancher. They were pretty easy to follow, and the whole thing was easier than I thought it would be.

Damn. There’s an overwhelming number of ways to slice Kubernetes.

Every time I pin one down, someone points to something like Flux and I think “Yeah, that’s awesome… let’s do that”. Thanks for the pointer - I’ll check it out.