Kubernetes vs. Supervisor

Kubernetes adoption would be an easy sell if it weren’t such a resource hog. Even the edge solutions (k3s, microk8s) take up half a GB of RAM and 10-20% of CPU before you run a single service. That’s just plain ugly if you’re on a 1GB or 2GB SoC (which covers a huge chunk of the Home Assistant install base).

I’m inclined to agree; it’s pushing it on a low-end Pi.

That said, perhaps HA has outgrown having the smaller Pi as its baseline? The HA Blue has four cores and 4GB of RAM, much like a Pi 4. I suspect (without any numbers to back it up) that a lot of people who get serious with HA end up moving to a gruntier machine once they start stacking on features. Now that HA is maturing a bit, this migration seems inevitable given all the juicy new add-ons.

Perhaps the lower-end devices should be recommended for a basic introductory setup, but once you commit, you need to move to the Serious Level: more RAM, more cores, enough to run Kubernetes.

@ianjs Well said. I agree 100%. There should be a choice of platforms for different types of HA users, from the low end to the high end. Also, Kubernetes is becoming the de facto container orchestration standard; it would be good for HA development to get ahead of the curve. It shouldn’t be a choice of Supervisor vs. Kubernetes, but rather Kubernetes added as a supported platform.


I might have another poke at Rancher. I kept hearing that was the smoothest way to go, but I got stuck somewhere and went back to hacking on Proxmox to get some of my containers running. Your setup certainly looks like where I wanted to be.

I definitely need some “orchestration” now that HA has become an essential service - downtime is not acceptable and impinges on the WAF for future projects :smile:

I use microk8s for my cluster with 3 arm64 master nodes (ODROID-N2) and 2 x86_64 workers (Intel NUC clones). With version 1.19 they added high availability as the default (once 3 nodes are available it activates automatically), so I do not have to do any special setup for it. They support MetalLB as a load balancer and Multus to access the host network without conflicting with the host ports.
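For reference, the MetalLB side boils down to a tiny layer-2 address pool. A rough sketch (the address range is just a placeholder; on microk8s you normally pass the range to `microk8s enable metallb` and it generates the equivalent config):

```yaml
# Hypothetical layer-2 pool for MetalLB (pre-CRD config format),
# roughly what `microk8s enable metallb:192.168.1.240-192.168.1.250` sets up.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.1.240-192.168.1.250   # pick a free range on your LAN
```

Any Service of type LoadBalancer then gets an IP from that range.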

It has been running quite stably for all my needs, but it does not support 32-bit ARM OSes. My test cluster is on ODROID-HC1, so I am contributing armhf support to microk8s if anyone is interested: https://github.com/ubuntu/microk8s/issues/719 (pull request being worked out with Canonical)

Is anyone willing to share their Kubernetes manifests / configuration? I am mainly interested in the base setup: things like load balancing, ingress, and certificate handling.
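To make that concrete, the ingress/certificate part I have in mind is roughly this kind of thing - a sketch assuming ingress-nginx and a cert-manager ClusterIssuer called `letsencrypt-prod` (the hostname and all names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: home-assistant
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # placeholder issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - hass.example.com            # placeholder hostname
      secretName: hass-example-tls    # cert-manager writes the certificate here
  rules:
    - host: hass.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: home-assistant
                port:
                  number: 8123
```

What I am really after is how people wire up the pieces behind a manifest like that.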

Btw, if someone wants to experiment with Kubernetes, I can highly recommend Civo with its #kube100 project: it offers k3s for free ($80 monthly credit, enough for a 3-node medium-size cluster) while in public beta: civo.com (referral link). Their vision is to offer developer-friendly Kubernetes services and an ecosystem around them.


Hi @davosian, I’ve been running home-assistant (along with node-red, mqtt server, zwave2mqtt, etc) in a kubernetes cluster for about 1.5 years with great success. Most of those components are deployed as helm charts from the k8s-at-home charts repo, using the gitops approach via flux2.

If you’re interested, my home-assistant Kubernetes configuration lives here: https://github.com/billimek/k8s-gitops/tree/master/default/home-assistant.
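For a flavor of how those pieces fit together, a trimmed-down HelmRelease looks something like this (the namespace, chart version, and values below are illustrative, not copied from my repo - see the link above for the real thing):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: k8s-at-home
  namespace: flux-system
spec:
  url: https://k8s-at-home.com/charts/
  interval: 10m
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: home-assistant
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: home-assistant
      version: "~9.0"              # illustrative; pin whatever the chart repo currently ships
      sourceRef:
        kind: HelmRepository
        name: k8s-at-home
        namespace: flux-system
  values:
    env:
      TZ: "America/New_York"       # illustrative
    persistence:
      config:
        enabled: true              # keep /config on a PVC
```

Flux watches the git repo, sees changes to files like this, and reconciles the cluster to match - that’s the whole gitops loop.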

There’s a fairly active discord community I belong to dealing with all things kubernetes at home, but I’m not sure on the rules of promotion in this forum so I’ll avoid linking it unless that’s ok to do so.


I’d recommend a couple of good YouTube videos and the accompanying GitHub documentation on installing k3s in high-availability mode and installing Rancher. They were easy to follow, and the whole process was easier than I thought it would be.
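The short version of the k3s HA piece is that each server node just needs a small config file - a rough sketch with a placeholder hostname and token (this uses the embedded-etcd route, which may differ from exactly what the videos do):

```yaml
# /etc/rancher/k3s/config.yaml on the first server node (placeholders throughout)
cluster-init: true          # start a new cluster with embedded etcd
tls-san:
  - k3s.home.lan            # extra SAN so a stable, load-balanced API endpoint works

# On each additional server node the file is just:
#   server: https://k3s.home.lan:6443
#   token: "<node-token from the first server>"
```

Rancher then installs on top of that cluster as a Helm chart.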

Damn. There’s an overwhelming number of ways to slice Kubernetes.

Every time I pin one down, someone points to something like Flux and I think “Yeah, that’s awesome… let’s do that”. Thanks for the pointer - I’ll check it out.

@billimek, these are great resources around k8s-at-home. Thanks for sharing! Also the discord community is linked inside your profile, so I was able to join.

I am thinking of setting up a 3 node cluster (Intel NUCs) with k3s running on Proxmox. I am currently waiting for the hardware to arrive (next week) to get started.

In the meantime, I will brush up my Kubernetes know-how with the YouTube course from @taylormia. Thanks for sharing :slight_smile:

I think this is exactly the biggest challenge: there is no single best way. There are many different paths one can take, with many options to choose from, and it is easy to get lost in the jungle.

And just as I was looking at which GitOps model to pick for my cluster and how to keep my charts updated, your post arrived - very timely!

I especially like the idea of those projects that serve as a kind of template for deploying your own K8s cluster with HA on top. They really help reduce the initial learning curve of adopting K8s for HA.

For my first cluster I had to spend weeks writing and tuning Ansible scripts around kubeadm; with microk8s it took me just one day to automate. This is the way to make this available to more people and also make it easier to maintain (so we can create more charts!)
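To give a sense of how little is needed now, the core of that automation boils down to something like this (a hypothetical, simplified excerpt; the real role adds checks and joins the nodes into a cluster):

```yaml
# Hypothetical Ansible play that bootstraps a single microk8s node
- hosts: microk8s_nodes
  become: true
  tasks:
    - name: Install microk8s from the snap store
      community.general.snap:
        name: microk8s
        classic: true
        channel: "1.19/stable"

    - name: Enable the basic addons
      ansible.builtin.command: >
        microk8s enable dns storage metallb:192.168.1.240-192.168.1.250
      changed_when: false   # simplification; the real role checks `microk8s status` first
```

Clustering is then just running `microk8s add-node` on one node and the printed `microk8s join …` command on the others.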

@taylormia I notice that you use Longhorn to share the database, but an NFS bind mount for the config directory.

Could you have used Longhorn for both? Or was that just the way your setup evolved?

@ianjs Yes, you could use Longhorn for both. The reason I don’t is that with a Longhorn volume I haven’t found a way to get out-of-band access to the files on that volume. So, for example, if I wanted to add or delete config files, I couldn’t. I don’t need to access the recorder database, so it’s fine living in a Longhorn volume, and that also prevents corruption since it’s not on an NFS bind mount.
If you happen to find a way to access files in a Longhorn volume from outside HA, please let me know. I have tried installing an NFS and SSH server as a sidecar app in the same workload as HA, but it doesn’t work consistently.

With Longhorn you can set up ReadWriteMany volumes so that multiple pods (even across nodes) can access the same volume. I have used it quite a few times and it works well. Basically, Longhorn spins up the volume plus an NFS share-manager pod, and all the workload pods attach to that NFS export.

You could then, for example, have a pod running code-server that attaches to the volume and manage your HassIO config through a web-based VS Code.

https://longhorn.io/docs/1.1.0/advanced-resources/rwx-workloads/
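A claim along these lines is all it takes (the name and size are placeholders); anything that mounts it - HA, a code-server pod, whatever - sees the same files:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hass-config          # placeholder name
spec:
  accessModes:
    - ReadWriteMany          # Longhorn serves this through its NFS share-manager pod
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi           # placeholder size
```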


@mzac Thanks for pointing this out. Didn’t know this was possible with Longhorn volumes. I learned something new today. Will try it out for sure.


Seeing some interest in these things here and elsewhere, I wrote up a quick description of my HA-on-k8s setup and its manifests, relatively simple hand-rolled things:

Home Assistant on Kubernetes - Random Bytes (substack.com)
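The heart of it is nothing exotic - roughly a single-replica Deployment along these lines (the image tag, claim name, and the hostNetwork choice here are illustrative; the write-up has the actual manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
spec:
  replicas: 1                  # the recorder DB and config don't tolerate concurrent writers
  strategy:
    type: Recreate             # stop the old pod before starting the new one on upgrade
  selector:
    matchLabels:
      app: home-assistant
  template:
    metadata:
      labels:
        app: home-assistant
    spec:
      hostNetwork: true        # illustrative: the easy way to keep mDNS/SSDP discovery working
      containers:
        - name: home-assistant
          image: ghcr.io/home-assistant/home-assistant:stable
          ports:
            - containerPort: 8123
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: hass-config   # placeholder claim
```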


Thanks a lot for sharing your straight to the point setup, @cerebrate. Highly appreciated!

[skimmed over this ala TL;DR - reasoning seems solid]

I’m in the process of doing a bit of a ‘home-lab’ rebuild (I really don’t wanna run apps on my NAS), looking at clustering my HPE MicroServer & a handful of NUCs I’ve scored.

Been tinkering with Docker/Portainer/Kubernetes, so moving on to MicroK8s for my next trick/frustration, as I want to build in a degree of fail-over redundancy and set&forget.

Ideally I’d like to get Supervisor stable in load-balanced/redundant containers, and deploy RPi’s as semi-autonomous ‘satellite’ nodes for sensing, control & relays.

Not sure if I’m barking up the wrong tree or letting myself in for a whole world of hurt & wasted evenings/weekends.


That’s pretty much where I want to be because a lot of my projects are available as containers. It’s much easier to keep up with what you’ve installed and to cleanly remove something you don’t want any more.

Kubernetes seems like the way to go, especially because HA can be just a bunch of containers (JBOC?) but it’s been a bit of a slog getting it all happening and has moved to the backburner for me - more because of my ADD than anything else.

I have three or four nodes running Proxmox, so the plan is to have a VM on each one dedicated to the k8s cluster, letting the load and redundancy be shared around intelligently. It’s quite a learning curve, but I think it will be well worth it.

I looked at a bunch of k8s management services, including just installing it from scratch ([shudder] … don’t do that) and I ended up with Rancher, mostly because it was the first one that I got to work.
