Kubernetes vs. Supervisor

With Longhorn you can set up ReadWriteMany volumes so that multiple pods (even across nodes) can access the same volume. I have used it quite a few times and it works well. Basically, it spins up a volume and an NFS share-manager pod, then all the worker pods attach to the NFS share exported by that Longhorn pod.

You could then, for example, have a pod running code-server that attaches to the volume, letting you manage your HassIO config through a web-based VS Code.

https://longhorn.io/docs/1.1.0/advanced-resources/rwx-workloads/
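
For anyone who wants to try it, here is a minimal sketch of what the manifests might look like. The names, the namespace, and the longhorn storage class are assumptions you would adjust to your own cluster:

```yaml
# PersistentVolumeClaim backed by Longhorn with ReadWriteMany access,
# so several pods (even on different nodes) can mount the same volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hass-config            # hypothetical name
  namespace: home-automation   # hypothetical namespace
spec:
  accessModes:
    - ReadWriteMany            # Longhorn serves this via an NFS share-manager pod
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
---
# A pod mounting the shared volume, e.g. code-server for editing the config.
apiVersion: v1
kind: Pod
metadata:
  name: code-server
  namespace: home-automation
spec:
  containers:
    - name: code-server
      image: codercom/code-server:latest
      volumeMounts:
        - name: config
          mountPath: /home/coder/project
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: hass-config
```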

@mzac Thanks for pointing this out. Didn’t know this was possible with Longhorn volumes. I learned something new today. Will try it out for sure.

Seeing some interest in these things here and elsewhere, I wrote up a quick description of my HA-on-k8s setup and its manifests, relatively simple hand-rolled things:

Home Assistant on Kubernetes - Random Bytes (substack.com)

Thanks a lot for sharing your straight-to-the-point setup, @cerebrate. Highly appreciated!

[skimmed over this à la TL;DR - reasoning seems solid]

I’m in the process of doing a bit of a ‘home-lab’ rebuild (I really don’t wanna run apps on my NAS), looking at clustering my HPE MicroServer & a handful of NUCs I’ve scored.

Been tinkering with Docker/Portainer/Kubernetes, so moving on to MicroK8s for my next trick/frustration, as I want to build in a degree of fail-over redundancy and set&forget.

Ideally I’d like to get Supervisor stable in load-balanced/redundant containers, and deploy RPi’s as semi-autonomous ‘satellite’ nodes for sensing, control & relays.

Not sure if I’m barking up the wrong tree or letting myself in for a whole world of hurt & wasted evenings/weekends.

That’s pretty much where I want to be because a lot of my projects are available as containers. It’s much easier to keep up with what you’ve installed and to cleanly remove something you don’t want any more.

Kubernetes seems like the way to go, especially because HA can be just a bunch of containers (JBOC?), but it’s been a bit of a slog getting it all happening, and it has moved to the back burner for me, more because of my ADD than anything else.

I have three to four nodes running Proxmox so the plan is to have a VM on each one dedicated to the k8s cluster so that the load and redundancy can be intelligently shared around. It’s quite a learning curve but I think it will be well worth it.

I looked at a bunch of k8s management services, including just installing it from scratch ([shudder] … don’t do that) and I ended up with Rancher, mostly because it was the first one that I got to work.

I’m going back to the drawing board on this; I borked my Kubes setup while pushing my learning.

Been using Prox since forever & think it’s an awesome choice for a VM stack.

But I’ve found of late (last 2-3 years?) that I’ve hardly spun up any VMs, so I really don’t need the overhead; much of my effort has been in the container space, and the momentum seems to be heading to Kubes. (In fact, the only VMs I’ve spun up lately are for nested containers.)

I’m rebuilding my stack from the ground up, and tossing up which to use as my foundation: Ubuntu or Fedora, both with complementary pros & cons.
I expect I may end up with a combination of Portainer & Rancher (not RancherOS; that seems to be EOL’d) on these, backing onto a common iSCSI target on FreeNAS/TrueNAS (for the ZFS).

I actually find that I’m using VMs more than ever with my k8s experimenting. I have a template for an Ubuntu image with everything I like set up on it, and I can spin up a linked VM node in seconds when I’m adding/dropping nodes from the k8s cluster.

I also found that Rancher, while comprehensive, added a whole layer I had to learn and hindered my understanding of what was going on underneath. Not that hiding the details is a bad thing, but for learning how k8s works it tended to mask the internal workings.

I ended up rebuilding the whole stack with k0s, which allowed me to easily start/stop/add nodes while still creating stuff just by applying YAML files. I found this to be a useful middle ground.

I’ve been obsessing about getting a stable base k8s setup with monitoring and redundancy so I haven’t got around to putting HA on it yet though :slight_smile:

I’m also fond of virtualising the infrastructure under my k8s with Proxmox, Rancher, Kustomize.
I have a bash script that builds k8s nodes using the Ubuntu cloud image, layering on vNICs etc., and building out cloud-init to install Docker, Rancher and, for some nodes, the requisite NVIDIA bits to do CUDA (Deepstack camera analysis, Plex transcoding & BOINC).
Very useful for running separate dev k8s clusters and testing how to do clustered CUDA, which took countless tweak/reboot/test cycles.
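
For anyone curious, a minimal sketch of what such a cloud-init user-data file might contain; the package list and the Docker convenience-script install are assumptions for a lab node, and the NVIDIA/CUDA bits are left out:

```yaml
#cloud-config
# Minimal user-data sketch for a k8s node built from the Ubuntu cloud image.
package_update: true
packages:
  - qemu-guest-agent   # useful when the node runs as a Proxmox/KVM guest
  - curl
runcmd:
  # Install Docker via the convenience script (fine for a lab node, not for production)
  - curl -fsSL https://get.docker.com | sh
  - systemctl enable --now docker
```
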
Home Assistant itself is still running on HassOS on its own VM, albeit on Ceph shared storage, monitored and resiliently managed by Proxmox High Availability.

As I wrote in a blog post, using KVM VMs is also cool in that you can subtly adjust the hardware an operating system thinks it has. → https://overdr.ink/not-all-nvidia-gpu-accelerators-accelerate

I’m juggling a number of learnings here, and I’m quite a bit behind the curve.

Kubernetes certainly seems to be where things are going, as indicated by the number of tools & other resources that seem to have native support for it baked in.

I’ve been steeped in the POSIX space for quite some time (Linux, FreeBSD), doing things ‘the hard way’ - building by hand so that I’m fully cognisant of all the moving parts - but momentum seems to be moving to cloud & hybrid-cloud practices. (thinking out loud here, general terms)

I exclusively use POSIX on the server side, but a mix of Linux & W64 on the desktop; I spend most of my days in browsers & terminals, so the choice of desktop interface is pretty arbitrary.
Windows, VS Code, Azure, OpenShift, OVz, Prox, mk8s all make this an attractive & largely unavoidable prospect, and mostly imply that some of the hardest work will be in terms of my own thinking (I’m a solo admin, not employed in an enterprise where I get to learn from DevOps coworkers).

Other technologies are in the mix too, so I’m trying to incorporate this into my mental models:

  • CoreOS (and other immutable & ephemeral platforms)
  • Resin
  • Vagrant
  • Clustering
  • [adding others here later]

I know I’m making my life a lot harder for myself, but the reasoning is that this is a hands-on pet project I can immerse myself in to gain familiarity and, from there, more holistic expertise.

Something I’m still having trouble squaring is what’s best described as the master-slave/server-client (gawd, I hate that term) model, where there’s a central authoritative “Single Source of Truth” (SSoT), rather than a more meshed, distributed, fault-tolerant model (showing my own ignorance here).

My understanding is that Core is a subset of Supervisor (?), with Supervisor managing the provisioning of apps & stacks on Core in containers, so in theory I should be able to use a single “master” Supervisor to manage Cores across Areas & Zones.
If Supervisor and Core become unavailable to one another, functionality should be able to carry on until connectivity is restored & sync resumes.

I’m splitting up my setup across distinct isolated networks, for security & performance reasons, e.g. having some heavy lifting take place in the cloud (private on public) where I lack local resources.

  • What would be some of my architectural considerations?
  • SQLite on my RPi Core endpoints seems appropriate, but MySQL/MariaDB/PostgreSQL for the supervisor seems advisable (replicating data to an off-site backup); see the recorder sketch just after this list.
  • Supervisor on a public cloud if expanding beyond my home-lab, rather than punching holes in my network; and which one is authoritative, if any?
  • Can this distributed model be handled in HA, or do I need to stand up a hybrid-cloud with local & hosted Kubes?
  • How to ensure data integrity - for my Supervisor(s) AND endpoint node Core(s)?
  • Where do I have a complete lack of understanding, or a misapprehension, of what’s in play?
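
On the recorder point specifically, switching Core from the default SQLite file to an external database is a one-line change in configuration.yaml. A minimal sketch, assuming a hypothetical PostgreSQL host, user, and database name:

```yaml
# configuration.yaml (sketch) - point the recorder at an external PostgreSQL
# instance instead of the default local SQLite file. Host, credentials and
# database name below are placeholders.
recorder:
  db_url: postgresql://hass:CHANGE_ME@postgres.home.lan/homeassistant
  purge_keep_days: 14   # optional: how many days of history to keep
```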

I know this is a big braindump; some of these questions I would have popped into other chat forums, but my local timezone makes such real-time comms difficult/impossible.
I’m sure some of this is also covered in documentation, so if anyone’s going to RTFM @ me, please do so with appropriate links to TFM.

[enough for now…]

I have updated my HA-on-k8s setup in a few ways since that last write-up was posted, and I had occasion today to write a post describing those updates, so once again, for those interested:

I feel Home Assistant is all about choice.
So why not build the Supervisor so that we have multiple choices there as well?

  • Default: The way it is today with its own internal orchestration
  • Advanced: Let the Supervisor manage its “things” in an existing Docker/k8s instance (or whatever the container orchestrator of the future will be).

Today I run my HA in a VM. The comfort-layer the supervisor adds outweighs my need to put everything into k8s. But I would really love to have it both ways.

That would be great! Perhaps the k8s operator pattern would be a good way of supervising HA in k8s, where an HA supervisor operator would spin up pods/containers for the various add-ons, etc.
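
Purely to illustrate the idea (none of this exists today; the API group, kind, and fields are invented for this sketch), such an operator might watch a custom resource per add-on and reconcile each one into a Deployment and Service:

```yaml
# Hypothetical custom resource a "Home Assistant supervisor operator" could watch.
# Everything here is made up for illustration; no such operator or CRD exists.
apiVersion: supervisor.example.org/v1alpha1
kind: Addon
metadata:
  name: mosquitto
  namespace: home-automation
spec:
  image: eclipse-mosquitto:2        # container image the add-on should run
  configVolumeClaim: hass-config    # shared config volume, e.g. an RWX PVC
  ports:
    - 1883                          # ports to expose via a Service
```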

Sadly the K8s-at-home team has made the decision to stop maintaining their Helm charts. This doesn’t negate anyone’s comments in this interesting thread, but it may give pause to those who might be planning to move forward with their charts.

Just use GitHub - bjw-s/helm-charts: A collection of Helm charts. There’s also a really interesting example using Flux (GitOps approach) to install Home Assistant: home-ops/helmrelease.yaml at d7cb8cca3c6d070264a6628a2ee41a826fa5db49 · bjw-s/home-ops · GitHub
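
For orientation, this is roughly the skeleton of a Flux HelmRelease for Home Assistant built on the bjw-s app-template chart. The chart version, the HelmRepository name, and the values layout below are assumptions that depend on your Flux setup and the chart version you pick, so treat it as a sketch rather than a working release:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: home-assistant
  namespace: home-automation
spec:
  interval: 15m
  chart:
    spec:
      chart: app-template              # bjw-s general-purpose chart
      version: 1.5.1                   # assumption: pin whatever version you actually use
      sourceRef:
        kind: HelmRepository
        name: bjw-s                    # assumption: HelmRepository defined elsewhere
        namespace: flux-system
  values:
    image:
      repository: ghcr.io/home-assistant/home-assistant
      tag: "2023.6"                    # pin a concrete Home Assistant release
```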

Another option to throw into the low-resource pool is Podman. I’ve redeployed my HA and associated containers using Kubernetes YAML descriptor files with podman play kube <yamlfile>. You get the advantages of managing your descriptors on a single low-power host without the overhead, and with the opportunity to migrate to a kube platform later.
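
As a rough sketch of the kind of descriptor this takes (the image is the official Home Assistant container; the host path is a placeholder):

```yaml
# home-assistant-pod.yaml - a plain Kubernetes Pod spec that podman play kube accepts.
apiVersion: v1
kind: Pod
metadata:
  name: home-assistant
spec:
  containers:
    - name: home-assistant
      image: ghcr.io/home-assistant/home-assistant:stable
      volumeMounts:
        - name: config
          mountPath: /config
  volumes:
    - name: config
      hostPath:
        path: /srv/home-assistant/config   # placeholder path on the host
        type: Directory
```

Then podman play kube home-assistant-pod.yaml brings it up, and the same file is a starting point if you later move to a real cluster.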

I manage it with Ansible: Automate container management on Fedora Linux with the Podman Linux System Role - Fedora Magazine

How do you plan to connect to the LAN to have HA discover devices? mDNS etc…

That’s a good point.

I think Avahi can be set up to act as an mDNS proxy across network segments? Presumably one of these in the cluster would allow Kubernetes to see what is going on in the host.

You can use the host network on your pods:

https://www.alibabacloud.com/help/en/container-service-for-kubernetes/latest/use-the-host-network
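
Concretely, that is a single field in the pod spec; with it the Home Assistant container shares the node’s network stack, so mDNS/SSDP discovery behaves as if HA were running directly on the host (the trade-off being that the pod is pinned to the node’s ports and IP). A minimal sketch, with placeholder names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: home-assistant
  namespace: home-automation
spec:
  hostNetwork: true                    # share the node's network namespace
  dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working with hostNetwork
  containers:
    - name: home-assistant
      image: ghcr.io/home-assistant/home-assistant:stable
```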

This comment is very underrated! After setting up K8s, my small machine is already using 15-20% CPU without running any services. It’s just too much for a power-efficient setup, so I will go back to normal Docker.