Running supervisor in (Docker) container

Hi folks,

context: I’ve had a power outage, causing some hardware damage & my technical debt has caught up with me. Re-architecting/rebuilding my stack.

I’m looking for a straight-up basic/standard method of standing up a Supervisor/Supervised instance INSIDE a Docker container, and NOT on the dom0 host.

What I’m trying to achieve is a sort of distributed architecture.

  • I have a segregated network zone for all IoT - piping through an OpenWRT proxy/gateway - so that I have visibility & control over what’s transiting my network
    • A basic HA instance on a RPi on the IoT network to talk ‘locally’ to nodes
  • The above HA instance is controlled by, or acts as a worker-relay to, the primary Supervisor on my LAN/lab - presently running in a VM

What I’m hoping to do is something along the lines of:

  • Kill off my VM stack (HA is the only stack I still have running in a dedicated VM), as I don’t need the resource overhead
  • Roll Docker (à la Portainer) on my server & a NUC
    • HA/Super runs primarily on a dedicated NUC (in a Docker Swarm container), but with fail-over redundancy on the server
    • said HA is managed by my Docker Swarm setup
  • The other HA (non-Super) on an RPi in my IoT zone has similar redundancy on my container server

I appreciate that HA/Super already runs on a Docker setup, so this should be able to run nested.

I do not want HA/Super to run on my server/dom0, but be managed (as a host) by my setup.

How can this be achieved?
Is there a docker-compose.yml available somewhere to spin up a Super instance?

Help would be appreciated

You can’t

No, because the Supervisor is a Docker manager, you don’t manage it - it manages your system.

The ADR explains what is supported.

Thanks for this, @Tinkerer.

I certainly appreciate & respect this position/architecture, but running Docker nested (DinD) is not unusual or beyond the pale.
I don’t want any app/stack to gain privileged access to my server - no exceptions.

Considering that my Super represents a SPoF, I’m trying to build in some resiliency.
Realistically, I expect I’ll ultimately run HAOS/HASSio in a VM or under LXC/LXD.

IINM, even base (sans-Super) HA is containerised, so wanting to add the Super functionality isn’t such a stretch.
Heck, it’s virtually right there!

Even with the “at your own risk” caveat, how-to?
(I’m using the devcontainer profile as a jumping-off point)


If you don’t run Supervised exactly as documented it’s likely to break. The further you stray from the requirements, the more likely that is, as very many people have found out the hard way (it’s no fun waking up to discover that HA isn’t running any more, and won’t start).

You’re going to be a lot better off managing your own Docker stack and using a Container install.
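For reference, the Container install is a single image, and a minimal docker-compose.yml for it is straightforward. A sketch (the config path is just an example; adjust to your layout):

```yaml
# Plain Home Assistant Container install - no Supervisor, no Add-ons.
services:
  homeassistant:
    container_name: homeassistant
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - /srv/homeassistant/config:/config   # example path; your HA config lives here
      - /etc/localtime:/etc/localtime:ro    # share the host's timezone
    restart: unless-stopped
    network_mode: host                      # needed for discovery (mDNS, etc.)
```

The trade-off versus Supervised is that you handle updates yourself: pull the new image and recreate the container.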

If you want to go ahead then nobody will stop you, it’s your system after all.

This is totally fair!

I’m certainly not intending to use this as my “production” instance; rather, I’m exploring options for HA/clustering/redundancy/fail-over à la Swarm.

My ‘Primary’ will be bare-metal on a NUC, and I’m looking at a peered instance running in a container for when (not if) it goes offline/unresponsive.
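For what it’s worth, the fail-over part can be sketched for a plain HA Container (not Supervised) as a Swarm stack - a single replica that Swarm reschedules when a node dies. This is a hypothetical sketch: the node label and the shared-storage path are assumptions, and host networking plus USB passthrough (Zigbee/Z-Wave sticks) don’t work well under Swarm:

```yaml
# Hypothetical Swarm stack file: one HA replica, rescheduled on node failure.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    ports:
      - "8123:8123"                        # no host networking under Swarm, so discovery suffers
    volumes:
      - /mnt/shared/ha-config:/config      # must be shared storage (e.g. NFS) for fail-over to work
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.ha == true         # hypothetical label marking eligible nodes
      restart_policy:
        condition: any
```

The /config volume is the crux: unless it lives on storage both nodes can reach, a rescheduled replica comes up amnesiac.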

Looking at the installation RTFM, the only(?) way to really run Super as intended is on Linux - makes sense - but I guess what I really want as my starting point is the ability to run Add-ons on my container instance, since I don’t trust 3rd-party components as far as I can throw them (hence Super being a container admin).
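Worth noting that Add-ons are themselves just containers the Supervisor runs for you; on a Container install you run the equivalents yourself, side by side. A sketch, using the upstream Mosquitto image as a stand-in for the MQTT add-on (paths are examples):

```yaml
# HA Container plus a DIY "add-on": the upstream Mosquitto broker.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./ha-config:/config
    network_mode: host
    restart: unless-stopped
  mosquitto:
    image: eclipse-mosquitto:2
    volumes:
      - ./mosquitto/config:/mosquitto/config   # broker config you write yourself
    ports:
      - "1883:1883"
    restart: unless-stopped
```

You lose the one-click install and the Ingress UI integration, but you keep full control over what each container can touch.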

My present setup is running a “privileged” Super’d VM in my trusted network, with a hardened Core on a RPi in my IoT zone. The Super is managing the Core, which I treat as an ephemeral/immutable satellite.

I apologise if this all seems a little rambly; I’m ‘rubber ducking’ a bit here to work through my thinking before committing time & effort to the BIG, full-stack rebuild of my entire infrastructure I’m about to undertake.
Some experimentation/R&D will be done, but it’s prudent to identify some dead-ends before starting down any particular path.

Some of this comms may be better done via the Discord channels, but I’m limiting my s/n usage (for mental health reasons), and because I suspect some of this may be of use to others in future.

Some of your comments & suggestions have been most helpful, @Tinkerer, and I’m taking them on board

UPDATE:

My hope was to achieve HA for Super via Docker Swarm, but if the container doesn’t wanna play ball yet, the smarter choice for now may be to take a step back, so I’m looking lower-level.

The server I’m presently running & planning to rebuild is a Proxmox box.
I had hoped not to need a full VM stack on my NUC, as I would’ve preferred to avoid the resource overhead for what is essentially a single-app box, but the cost:benefit may be in its favour.

I should be able to achieve the outcome by having PVE manage the HA clustering complexities for me, since this is at the limits of my expertise.
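If that route pans out, PVE’s HA stack is normally driven via ha-manager, which writes config under /etc/pve/ha/. A rough sketch of what that might look like - the VM ID, group name, and node names here are all hypothetical:

```yaml
# /etc/pve/ha/groups.cfg - hypothetical two-node group, NUC preferred
group: ha-group
        nodes nuc:2,server:1
        restricted 0

# /etc/pve/ha/resources.cfg - register VM 100 as an HA resource
vm: 100
        group ha-group
        state started
        max_restart 2
```

One caveat: PVE HA needs cluster quorum, so two nodes alone aren’t enough - a third vote (e.g. a QDevice) is required for fail-over to actually trigger.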

Thanks again, @Tinkerer, for your comments & suggestions.

If I manage to pull this off, I’ll try to remember to come post lessons & outcomes


Well, the problem is that the Supervisor basically requires unconditional and sole control of the HW it runs on, which IMHO is a tad pretentious and unreasonable just to provide update monitoring and spin up a couple of extra containers on the side.

I’m running it on a Pi 4 with 4G of RAM, and once I moved the Recorder from SQLite to MariaDB (DBMS not running on the same HW as HA), the “average” of the load average for the last year was:

load average: 0.45, 0.30, 0.26

So since I was “wasting” a Pi, I added it to the swarm rather than buying another one, and now the Supervisor is all very pissed at me.

IDK, having Supervisor playing nice with swarm (or kube, or whatever else) would only be the sensible thing to do.

The Supervisor is a Docker manager, mixing multiple Docker managers is a good way of causing yourself problems.