How to run Home Assistant Supervisor in a Docker container

I guess I don’t know what add-ons you plan to use and how (other than Frigate, obviously). I’ll run down mine; maybe that will explain my logic. Of the ones I use with a UI (some have no UI at all), these provide a UI that I would only ever want an admin (i.e. me) to see:

  1. Adguard Home (DNS server)
  2. Grafana* (analysis and visualization of HA data)
  3. Home Assistant Google Drive Backup (backup scheduling/management)
  4. Node RED (alternative automation platform)
  5. phpMyAdmin (raw view into HA DB, use occasionally for debugging tough issues)
  6. SSH & Web Terminal (HA management/cli)
  7. Studio Code Server (HA config in IDE)
  8. Zigbee2MQTT (Manage zigbee devices)

I strongly prefer that admin-level tasks can only be done from within my LAN. So even though I use HAOS and ingress, I actually have my Cloudflare firewall set to block access to the ingress endpoint for these. Perhaps I’m being overcautious, but at least two of these come with a terminal that can do anything, so I’d just prefer that using them requires me to be on my LAN first, either physically in my house or via VPN.
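For anyone wanting to replicate that setup: HAOS serves all add-on ingress UIs under the /api/hassio_ingress/ path prefix, so a single Cloudflare firewall rule expression along these lines (the Block action is configured in the dashboard; this is an illustration, not my exact rule) keeps those admin UIs LAN-only:

```text
# Hypothetical Cloudflare firewall rule (action: Block).
# Every ingress UI lives under this path prefix, so one rule covers them all.
(http.request.uri.path contains "/api/hassio_ingress/")
```

LAN and VPN clients reach HA directly rather than through Cloudflare, so they are unaffected by the rule.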

The others I use with a UI are these:

  1. Hedgedoc (doc and notes tool)
  2. Sharry (file sharing)
  3. Vaultwarden (password manager)

These are not admin-only. People other than me have accounts they log in to see/manage their passwords, notes and files. Therefore none of these have ingress options, because ingress doesn’t really work with user-level applications. When an add-on uses ingress, everyone looking at it sees the same admin-level user account. Which user is logged in on HA, and what role they have, is not passed through to the ingress application. If these 3 used ingress, then everyone would see and manage the same passwords, notes and files — not really a workable solution. So these are all managed like standalone applications and accessed outside of HA (port forward, reverse proxy, Cloudflare firewall in front blocking external access to admin features).
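As a concrete sketch of that “standalone app, admin features blocked from outside” pattern (hostnames, upstream names, ports and the LAN range below are placeholders, not my actual config), an nginx server block for something like Vaultwarden might look like:

```nginx
# Hypothetical reverse-proxy config: the app is reachable from outside,
# but its /admin panel (Vaultwarden's admin page path) only from the LAN.
server {
    listen 443 ssl;
    server_name vault.example.com;   # placeholder hostname

    location /admin {
        allow 192.168.1.0/24;        # placeholder LAN range
        deny  all;
        proxy_pass http://vaultwarden:80;
    }
    location / {
        proxy_pass http://vaultwarden:80;
    }
}
```

The same shape works for any of the user-facing apps: one location for the admin surface with an IP allowlist, one catch-all for everyone else.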

Btw, I put a star next to Grafana because that’s the one odd case. If the logged-in HA user and their role were passed to the ingress application, that’s the only one I would probably treat differently. Potentially I (as the admin) would make some dashboards and things, and then other users in my house would just see those as view-only when they clicked Grafana. But since currently anyone who clicks Grafana gets the same full and complex admin view, it stays hidden from everyone but me.

I should note that you will probably have the same problem with Frigate. If you use the add-on, then most likely everyone in your house who can click the Frigate option in the side nav will see the same admin view, probably with full permission to change settings, delete videos, turn off cameras, etc. If you want view-only access for everyone but yourself, then ingress won’t work for you anyway.

The VPN reliability issue is due to the home being behind a triple NAT; no implementation change can really fix that, unfortunately. But I manage alright with the two together. It’s more about the inconvenience of it: I’m able to do it now, but I want to be able to do it more conveniently.

Implementing a version of Ingress is definitely an option, though I still think figuring out how to convert Supervisor to a container is a more appealing & potentially easier & more useful solution for me to attempt. But I’ll look into both.

That’s exactly what I was looking for. Thank you. I still don’t know what Supervisor does to determine which of those ARCH or FROM arguments to use, but at least I know where those variable strings are coming from, which is a big help.
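For reference, the arch strings HA builds use are amd64, i386, armhf, armv7 and aarch64, and they derive from the machine type. A hedged sketch (my own illustration, not Supervisor’s actual code) of how such a mapping could be done in shell:

```shell
#!/bin/sh
# Illustrative mapping from `uname -m` output to the arch strings that
# Home Assistant uses in its BUILD_FROM/ARCH build args. This is an
# assumption-laden sketch, not what Supervisor actually runs.
ha_arch() {
  case "$1" in
    x86_64)    echo amd64 ;;
    i386|i686) echo i386 ;;
    armv6l)    echo armhf ;;
    armv7l)    echo armv7 ;;
    aarch64)   echo aarch64 ;;
    *)         echo "unsupported: $1" >&2; return 1 ;;
  esac
}

# Print the HA arch string for the current machine.
ha_arch "$(uname -m)"
```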

I already knew about those for ENVs; I just didn’t know it was handled in Supervisor rather than the app itself. Thanks for that.

From a Docker standpoint that is all silly & completely unnecessary & generally bad build practice. But it explains why it’s so hard to get it into a separate Docker container. Since it seems to require a specific version of Debian, it should be built on that base for the container; but since it already has a separate container, I assume that means it’s running that other container on a different base, so I’ll have to find equivalents or have it run as 2 containers. Not sure which will be easier, but I’ll look into it. Thanks for all the info. From what I’d been able to get from others I didn’t have a solid roadmap; more like I just had a compass saying the destination was west, that it was either a lot north as well or just a little north, & somewhere between 100 to 100,000 miles in whichever direction. You didn’t give me an exact address, but I at least know the city now & the bus routes & freeways that are used, which makes it much easier.

As of now, that’s how I have mine set up.
I have a custom cronjob runner that I’m most concerned about, because it has a lot of access & its own security is not great, as what it’s built from is intended to be run when needed & shut down when not being used to edit the system cronjobs. I have mine running as its own cron manager & runner because WSL sucks at handling cron. It’s built off of Crontab UI, but with Python & Perl & a few other dependencies added that are needed for cronjobs that don’t have a Docker container to run on, or that I don’t want running on their Docker container, like my Zap2it TV guide updater for xTeVe, a custom reader that reports & fetches certain private things that need my account to log in, those kinda things. Sometimes I need an update of those things pulled to my home server when I’m not there, so it’s nice that the cron manager has a Run Now button so I can have it run & it’s done. It has a username & password set up & I added a self-signed SSL cert to the container, but when it communicates with other Docker containers (which are not in the same Docker network, for security reasons) that traffic is sadly not encrypted, so it can be snooped on. But the cron runner has access to every container, as well as locations on the drive that contain sensitive things, & some of the system. Home Assistant has good SSL & encryption & I have a ridiculously complex password set up on it, so I feel secure enough if that has to be gotten through before getting to the cron manager.
I also have Dozzle for logs, which has more access than I feel comfortable exposing to the internet; it has very poor security & all local traffic is HTTP, with NginX only adding SSL once traffic goes outside. This one has access to pretty much all of Docker, but at least it’s read-only access for most things.
I also run, surprise, Portainer, which I’d prefer to access through this 2-layer method if possible, but it’s pretty secure on its own.
The other things like Duplicati, the NginX Proxy Manager UI, xTeVe, & DizqueTV would all be nice to have there, but I wouldn’t put effort into doing so, as I’ve yet to have any reason to access them while away. The rest of my containers intentionally have access from outside through NginX & other services like DuckDNS or Cloudflare, so they are fine.
I don’t actually run Frigate; it’s just what I was referred to by someone saying they used a modified version of the Frigate add-on to do what I was wanting. It’s possible they were utilizing ingress & just didn’t know that’s what they were doing, or just didn’t want to say so. Trying to get more information about how they modified it, what was different, etc. proved useless, as they didn’t seem to want to explain or help, just to say that it could be done.

@LostOnline you do not know what D-Bus / Linux IPC is (or, of less relevance but still noted, the iframe construct for pushing HTML), yet you seem awfully combative here, and quick to point out how “wrong” some solutions are … this is interesting?

I am chiming in because, for my own reasons, I would actually prefer to run Supervisor as a container, exposing the Docker socket to it directly (as well as /dev and D-Bus messages). To summarize for anyone else who comes searching for the same but doesn’t have a background in containerization: doing this is completely possible - easy, even, as for anything you could install on your local Linux box - but since the community doesn’t support it you’ll have some fun configuring it the first time, and will periodically see the config break. Most importantly, the advantages largely evaporate, as the images & config aren’t created/tested upstream (i.e. you’ll be rolling your own). [Note this is all different from DinD, which I’ve seen lumped in on other HA threads.]
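For the curious, the rough shape of such a deployment - unsupported, and only a sketch; the image name, env vars, and mounts below approximate what the supervised installer wires up on the host, and will drift between releases - looks something like this in Compose form:

```yaml
# Hedged sketch only: NOT an upstream-supported way to run Supervisor.
# Mounts mirror what Supervisor needs on a supervised install: the Docker
# socket, D-Bus, /dev, and a persistent data directory.
services:
  supervisor:
    image: ghcr.io/home-assistant/amd64-hassio-supervisor:latest  # pick your arch
    privileged: true
    security_opt:
      - apparmor:unconfined
    environment:
      - SUPERVISOR_SHARE=/usr/share/hassio   # data path, as on a supervised host
      - SUPERVISOR_NAME=hassio_supervisor
    volumes:
      - /run/docker.sock:/run/docker.sock    # lets it manage sibling containers
      - /run/dbus:/run/dbus                  # host D-Bus for network/host control
      - /dev:/dev
      - /usr/share/hassio:/data
    restart: always
```

Note that Supervisor talks to the host Docker daemon, so the containers it creates are siblings on the host, outside this Compose file’s management - which is exactly the “Supervisor expects to manage your stack” caveat discussed below.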

Caveat: I’m new to HA, but it appears Supervisor is a local/host program only, not running as a container - a lot of responses seem to imply that it runs as a container “somewhere” - unless setting that up is baked into the .deb (which I have not checked, but which would be unusual). It’s playing the role of “orchestrator”.

I’m hoping to install Supervisor - and Docker - into an LXD-managed system container, but I have some looking to do to see whether that, too, will just be config hell. It would be nice, and easier for me personally, to manage in my environment than a full VM.

It is a container; see it listed here in Portainer.

It is a container, and its role is as a Docker manager.

Then don’t use the Supervisor. The Supervisor expects to manage your stack. It’s designed to manage your stack.

Well that proves it! Thanks.

Is this set up by hooks run as part of the .deb install? (There is only the .deb install listed in the instructions I saw.) From the perspective of writing SaaS apps on the backend of some company’s product this is odd IME - e.g. you’d configure deployment of a Supervisor image via Compose or one of the many higher abstractions (Terraform, etc.). But it sounds very plausible that for the HA team, “shipping” this system to end consumers as a .deb is more maintainable (??).
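To make the .deb-install wiring concrete: the supervised installer drops a systemd unit that wraps the Supervisor container’s lifecycle. A simplified, approximate sketch of such a unit (names and paths are illustrative, not copied from the package; check /etc/systemd/system on a real supervised install for the actual one):

```ini
# Approximate sketch of a hassio-supervisor systemd unit.
[Unit]
Description=Home Assistant Supervisor
Requires=docker.service
After=docker.service dbus.socket

[Service]
Type=simple
Restart=always
# Clean up any stale container, then run a wrapper script that
# docker-starts the Supervisor container with the needed mounts.
ExecStartPre=-/usr/bin/docker stop hassio_supervisor
ExecStart=/usr/sbin/hassio-supervisor
ExecStop=-/usr/bin/docker stop hassio_supervisor

[Install]
WantedBy=multi-user.target
```

So systemd ensures the orchestrator container itself exists and restarts; everything below it (Core, add-ons) is the Supervisor’s job.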

I’ve briefly searched again but still can’t find where this is documented - if anyone has a developer page or docs they could link that would be awesome.

Thanks for your reply.

I now see systemctl restart hassos-supervisor is used to force a reload of the :latest Supervisor, over on the dev instructions.

This is a pattern I haven’t seen used - IME :latest is typically strictly avoided when shipping a system to production (briefly: it’s error-prone due to Docker’s willingness to default to it, and it sounds like a dynamic entity but it’s not - it’s just another explicitly set tag - so making that explicitly created tag something meaningful, like a semver, closes several common error cases). Then an equivalent of a restart: always policy is used to leverage the container runtime to ensure your orchestrator (Supervisor for HA) is always running and thus able to do its job.
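The pattern described above, in generic Compose form (the image name and tag are placeholders for illustration, not anything HA ships):

```yaml
# Generic illustration of the tagging/restart pattern, not an HA config.
services:
  orchestrator:
    image: example/orchestrator:2023.10.1   # explicit semver tag, never :latest
    restart: always                         # runtime keeps the orchestrator alive
```

With a pinned tag, every pull and restart is reproducible, and restart: always makes the container runtime (rather than an outer systemd unit) responsible for liveness.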

It is a container, and its role is as a Docker manager.

Are there any docs or explanation you could point me to for what the HA systemd task is doing, other than tearing up/down the Supervisor container? Watchdogging the system (and if so, why not rely on healthcheck/liveness checks from your container framework)? I feel like I’m missing something about why we need systemd to orchestrate-the-orchestrator :slight_smile: Perhaps that just predates the work the HA team did at some point to fully dockerize the system?

Then don’t use the Supervisor. The Supervisor expects to manage your stack. It’s designed to manage your stack.

Agreed, and I want to stick with that functionality. Among reasons I likely don’t know about yet, it seems that to use community plugins (or rather, to keep them updated without a bunch of manual work) I need Supervisor.

HA is clearly a large / complex / established system, so I undoubtedly have a large learning curve to climb :slight_smile: I’ve tried to leave most of my context out as this isn’t my thread, and it’s possible my ‘requirements’ will change as I understand HA more - but keeping things like my MQTT broker managed outside of HA, fitting HA’s data into my existing backup schemes, etc., lead me to wanting to understand what HA really needs to orchestrate besides “everything”.

That’s the developer docs, not the user docs… and the developer docs for developing the Supervisor.

Depends on what you mean by “plugin” since that’s not a term that HA uses.

  • Add-ons are just software running in a container handled by the Supervisor
  • Custom integrations aren’t handled by the Supervisor

Don’t confuse HA (Core, the software) with the larger ecosystem.