How to run Home Assistant Supervisor in a Docker container

It’s not a semantic argument. You said the add-ons are containers running in a container which is 0% accurate. All of the containers, including the add-ons, are a single “layer” of containers that run alongside each other.

As far as your other point, why not just run HA OS on bare metal or in a VM (or HA Supervised, but that’s really not recommended)?


I don’t know the particulars, as I don’t have that version, but I know that an add-on passes through 3 points before it gets to the outside world, while the stuff running on Home Assistant passes through 2. So however it works, they are not parallel. As I have said, I don’t run it myself, so I cannot really tell you more.

I run a Windows server already; I have no desire to run a separate machine for Home Assistant. As for the VM, I think that’s a dumb argument as well. I don’t want it to be completely separate, nor do I want to waste resources on a VM. I actually don’t run it in Docker either: I found a “Portable” version that runs well as a stand-alone app in Windows, since running in WSL2 has problems with networking & working properly, especially with detecting devices. My setup has a very low footprint & works well with all my Docker containers, something I’ve also heard running HA OS in a separate VM can cause problems with.

I also want to have more control over my containers than HA OS allows, & I’m already running Docker with plenty of containers. If Supervisor really sits next to the containers, similar to how Portainer works, & just manages the Add-On containers & how they integrate into Home Assistant, then there’s no reason it shouldn’t be able to run as a regular Docker container, aside from an invented limitation to emphasize what someone thinks is a better option.

An iframe by definition is just a pointer that your browser follows; it sounds like you’re really looking for a proxy.

No, that’s not how it works. I’d love to see a link to where you got that information.

Yes and no at the same time. A proxy in the traditional sense redirects something, so that would require the thing being redirected to to be accessible. That’s what I need to avoid. If I set up NginX to tell anything that hits it looking for myface.duckdns.org to go to 192.168.1.200, then anything outside coming in has access to it. I can’t have that. Apparently the Frigate Proxy add-on does what I want, supposedly, & it does utilize NginX to do that, but a little research told me that doing so prevents that NginX from doing what it is normally intended to do.

What I need is for it to redirect, or proxy, to an address that would be correct if I were in the local environment, but is not accessible outside of the Local Network.
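
As an aside, one conventional way to get “reachable locally, not from outside” with NginX itself is to fence the proxied location to the LAN subnet. This is only a hypothetical sketch (hostname, subnet, and target address are examples carried over from above), and it does not by itself give the “through Home Assistant only” path being asked for:

```nginx
server {
    listen 443 ssl;
    server_name myface.duckdns.org;   # example hostname from above

    location / {
        allow 192.168.1.0/24;         # LAN clients only
        deny  all;                    # everyone else gets 403
        proxy_pass http://192.168.1.200:5000;   # example internal target
    }
}
```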

Addons like the frigate addon use ingress UI. What that usually means is that the UI of the service is accessible to logged-in HA users as part of the HA UI.

I’m going to borrow a screenshot, since I was just talking to someone about this addon in another thread. It looks like this:

You can see how there’s a frigate nav option in HA and it opens within HA. You’d also notice if you tried it that you were never prompted for authentication when you opened frigate (or you shouldn’t have been; I don’t use frigate so I can’t confirm it does this part correctly). That’s because HA addons using ingress are supposed to be accessible to any logged-in HA user. They don’t have their own separate auth, since the HA user has already been authenticated, so there’s no need.

However, I should note that HA doesn’t care whether you’re on your LAN or not. If you are logged in to HA then you can get to frigate via the side nav, internal or external URL. If that’s not what you want then addons won’t work for you; that’s not customizable.

When you have a Supervised or HAOS install, it makes a docker network called hassio. Home Assistant (or core in this context), supervisor and each addon all run as containers in this one network (except addons that use host network, obviously). Not really sure where your points are coming from, but that’s how it works: all individual containers in one docker network.
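
For anyone who wants to verify this, the flat layout is visible straight from the docker CLI on a Supervised/HAOS host (container names below are typical, but vary by install):

```shell
# Run on the Supervised/HAOS host itself:
docker network ls                 # includes a network named "hassio"
docker network inspect hassio \
  --format '{{range .Containers}}{{.Name}} {{end}}'
# Typically lists hassio_supervisor, homeassistant, hassio_dns and the
# addon_* containers side by side: one flat layer, nothing nested.
```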

This comparison to portainer is bad. That’s just one part of what supervisor does.

Supervisor is intended to be used in an appliance install. In addition to installing new containers, it also expects a particular file structure on the host system to be made into volumes, and particular services to be installed that it can interface with over dbus to do things like add and remove apparmor profiles, change network settings, start, stop and update host services, shut down and reboot the machine, listen for new hardware and inform other containers about it, etc. This is in addition to the stuff you’d expect portainer to do (start, stop and remove containers, create/remove networks, create, pull and remove images, etc).

It’s not an independent container at all. If you try and run it in any random docker setup it’s going to fail. It is deeply dependent on the host being configured and running in a particular way.


Have you considered a VPN for remote access instead?


Thank you very much for your reply. It’s the 1st one to actually provide any real information.

Yes, that’s exactly what I DO want. That’s the problem with iFrames: they do not do that. I 1st thought iFrames would, but then I learned they are essentially useless, since they rely entirely on your browser to do everything; they just put it in a frame. They also go back to the start whenever the tab is switched or refreshed, which is even more useless.

See, that’s where the conversations get really annoying. Home Assistant people will say “You don’t need Supervisor if you are in Docker because everything the add-ons do can be done in a Standalone Container” while at the same time saying that Supervisor does more than just manage containers. I point out the UI-integration aspect that the containers definitely do NOT do, & everyone just ignores that & changes the topic for a bit, then comes back saying the same thing again.
I know it does stuff that Portainer doesn’t, & Portainer does stuff that it doesn’t, but Portainer is a Docker manager & UI. Supervisor is a Docker manager & UI too, but it does things differently, has different requirements for the Dockerfile, etc. That’s exactly why it’s needed alongside Portainer: because they aren’t the same.

I mean, Portainer does that too. It’s used to install docker containers. It just does so using the Docker arguments, while Supervisor has its own arguments.

This can easily be accomplished with Bind Mounts.
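
To make that concrete, here is a hedged sketch of what bind-mounting Supervisor’s expected layout could look like. The /usr/share/hassio path matches the supervised installer’s default data directory, but the flags shown are nowhere near a complete, working Supervisor launch, and the later replies explain why mounts alone aren’t sufficient:

```shell
# Illustration only, not a supported or complete invocation:
docker run -d --name hassio_supervisor \
  --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/share/hassio:/data \
  ghcr.io/home-assistant/amd64-hassio-supervisor:latest
```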

I’m not sure exactly what that is, but if all the containers are in the same stack, I think that can be handled. All the things it accesses are part of Supervisor’s container or Home Assistant’s, so they’re on the same Docker network, which happens in a stack by default; that isn’t a problem. If it needs something else, that could be another stack if nothing else. So this is definitely possible, even if I don’t know exactly what is being referred to.

These are all things that can be done with Portainer, & can easily be possible within a properly structured Docker-Compose. As for updates, Watchtower does a good job of handling them, so there’s no reason Supervisor couldn’t handle those tasks for its stacks as well.

Now this would be something that couldn’t be done, but it wouldn’t need to be either. I guess technically it could be, but there’s no reason for it: Portainer & Watchtower both have the ability to start & stop containers, & as far as the containers are concerned, that IS a restart of the computer.

If you mean detect hardware on the network, the same way Home Assistant does, then that’s definitely something it can do. As for informing other containers, if they are in the same stack that’s as easy as it possibly could be: all ports are open within a stack, so they can talk as much as they need to.

I mean, the main thing is that it will fail at the Docker build stage because of the stupid BUILD_FROM argument all add-ons seem to use. As for the system needing to be configured in a certain way, that’s pretty much all Docker containers. That’s why you set those arguments in ENV variables & bind mounts.
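
For what it’s worth, the build-stage failure is just a missing build argument. Add-on Dockerfiles conventionally begin with an ARG/FROM pair, so the args can be supplied by hand; the base image below is an example value, not necessarily the right one for a given add-on:

```shell
# Add-on Dockerfiles typically start with:
#   ARG BUILD_FROM
#   FROM $BUILD_FROM
# so a manual build just needs the args passed in:
docker build \
  --build-arg BUILD_FROM=ghcr.io/home-assistant/amd64-base:latest \
  --build-arg BUILD_ARCH=amd64 \
  -t my-addon .
```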

I actually have a VPN set up, but it is more likely to fail, & if it’s down I have to be there physically to fix it. I actually have 2: 1 set up in my router as well as 1 in a container, but each has its limitations, the biggest being that they take over my internet. So with 1 I have no internet while it’s being used, & with the other I do, but that internet is going into my home, then to me through the VPN. With the 2 set up I rarely have it so I can’t get in, but it’s very inconvenient, & the whole point of Home Assistant, of Smart Homes in general, is to make things more convenient.

:roll_eyes: Really?

It has dependencies outside of docker entirely. The supervisor container requires access to dbus on the host. Which means it can communicate directly with services running on the host by sending them commands and receiving messages. It accesses and depends on things which are not in any container at all.
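
For context on what “access to dbus on the host” means in docker terms: it is typically granted by mounting the host’s system bus socket into the container, along the lines of this generic sketch (the image and flags here are illustrative, not Supervisor’s actual invocation):

```shell
# Generic illustration of host D-Bus access from a container:
docker run --rm -it \
  -v /run/dbus:/run/dbus:ro \
  -e DBUS_SYSTEM_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket \
  alpine sh
# Processes inside can now talk to host services (systemd,
# NetworkManager, etc.) over the system bus, which is exactly the
# host coupling described above.
```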


I will note that this is fair. I’ve actually stopped saying this without a caveat for the UI bit because of that. I’m personally not a big fan of ingress. I mean it’s a very cool feature and it works well for people but I don’t like the embedded UI and I would prefer the UI of addons only be accessible within my LAN since all are admin only anyway.

That being said, the ingress UI is the only thing that’s quite difficult to replicate in a docker setup. It can be done, but it would take a lot of effort. You’d basically need to build a kind of supervisor light in between to check that the HA user was authenticated and then proxy the request to the application, and set up external/proxy auth within that application, which isn’t always easy.
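
As a toy illustration of that “supervisor light” idea (everything here is a stand-in: the token store, the upstream fetch, and the function names are hypothetical, and Home Assistant’s real auth flow is more involved than a set lookup):

```python
# Toy sketch of an auth-checking shim in front of a LAN-only app.
VALID_SESSIONS = {"example-ha-session-token"}  # hypothetical session store

def fetch_upstream(path: str) -> str:
    # Stand-in for forwarding to the internal app,
    # e.g. http://192.168.1.200:5000 on the LAN.
    return f"upstream response for {path}"

def handle_request(path: str, session_token: str) -> tuple[int, str]:
    """Return (status, body); refuse unless the HA session checks out."""
    if session_token not in VALID_SESSIONS:
        return 401, "Log in to Home Assistant first"
    return 200, fetch_upstream(path)

print(handle_request("/live", "example-ha-session-token"))  # 200 path
print(handle_request("/live", "bogus"))                     # 401 path
```

The real work is in the two stand-ins: validating the HA session against Home Assistant itself, and doing an actual reverse-proxy pass-through instead of a function call.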

All other aspects of addons are fairly easy to replicate in a docker setup. Yes, it is nice that addons configure integrations for you, but configuring them yourself is always possible, and most integrations have pretty good docs. Backup is nice, but users heavily invested in personal docker setups generally have backup mechanisms in place. And besides those aspects, they really are just docker containers.


That was about the Frigate Proxy thing, not about how it’s different from Portainer. I was making that argument to explain it, for those who know Docker, in a way that would make sense, because that idea was attempted in this thread, but never in a way that was clear from that mindset.

In the Discord I was asking about how the add-ons handle ENV variables, something completely related to add-ons & add-on development, but I mentioned that the reason I was asking was that I was trying to get the Frigate Proxy to run as a standalone container, & I was met with:

We develop add-ons: if you want to make a normal docker image instead, you should ask on the docker forums or another discord server.

Every add-on seems to have a BUILD_FROM or a BUILD_ARCH argument that it calls from the system in the Dockerfile, but the system doesn’t have that argument. Obviously when run in Supervisor it gets it from somewhere, but nobody would tell me where, because I mentioned Docker outside Supervisor. Eventually someone pointed me to the Add-On Developer Docs, but I found the tutorial references the same thing without explaining how it’s used or where it comes from.
In other forums people keep either telling me why I’m stupid for wanting to do it outside of Supervisor, or telling me that I can “Just use a Regular Docker Container”:

there is no reason too
you can just use normal docker images already out there
there is nothing those can’t do already

but when I ask where I can find one that does that, they just say “Dockerhub” like a Pretentious AssHat.
If you say so confidently & authoritatively that it exists, you must have seen one, but you won’t say where or even give a hint; it’s like saying it’s one of the houses somewhere in America…

But I can’t find any that do what I need. I’ve looked quite extensively; otherwise I’d be using them, which I’m much more familiar with, instead of trying to make an Add-on function. The reason I came looking for help was that I can’t find any other option.

What are you looking for?

Is it just something to replicate the ingress function, or are you looking for something else?

lol. That’s entirely the reason I like it. If I’m at home I don’t need it, so I don’t understand what the point of it is otherwise. Well, I guess the authorization, but I personally don’t mind having to authenticate myself; I just don’t want it to be on an exposed port or DDNS.

Exactly.

Which is what brought me to this Forum; I essentially came to that same conclusion.
Would it be easier if the User wasn’t authenticated & had to manually sign in? I’m all for that. That would actually be even better, because it would require 2 levels of authentication for something that could potentially damage the system: 1 from Home Assistant, 1 from the low-security one on the app.

External proxy?.. That’s what I’m trying to avoid. The whole point was to make it so that there is no external access except through Home Assistant.

Yeah, all those functions I don’t want Home Assistant handling anyway.

I think so. I’m not sure what exactly the ingress function is, but that sounds right. From my understanding it’s a proxy that displays a page inside Home Assistant regardless of whether that page can be seen outside of it. That’s what I’m looking for, & it’s what simple Docker containers do NOT do, which is why those add-ons can’t be replicated with them.

Yes - it’s a proxy service built into the Supervisor.

You’ve got three choices:

  1. Switch to using HAOS
  2. Use a VPN for remote access (you said you had reliability problems - that’s down to how you implemented it)
  3. Implement your own version of Ingress (the code is public after all)

An explanation of each of those args is found here. They come from the builder. Every repo in the HA ecosystem that makes docker images uses that GitHub action to make them. It reads in the build.yaml file in the repo and makes an image per supported arch accordingly. Doc for the build.yaml file can be found here. Here’s the list of supported arches from that same doc:

A list of supported architectures: armhf, armv7, aarch64, amd64, i386.

Basically, figure out the arch you’re running on and set that value to BUILD_ARCH. For BUILD_FROM, look at the build.yaml and copy the value out of its build_from field that matches the arch. If build_from is unspecified, it defaults to the image from here that matches the arch. There are also other things that can be in build.yaml that you’ll need to handle yourself.
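
Put as a small sketch, the manual version of that lookup might look like this. The arch names come from the list quoted above; the build.yaml contents and the default base-image tag are illustrative stand-ins, so check them against the actual repo:

```python
# Manual stand-in for what the HA builder does: pick BUILD_ARCH from the
# machine type and BUILD_FROM from build.yaml's build_from map.
MACHINE_TO_ARCH = {
    "x86_64": "amd64",
    "aarch64": "aarch64",
    "armv7l": "armv7",
    "armv6l": "armhf",
    "i686": "i386",
}

def resolve_build_args(machine: str, build_yaml: dict) -> tuple[str, str]:
    """Return (BUILD_ARCH, BUILD_FROM) for `docker build --build-arg`."""
    arch = MACHINE_TO_ARCH[machine]
    # Fall back to a default base image when build_from doesn't list
    # this arch (image name pattern assumed, verify against the docs).
    default = f"ghcr.io/home-assistant/{arch}-base:latest"
    return arch, build_yaml.get("build_from", {}).get(arch, default)

# Hypothetical build.yaml contents:
build_yaml = {"build_from": {"amd64": "ghcr.io/home-assistant/amd64-base:3.19"}}

print(resolve_build_args("x86_64", build_yaml))
print(resolve_build_args("armv7l", build_yaml))  # falls back to the default
```

The machine string is what `uname -m` reports on the host; from there the two values feed straight into `--build-arg BUILD_FROM=… --build-arg BUILD_ARCH=…`.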

For addons you also need to look at the add-on’s config.yaml or config.json. Supervisor reads this file and uses it to construct the docker run command. Doc for everything that can be in this file is here.
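
A toy sketch of that translation step: the field names ports, map and privileged come from the add-on config docs, but the path, image and the mapping logic itself are simplified stand-ins, since Supervisor’s real handling covers far more than this:

```python
# Simplified stand-in for Supervisor building a docker run command
# out of an add-on's config file.
def config_to_run_args(config: dict, addon_data: str = "/usr/share/hassio") -> list[str]:
    args = ["docker", "run", "-d"]
    # "ports": {"8080/tcp": 8080} -> publish the container port on the host
    for container_port, host_port in config.get("ports", {}).items():
        if host_port is not None:
            args += ["-p", f"{host_port}:{container_port.split('/')[0]}"]
    # "map": ["config:rw"] -> bind-mount shared folders into the add-on
    for entry in config.get("map", []):
        name, _, mode = entry.partition(":")
        args += ["-v", f"{addon_data}/{name}:/{name}:{mode or 'ro'}"]
    if config.get("privileged"):
        args.append("--privileged")
    return args + [config["image"]]

cfg = {"image": "example/addon", "ports": {"8080/tcp": 8080}, "map": ["config:rw"]}
print(" ".join(config_to_run_args(cfg)))
```

The point is only that the config file is declarative input and the run command is derived output; replicating an add-on by hand means doing this derivation yourself.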

Supervisor’s build.yaml is here. But setting the args from this isn’t close to enough. The supervised installer is what you would have to look at. Literally everything being done in there is setting up required dependencies for supervisor. None of that is recommendations or suggestions; it’s all supervisor’s required dependencies. This is what I’m trying to tell you: supervisor is not a normal docker container. If you copied and pasted some run command, it wouldn’t work on 99% of deployments. Supervisor is deeply tied into the host and has dependencies outside of docker entirely that it will fail without.


I guess I don’t know what addons you plan to use and how (other than Frigate, obviously). Like, I’ll run down mine; maybe that will explain my logic. Of the ones I use with a UI (some have no UI at all), these provide a UI that I would only ever want an admin (i.e. me) to see:

  1. Adguard Home (DNS server)
  2. Grafana* (analysis and visualization of HA data)
  3. Home Assistant Google Drive Backup (backup scheduling/management)
  4. Node RED (alternative automation platform)
  5. phpMyAdmin (raw view into HA DB, use occasionally for debugging tough issues)
  6. SSH & Web Terminal (HA management/cli)
  7. Studio Code Server (HA config in IDE)
  8. Zigbee2MQTT (Manage zigbee devices)

I strongly prefer that admin-level tasks can only be done from within my LAN. So even though I use HAOS and ingress, I actually have my cloudflare firewall set to block access to the ingress endpoint for these. Perhaps I’m being overcautious, but at least two of these come with a terminal that can do anything, so I’d just prefer that using them requires I be on my LAN first, either physically in my house or via VPN.
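
For reference, that blocking works because ingress UIs are served under a predictable URL prefix. A hypothetical Cloudflare firewall expression along these lines, with the action set to Block, would cover it (the /api/hassio_ingress/ prefix matches how ingress URLs appear, but verify against your own install; LAN and VPN traffic never passes through Cloudflare, so it is unaffected):

```
(http.request.uri.path contains "/api/hassio_ingress/")
```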

The others I use with a UI are these:

  1. Hedgedoc (doc and notes tool)
  2. Sharry (file sharing)
  3. Vaultwarden (password manager)

These are not admin only. People other than me have accounts they log in to to see/manage their passwords, notes and files. Therefore none of these have ingress options, because ingress doesn’t really work with user-level applications. When an addon uses ingress, everyone looking at it sees the same admin-level user account. Which user is logged in on HA and what role they have is not translated to the ingress application. If these 3 used ingress then everyone would see and manage the same passwords, notes and files; not really a workable solution. So these are all managed like standalone applications and accessed outside of HA (port forward, reverse proxy, cloudflare firewall in front blocking off external access to admin features).

Btw, I put a star next to Grafana because that’s kind of the one odd one. If the logged-in HA user and their role were passed to the ingress application, then that’s the only one I would probably treat differently. Potentially I (as the admin) would make some dashboards and things, and then other users in my house would just see those as view-only when they clicked Grafana. But since currently anyone that clicks Grafana gets the same full and complex admin view, it stays hidden from everyone but me.

I should note that you will probably have the same problem with Frigate. If using the addon, then most likely everyone in your house who can click the Frigate option in the side nav will see the same admin view, probably with full permission to make settings changes, delete videos, turn off cameras, etc. If you want view-only access for everyone but yourself, then ingress won’t work for you anyway.

The VPN reliability issue is due to the home being on a triple NAT; implementation cannot really fix it, unfortunately. But I manage alright with the 2 together. It’s more about the inconvenience of it. I’m able to do it now, but I want to be able to do it more conveniently.

Implementing a version of Ingress is definitely an option, though I still think figuring out how to convert Supervisor to a container is a more appealing & potentially easier & more useful solution for me to attempt. But I’ll look into both.

That’s exactly what I was looking for. Thank you. I still don’t know what Supervisor does to determine which of those ARCH or FROM arguments to use, but at least I know where those variable strings are coming from, which is a big help.

I already knew about those for ENVs; I just didn’t know it was handled in Supervisor rather than the app itself. Thanks for that.

From a Docker standpoint that is all silly, completely unnecessary, & generally bad build practice. But it explains why it’s so hard to get it into a separate Docker container. Since it seems to require a specific version of Debian, it should be built on that for the container, but since it already has a separate container, I assume that means it’s running that other container on a different base, so I’ll have to find equivalents or have it run as 2 containers. Not sure which will be easier, but I’ll look into it. Thanks for all the info; from what I’d been able to get from others I didn’t have a solid roadmap, more like I just had a compass saying that the destination was West & that it was either a lot North as well or just a little North & somewhere between 100 to 100,000 miles in whichever direction. You didn’t give me an exact address, but I at least know the city now & the bus routes & freeways that are used, which makes it much easier.

As of now that’s how I have mine set up.
I have a custom cronjob runner that I’m most concerned about, because it has a lot of access & its security is not great: what it’s made from is intended to be run when needed & shut down when not being used to edit the system cronjobs. I have mine running as its own Cron Manager & Runner because WSL sucks at handling Cron. It’s built off of Crontab UI, but with Python & Perl & a few other dependencies added that are needed for cronjobs that don’t have a docker container to run on, or that I don’t want running on their docker container, like my Zap2It TV Guide updater for xTeVe, or a custom reader that reports & fetches certain private things that need my account to log in. Sometimes I need an update of those things pulled to my home server when I’m not there, so it’s nice since the cron manager has a Run Now button: I can have it run & it’s done. It has a username & password set up & I added a self-signed SSL to the container, but when it communicates with other Docker containers (which are not in the same Docker network, for security reasons) that traffic is sadly not encrypted, so it can be snooped on. But the Cron Runner has access to every container, as well as locations on the drive that contain sensitive things, & some of the system. Home Assistant has good SSL & encryption & I have a ridiculously complex password set up on it, so I feel secure enough if that has to be gotten through before getting to the Cron Manager.
I also have Dozzle for logs, which has more access than I feel comfortable exposing to the internet, but it has very poor security & all local traffic is HTTP, with NginX only giving an SSL when it gets outside. This one has access to pretty much all of Docker, but at least has RO access only for most things.
I also run, surprise, Portainer, which I’d prefer to access through this 2-layer method if possible, but it is pretty secure on its own.
The other things like Duplicati, NginX Proxy Manager UI, xTeVe, DizqueTV, are all things it’d be nice to have there, but I wouldn’t put effort into doing so, as I’ve yet to have any reason to access them while away. The rest of my containers intentionally have access from outside through NginX & other services like DuckDNS or Cloudflare, so they are fine.
I don’t actually run Frigate; it’s just what I was referred to by someone saying they used a modified version of the Frigate Add-on to do what I was wanting. It’s possible they were utilizing Ingress & just didn’t know that’s what they were doing, or just didn’t want to say so. Trying to get more information about how they modified it, what was different, etc. proved useless, as they didn’t seem to want to explain or help, just to say that it could be done with it.

@LostOnline you do not know what dbus / Linux IPC is (or, of less relevance but still noted, the iframe construct for pushing HTML), but you seem awfully combative here, and quick to point out how “wrong” some solutions are… this is interesting?

I am chiming in because, for reasons, I would actually prefer to run Supervisor as a container, exposing to it the docker sock directly (as well as /dev/ and dbus messages). To summarize for anyone else who comes searching for the same but doesn’t have a background with containerization: doing this is completely possible (easy, even, as it is for anything you could install on your local Linux box), but since the community doesn’t support it you’ll have some fun configuring it the first time, and periodically you will see config break. Most importantly, the advantages roughly evaporate, as the images and config aren’t created/tested upstream (i.e. you’ll be rolling your own). [Note this is all different from DinD, which I’ve seen lumped in on other HA threads.]
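
A compose-style sketch of that approach, hedged heavily: the image name and mounts are plausible but unsupported upstream, required environment variables and several host-side dependencies are omitted, and this should be expected to break across Supervisor updates:

```yaml
services:
  supervisor:
    image: ghcr.io/home-assistant/amd64-hassio-supervisor:latest
    privileged: true            # also exposes host devices under /dev
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # direct docker sock
      - /run/dbus:/run/dbus:ro                      # host D-Bus messages
      - /usr/share/hassio:/data                     # installer's data dir
    restart: unless-stopped
```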

Caveat: I’m new to HA, but it appears Supervisor is a local/host program only (not running as a container; a lot of responses seem to imply that it runs as a container “somewhere”), unless setting that up is baked into the .deb (which I have not checked, but would be unusual). It’s playing the role of “orchestrator”.

I’m hoping to install Supervisor, and docker, into an LXD-managed system container, but I have some looking to do to see if that, too, will just be config hell. It would be nice, and easier for me personally to manage in my environment than a full VM.