Help me move from supervised to core/docker containers

In a supervised install, the supervisor manages all the containers, along with the communication and configuration settings between them.

In a non-supervised install, you specify the ports to open in your docker compose file, and the containers communicate in various ways, generally via websockets but also MQTT.

Here’s an example of how you would configure Node-RED on a non-supervised docker install.
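A minimal sketch of what that compose entry could look like, assuming the upstream `nodered/node-red` image and its default port and data path (the host paths and timezone are placeholders to adjust for your own setup):

```yaml
# Sketch of a Node-RED service in docker-compose.yml (non-supervised install).
services:
  nodered:
    image: nodered/node-red:latest
    restart: unless-stopped
    ports:
      - "1880:1880"              # Node-RED editor/UI; HA talks to it over this port
    volumes:
      - ./nodered-data:/data     # flows and settings persist here
    environment:
      - TZ=Europe/Zurich         # example timezone, change as needed
```

You then add the Node-RED companion integration (or websocket nodes) in HA pointing at this container, rather than relying on the add-on doing the wiring for you.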

I’m pretty sure of that. That’s the difference between an add-on and a regular docker image: the add-on is specifically built to be connected into the HA ecosystem.

Not really sure what you mean here.

Usually, for regular containers related to HA (MariaDB, AppDaemon, etc.), you just connect to them with an integration.

What is the advantage to this?

I have to disagree on this point. I run Proxmox on an Intel NUC with multiple VMs and some of these VMs (e.g. HA) run docker inside. Running only HA on the NUC would be a waste of resources and with VMs I can easily spin up a new VM, test some stuff on it without any risk of breaking my production HA environment. I also have automated daily backups of some VMs to a NAS and if something breaks, I’m back up and running where I left off (no need to reinstall HA or anything) in a few minutes max. With Proxmox that’s easy as pie.
Also, I never had any issue at all with USB passthrough (although I’m running the sticks now on a separate Pi, because the NUC is in a place far away from any devices); just pass the USB through to the VM in Proxmox and that’s it.


I use Home Assistant Container as well on a NUC in a VM from Proxmox. I wrote guides for most of my stuff on my repo, maybe you can find some inspiration/help there.

I can see advantages to Proxmox, but why wouldn’t you just install docker on the host OS alongside Proxmox, as mentioned here?

I’m not all that familiar with Proxmox, just wondering what’s the advantage of running docker within a Proxmox VM vs. alongside it?

Because I want to isolate my VMs from each other so they don’t interfere, and to avoid the risk of my production HA stopping when I mess something up on the host. I’ve been doing this for 5 years now and my production HA was never offline (apart from moving to a new house and planned updates), and I messed up quite a few things with my test VMs.

Also as I said, it’s way easier to make backups.

For your backups, you can use the Duplicati container; you can even upload the backups to Google Drive.

duplicati/duplicati - Docker Image | Docker Hub

I’ve been using it for a couple of years and it works fine.
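A rough compose sketch for the `duplicati/duplicati` image linked above; the host paths are placeholders, and the Google Drive destination is configured afterwards in Duplicati’s web UI rather than in compose:

```yaml
# Sketch of a Duplicati backup service in docker-compose.yml.
services:
  duplicati:
    image: duplicati/duplicati:latest
    restart: unless-stopped
    ports:
      - "8200:8200"                      # Duplicati web UI
    volumes:
      - ./duplicati-data:/data           # Duplicati's own settings/database
      - /path/to/ha-config:/source:ro    # what you want backed up (mounted read-only)
```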

People seem to have strong opinions on Proxmox, and it’s been debated a lot in this thread.

I saw your posts over there and read your blog about your setup, and it definitely makes sense for what you’re doing. I have way fewer devices though, and just docker on Ubuntu with the container install has worked fine for me.

It’s good there are so many install options depending on how you want to use Home Assistant, and what works best will depend on a lot of things (skill involved, type of hardware it’s running on, number/type of devices, etc.).


Yes, and at the end of the day, everyone has to find what works best for them.


Or they come here when it doesn’t work the way they imagined it :slight_smile: :slight_smile: :slight_smile:

Been thinking about migrating too but this just makes me realise how much work that will be…

Yeah, you are probably running quite some add-ons currently :sweat_smile:


Currently I’m running HA Supervised, where I can easily see if there is an update to the supervisor add-ons I use. When I move to Core and install the add-on containers myself, how do I know if one of those containers has been updated? Similarly, is there a way to look at the supervisor add-on “store” just to review what’s there that I might want to add as a new container?

You have to keep track of, manage, and update the docker containers yourself, either at the command line, through Portainer, or with an update-management program like Watchtower. You can check Docker Hub to see if an update is available.
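For the Watchtower route, a minimal sketch using the `containrrr/watchtower` image (the schedule and cleanup flags are examples; Watchtower needs the docker socket mounted so it can see your other containers):

```yaml
# Sketch of a Watchtower service in docker-compose.yml for automatic updates.
services:
  watchtower:
    image: containrrr/watchtower:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Watchtower inspect/update containers
    command: --cleanup --schedule "0 0 4 * * *"    # check daily at 04:00, remove old images
```

Note that automatic updates trade convenience for control; some people prefer Watchtower in notify-only mode, or a checker like What’s up Docker, as mentioned below.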

There is not; there will be no supervisor screen or add-on store. I have a supervised Home Assistant install in a Windows VM that I occasionally check to see what’s there. You can also monitor the add-on GitHub pages.

Official addons

Community addons

Usually the official docker version of a program is updated before the add-on version anyway.

I use What’s up Docker to check for updates of my containers.

And then I have an automation that notifies me about updates → smart-home-setup/system_monitoring.yaml at 896d1f09bfd7059681f7b0b0f1935159dd12b512 · Burningstone91/smart-home-setup · GitHub

Personally I don’t use the latest tag; I use version tags instead, as I like to read what has changed before I update the container. Sometimes you need to adjust the config (e.g. MQTT when it was updated to version 2.x).

The other points have been covered: there’s no add-on store, you can install any docker container you want. You can take a look at the add-ons and then just use the underlying docker container that they use.
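The version-tag approach above looks like this in compose; the version numbers here are just illustrative examples, pick whatever current releases you want to stay on until you’ve read the changelog:

```yaml
# Sketch: pin images to explicit version tags instead of :latest,
# so containers only change when you deliberately bump the tag.
services:
  mosquitto:
    image: eclipse-mosquitto:2.0.18            # example pinned broker version
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:2024.6.4  # example pinned HA version
```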

Has anyone tried to move from supervised to core/container? I’ve already created the docker compose stack for my required containers.

Just wondering, will it work if I just point my Home Assistant container at the config directory of the current supervised installation? Does anyone have experience with this?

I’ve never done it but I don’t see why it wouldn’t work.

But be aware that there are a LOT of things in the supervised folder that won’t be needed in your docker install.
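A sketch of what that reuse could look like, assuming the default supervised config path (`/usr/share/hassio/homeassistant`; adjust to wherever your config actually lives, and back it up first):

```yaml
# Sketch: Home Assistant Container reusing an existing supervised config directory.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    restart: unless-stopped
    network_mode: host                               # simplest way to keep discovery working
    volumes:
      - /usr/share/hassio/homeassistant:/config      # existing supervised config, path is an assumption
      - /etc/localtime:/etc/localtime:ro             # keep container clock in host timezone
```

Anything in the config that references add-ons (ingress URLs, `core-mosquitto` hostnames, etc.) will need to be pointed at your new standalone containers.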

There’s no add-on support, so anything that is an add-on you will need to install as a separate docker container, or replace the service and re-add the integration.

Thanks for the input. I’m aware that those add-ons won’t work; my setup is super messy after years of tinkering.

Decided to install a separate instance and slowly move over the devices.