Supervisor 249: Resolution center, Stability & Private container registries

The Supervisor is shipped with our Home Assistant Operating System and controls all hardware and software on your device. It brings an effortless experience for managing, upgrading, and maintaining your Home Assistant instance via the user interface. It is what allows you to focus on what really matters: automating your home.

The Supervisor generally does not have enough changes to fill a blog post with each release, and it does not have a fixed release schedule either. A new Supervisor update is released when it's needed, but that does not mean that there is nothing going on in this part of the ecosystem! One month ago we published The Supervisor joins the party, a post that went over the highlights from the past year. Now it's time for version 249, which is rolling out to everyone as we speak and is full of new goodies. Enjoy!

Resolution center

We are introducing a new resolution center. This feature will be used to identify issues with your installation and to provide context and suggestions for solving them. We have big plans for this one!

New in this release is that if the available free space on your device drops below 1 GB, auto-updates will be paused and you will be notified. This should give you time to clean it up.

To help you free up some space, we have added a new page to our documentation to help guide you through that.
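As a rough sketch (a general approach, not a substitute for that documentation page), you can check the free space on the host and clean up unused container data from a shell on an OS or Supervised installation:

# See how much free space is left on the host's filesystems
df -h

# Remove stopped containers, unused networks, and dangling images
docker system prune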

If you run Home Assistant Supervised, the system can get marked as unsupported if the host system does not have all the required services running. Until now it was hard to find out why a system was marked as unsupported: the reason is written to the logs during startup, but the logs that you find in the UI do not go back that far. To make it more accessible, the message that your system is unsupported will now let you open a new dialog.

This dialog lists all the reasons why the system is marked as unsupported. Clicking the links in that dialog takes you to a documentation page that describes the issue and offers a solution you can use to make your installation compliant with ADR-0014 and ADR-0015.
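If you prefer to dig through the logs yourself on a Supervised installation, the startup messages this dialog is based on can also be pulled straight from the Supervisor container (a sketch, assuming the default container name):

# Show why the Supervisor marked the system as unsupported
docker logs hassio_supervisor 2>&1 | grep -i unsupported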

Stability

Stability is always in focus for new Supervisor releases. The Supervisor is the central part of your Home Assistant OS/Supervised installation. Almost every release has some form of stability improvement, and this release is no exception.

If Home Assistant fails to start, we will now rebuild the container. This ensures that a fresh container is started if something inside the container has been corrupted. We have also added this feature to the recently introduced observer plugin.

All auto-updates will be automatically paused if your system is running low on free space. Manual updates are still allowed, but now we will not fill up that space automatically, which in some rare cases could lead to startup issues.

We've also added a new check when adding add-on repositories to make sure that it's actually an add-on repository. This should help avoid confusion.

Landing page

We've updated the landing page that you see when you're installing Home Assistant. We added logs to it, so you can follow along with what's happening while you are waiting.

And there is more! Our mobile apps are now able to discover the landing page. This means that you can discover a new installation while it's initializing and already start onboarding in the mobile app.

Private container registries

Thanks to @skateman, you can now use private (password-protected) container registries in the Supervisor. This allows you to install add-ons and other containers that require you to log in to a container registry.

You can find this in the store tab of the Supervisor panel, by clicking the three dots in the top right corner. This option is only available if advanced mode is enabled.
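As an illustration (the registry and image names below are made up), the credentials you enter there are regular container-registry credentials, and an add-on hosted on such a registry simply references it in its image name:

# Verify the credentials from any machine with Docker installed
docker login registry.example.com -u myuser

# The add-on's image then points at that registry, for example:
#   registry.example.com/{arch}-my-addon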

Stricter Network Manager check

With this version more checks have been added to ensure that Network Manager is not only enabled on the system but that it also manages at least one network interface. If you are running our Home Assistant OS, this does not apply to you.

This stricter check allows us to provide a stable way for the Supervisor and for add-on developers to get the IP address of the host.

The system will be marked as unsupported if Network Manager does not manage at least one network interface on your system. We've added a new page in our documentation that will help you get it fixed.
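On a Supervised installation you can check both conditions yourself; this is a rough sketch assuming the standard NetworkManager tooling is installed:

# Is NetworkManager running?
systemctl is-active NetworkManager

# Does it manage at least one interface?
# Interfaces listed as "unmanaged" do not count.
nmcli device status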


This is a companion discussion topic for the original entry at https://www.home-assistant.io/blog/2020/10/21/supervisor-249/

After updating to 249 I couldn't access any add-on pages from the web, only a generic error with an "OK" button.
Accessing add-ons locally was fine.
I use Nginx Proxy Manager, so I don't know if that's the cause, but I never had this issue before; previously, if add-ons didn't load over the web they also didn't load locally, so this is new.
I had to roll back to 247 (with many troubles, since it kept forcing v249 to spin up) to be able to access add-ons from the web.
Will stay with 247 for now.

Awesome! This is getting better and better!

The only thing I noticed was high CPU usage since 05:00 am (almost 100% on one of the cores); after a restart everything is perfect.

That sounds like the browser cache, which can be more stubborn if you use HTTPS; try clearing the cache for that page.

That sounds like the recorder integration; have a look at filtering entities or limiting the number of days to keep.

Well, the strange thing is that a hard refresh with Ctrl, rebooting HA, and rebooting the add-ons didn't solve it.
Only going back to 247 solved it.
So for now I'll stay with 247, not a big deal :wink:

The same just happened to me (Ubuntu, Supervised). After changing /etc/docker/key.json:

"log-driver":"journald","storage-driver":"overlay2"

I solved it with a full Ubuntu restart (and clearing the cache).

I'm not able to update:

20-10-21 12:37:40 ERROR (SyncWorker_4) [supervisor.docker.interface] Can't install homeassistant/armhf-hassio-supervisor:249 -> 404 Client Error: Not Found ("manifest for homeassistant/armhf-hassio-supervisor:249 not found: manifest unknown: manifest unknown").

Anyone facing the same issue?

You can't choose what version of the Supervisor you use.

You have the ability to roll it back, but it will automatically update itself eventually. You cannot disable automatic updates, so even if you roll back it's likely to just update itself back to the newest version in a short time. This is an intentional choice by the devs, who have stated many times that the Supervisor will always update itself, when it wants to, with no plans to give users control over these updates.


Try again now @deluxestyle

Has there been any talk of a feature to turn off all automatic updates? It would be incredibly useful for cases where HA is running on mobile data (like an RV or a boat) with tight data caps.

Right now the only workaround I know of is to modify the DNS forwarding on the router to force HA's update checker to fail.
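For what it's worth, a router-level block could look something like the dnsmasq rule below; the hostname is my assumption of what the Supervisor's update checker queries, so treat it as a sketch rather than a supported setting:

# dnsmasq: answer the version endpoint with 0.0.0.0 so update checks fail
address=/version.home-assistant.io/0.0.0.0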


If you want that level of control, you should look into running the standalone container version.

Do not change /etc/docker/key.json.

It must be /etc/docker/daemon.json:

Open/create the file with nano:

sudo nano /etc/docker/daemon.json

Insert the following content, then save (Ctrl+S) and exit (Ctrl+X):

{
  "log-driver": "journald",
  "storage-driver": "overlay2"
}

Restart Docker:

sudo systemctl restart docker
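To confirm the settings took effect after the restart (assuming a standard Docker installation), check:

# Both lines should match what you put in daemon.json
docker info 2>/dev/null | grep -E 'Logging Driver|Storage Driver'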

Perhaps you are out of disk space.

Thanks, I will fix it (it's working anyway).
I could swear that the docs said to edit key.json instead of daemon.json…

And look here:
https://docs.docker.com/config/containers/logging/journald/

The file daemon.json doesn't exist; I suppose I will have to create it.


I'm currently running 'unsupported' on Ubuntu 20.04.1 LTS. The Supervisor auto-updated from 247 to 249 on me this morning. Clicking the warning message showed a Docker error; I created the daemon.json file, rebooted my entire NUC, and now that warning message is gone.

I still have this:

[screenshot]

but at least the Docker error is gone…

When I make the changes, Docker does not start anymore. Running systemctl status docker.service gives this error (RESOLVED):

Edit:
After some playing around I found out that it does not like the storage-driver setting. Running docker info showed that it was installed with aufs. After changing the setting to aufs it started as usual again.

ā— docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2020-10-21 21:46:17 CEST; 14s ago
     Docs: https://docs.docker.com
  Process: 727 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
 Main PID: 727 (code=exited, status=1/FAILURE)

systemd[1]: Failed to start Docker Application Container Engine.
systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
systemd[1]: Stopped Docker Application Container Engine.
systemd[1]: docker.service: Start request repeated too quickly.
systemd[1]: docker.service: Failed with result 'exit-code'.
systemd[1]: Failed to start Docker Application Container Engine.
systemd[1]: docker.service: Start request repeated too quickly.
systemd[1]: docker.service: Failed with result 'exit-code'.
systemd[1]: Failed to start Docker Application Container Engine.

This is with the latest version of Docker CE (19.03.13) on Ubuntu 20.04.1.
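For anyone hitting the same thing: before pinning a storage-driver in daemon.json, it is worth checking which driver the existing Docker installation already uses, since not every host supports every driver. A quick check with the standard Docker CLI:

# Print the storage driver Docker is currently using (e.g. overlay2 or aufs)
docker info --format '{{.Driver}}'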

Well, my instance apparently also auto-updated the Supervisor.

Currently on 249, and it's now complaining about an unsupported installation, because Network Manager is apparently missing.

Thing is, this is a stock Supervised install running on Debian (on a PVE host); I never fiddled with any settings, so I have no idea why this might have happened.

The suggested remediation on the Network Manager page is re-running the install script. Is it safe to do that on a production system? Will I lose any of my add-ons or data if I run the script again? And most importantly, since the system was actually installed by that same script, will the error get fixed?

Thanks for any help you guys can give!

You could just ignore it?

If you rerun the script it will be fine. It will just change a couple of Docker settings and restart the service, which will fix the error.

Thanks! Iā€™ll try that out later today.

Amazing response time, by the way! :smiley: