Home Assistant Supervised (previously known as Hass.io) on Synology DSM as native package (not supported or working atm)

It also worked fine for me until I restarted an add-on (DuckDNS, via an update). Have you tried restarting it? Also, did you install the add-ons from HA or by deploying them manually in Docker?

Thanks!

It's working! Many thanks. A simple downgrade via Docker > Hassio CLI > terminal did it.

THANKS!


Thanks, very good "hack" :-). Is it possible to disable the automatic update somewhere?

I think I found a solution for that. SSH into your Synology.
Run the following command to find the ID of the supervisor container: docker container ls
Then open a shell inside that container: docker exec -it *id* bash
Then edit this file: vi /usr/src/supervisor/supervisor/misc/tasks.py

Delete the lines:
RUN_UPDATE_SUPERVISOR = 29100
self.sys.scheduler.register .....etc...... supervisor

Then restart the container (the full sequence is sketched below).
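
Put together, the whole sequence looks roughly like this (a sketch only; *id* is the supervisor container ID from the first command, and the exact contents of tasks.py may differ between Supervisor versions):

  # on the Synology, over SSH
  docker container ls                                # note the ID of the supervisor container
  docker exec -it *id* bash                          # open a shell inside that container
  vi /usr/src/supervisor/supervisor/misc/tasks.py    # delete the RUN_UPDATE_SUPERVISOR constant and its scheduler.register(...) line
  exit                                               # leave the container shell
  docker restart *id*                                # restart the supervisor container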


How did you reinstall the add-ons? I just did that with DuckDNS, but when it starts it tells me that the configuration is missing. Did you stop the Docker container from DSM first and then reinstall? Or how did you do it?

You don't do anything with Docker.
1- You must have a "last good snapshot".
2- Then install the add-on Terminal & SSH (not SSH & Web Terminal).
3- From Terminal & SSH execute:
ha supervisor update --version 2021.01.7
4- Then restore only the add-ons from the "last good snapshot" (a rough CLI sketch follows this list).

  • When you don't have a "last good snapshot", you have to create a new configuration.

Everything should be OK … until:

  • the next Supervisor update (based on info from @rdekruyf, this happens every night)
  • and the subsequent restart of the add-on (that depends on you)
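
For reference, a rough sketch of that flow with the ha CLI (the snapshot slug and the add-on slug core_duckdns below are placeholders for illustration only; the partial-restore flags may differ between CLI versions, so check ha snapshots restore --help first):

  ha snapshots list                                   # find the slug of your "last good snapshot"
  ha supervisor update --version 2021.01.7            # downgrade the supervisor
  ha snapshots restore <slug> --addons core_duckdns   # restore only the add-ons you need (example slug)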

Edit: I'm sorry if this duplicates @blavak68's post above, but knowing my frustration with this issue I thought it might help someone.

I’m back up to full functionality. What worked for me was:

Downgrading to Supervisor 2021.01.7 as described by @rdekruyf
Restoring a snapshot from 5 days ago (2021-02-05, I cannot recommend Hassio Google Drive Backup enough)
Upgrading to Supervisor ver. 2021.02.1 (I had migrated to Zwave JS and needed a 2021.02 version; a rough CLI sketch follows this list)
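
For reference, the same sequence as ha CLI commands run from the Terminal & SSH add-on (a sketch only; the snapshot restore itself can be done from the UI or the CLI):

  ha supervisor info                          # check the currently running supervisor version
  ha supervisor update --version 2021.01.7    # downgrade as described by @rdekruyf
  # restore the snapshot here (UI or CLI), then move to a 2021.02 build
  ha supervisor update --version 2021.02.1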

I hope this proves useful for those of you also having trouble.


Does restarting the add-ons work? Or do you upgrade to the new version and just not restart the add-ons?

@anschein I am able to restart the add-ons without the new version of Supervisor being installed; I just have the notification on the Dashboard tab of Supervisor that an update is available.

Edit: @anschein upon further review I don’t think I understand your question

I mean on Supervisor ver. 2021.02.1: can you restart the add-ons? I have a connection problem with my MQTT broker and need a version that can restart the add-on :frowning:

@anschein yes, my addons are restarted and functioning normally

Thanks. I tried every Supervisor version I could and found that ver. 2021.01.8 works for me and restarts the add-ons safely… but I didn't upgrade to the newest version of the Supervisor.
So can I ask which add-ons restart fine on 2021.02.1?

Like many others, I have problems with the new Supervisor. The problem did not occur until I restarted my Synology.
I noticed that the new Supervisor released in February binds the /dev:/dev path into new containers (only into new ones, or into those that the Supervisor decides to recreate).

This creates problems for me with these add-ons:
- Mosquitto
- Grafana
- ESPHome

I tried to delete the bind (/dev:/dev), but if I modify the container from Docker/Portainer, the Supervisor no longer recognizes the add-on (the add-on does start in this case).
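
If you want to check whether an add-on container actually received the /dev bind, you can inspect its bind mounts (a sketch; addon_core_mosquitto is just an example container name, use docker container ls to find yours):

  # print the bind mounts of an add-on container
  docker inspect addon_core_mosquitto --format '{{ json .HostConfig.Binds }}'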

@rdekruyf
Thank you very much for the time you've spent to come up with this solution. I first removed those lines from the config file and then repeated the Supervisor downgrade. It worked somehow. I needed to 'reinstall' all of my add-ons, but this was quickly done; I assume it just set some paths and did not have to download anything.
After a couple of restarts and restoring snapshots, it actually worked. All of my devices are gone and I need to re-pair them all, but at least this is possible now.

I'm absolutely disappointed by the snapshot feature. After the first two or three attempts at restoring an older version, the system still showed default values in e.g. the z2m config files. I also tried different snapshots from different weeks, and only shortly before I wanted to give up was my old configuration restored. I don't know why, but this feature looks more promising than it actually is.

To avoid that in the future, I will switch to the VM version of HA. Unfortunately, my Synologys are all running on ext4 volumes, so I first need to empty at least one drive and then convert it to btrfs. Let's see how this works. At least in this case the whole VM can be snapshotted, and reverting should always be possible.

Anyways, always updating in fear that something will break ‘the evil and unsupported system’ is ruining the fun.

Thanks again.

I have also decided to start an installation of the VM version of HA (I already have an HDD in btrfs), but for now it is on standby again because I have gone from 15-20% CPU usage to 75-80% just by having that VM up.

There must be some other way… I don't know if someone with more experience in Docker can advise us on whether it is possible to install Debian in a container and run HA from there. Synology's Virtual Machine Manager is rather inefficient.

OK, manually updating Docker seems to fix the problem. You can try this method -> https://github.com/markdumay/synology-docker

But be warned that this may interfere with the stability of the Synology. Anyway, it works for me.
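
Roughly, the procedure looks like this (a sketch only; the script name syno_docker_update.sh is an assumption based on the repo's README, so verify it there before running, and let the script take its backup first):

  # on the Synology, over SSH
  git clone https://github.com/markdumay/synology-docker.git
  cd synology-docker
  sudo ./syno_docker_update.sh update   # assumed entry point; check the README for the exact command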


I can confirm this works for updating the Supervisor.

Is it compatible with Docker on the 416play? (The package is the one from the 918+, since Docker for the 416play isn't supported.)

I do not know the answer to this question. It seems to me that the application itself stays and only some parts/files are replaced, so it should be OK…

PS: There is always the backup (I used it once to test whether I could return to the original Docker, and it worked).

Unfortunately, the lines I had deleted were back today and I'm seeing the same error as before.

RUN_UPDATE_SUPERVISOR = 29100
self.sys.scheduler.register .....etc...... supervisor

I don't know what happened, but yesterday I deleted those lines and double-checked afterwards. They were gone and the file was properly saved. Now they are back. Yesterday I restarted the Supervisor a couple of times; could that be responsible for the returning lines?
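
That would fit: if the Supervisor updates itself (which, as noted above, happens overnight), the container is recreated from a fresh image and any edits made inside it are discarded. A quick way to check is to look at when the container was (re)created (a sketch; hassio_supervisor is the usual container name on a Supervised install):

  # a recent CreatedAt suggests the container was recreated, which would revert in-container edits
  docker ps --filter name=hassio_supervisor --format '{{.Names}}  {{.Image}}  {{.CreatedAt}}'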

Either way, I have to go through it all again… and fear that tomorrow it will break again? Oh my…