Home Assistant Supervised (previously known as Hass.io) on Synology DSM as a native package (not supported or working at the moment)

OK, fixed this (I think). I removed the package, then deleted all references to hass.io from the docker directory and reinstalled the package: no difference after I onboarded Home Assistant. I repeated it with a NAS reboot after the deletion and reinstalled the package; the reboots started again shortly after the onboarding process.

Final fix: remove the package and the files from the docker directory, remove the Docker package, and reboot the NAS.

Reinstall Docker, reinstall the package, and onboard devices again. It has been stable for 3 days.
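For anyone who wants to run the same cleanup over SSH, a rough sketch of the sequence. The /volume1/docker share path and both package names are assumptions based on a default DSM setup; adjust them to your install.

# Hypothetical cleanup sequence; share path and package names are assumptions.
sudo synopkg stop hassio && sudo synopkg uninstall hassio   # remove the HA package
sudo rm -rf /volume1/docker/hassio                          # delete leftover hass.io files
sudo synopkg uninstall Docker                               # remove the Docker package
sudo reboot                                                 # reboot the NAS
# After the reboot: reinstall Docker, reinstall the package, then onboard again.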

Now to put the other add-ons back and build my front end again.

Full respect to fredrike. It’s been great to have this option for my NAS and be able to get more out of it, and I have learnt heaps about Home Assistant and Docker because of it.

Hopefully this helps anyone else who has seen some strange behaviour with their system: it might not be the Hass package but Docker.

Like the post above, I’ll be looking to install a Z-Wave stick soon, so any advice would be great.

Thanks

Ezekiel


I am having trouble getting my Wyze USB dongle to load.

lsusb gives me the following:

|__usb1          1d6b:0002:0404 09  2.00  480MBit/s 0mA 1IF  (Linux 4.4.59+ xhci-hcd xHCI Host Controller 0000:00:15.0) hub
|__1-3         1a86:e024:010e 00  1.10   12MBit/s 100mA 1IF  ( ffffffd1ffffffb2ffffffdbffffffad)

In my log I have this:

subprocess.CalledProcessError: Command '['ls', '-la', '/sys/class/hidraw']' returned non-zero exit status 1.

I am trying to follow these instructions:

Passing dongle hidraw device into Docker:

I created the “99-wyze.rules”, but there is no “docker-compose.yml”, correct?
Additionally, is my Wyze USB dongle supposed to load as a “hidraw” subsystem?
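For reference, a minimal sketch of writing and reloading such a rules file over SSH, matching the 1a86:e024 IDs from the lsusb output above. The rules path, the permissive MODE, and whether the DSM kernel exposes hidraw at all are assumptions; the failing ls on /sys/class/hidraw in your log suggests no hidraw devices exist yet.

# Hypothetical udev rule; path and MODE are assumptions, adjust to your DSM version.
cat << 'EOF' | sudo tee /lib/udev/rules.d/99-wyze.rules
SUBSYSTEM=="hidraw", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="e024", MODE="0666"
EOF
sudo udevadm control --reload && sudo udevadm trigger
ls -la /sys/class/hidraw   # should list a hidrawN entry if the kernel created one

If a /dev/hidrawN node does appear, it can be passed to a container started with docker run via --device /dev/hidraw0 (rather than a docker-compose.yml).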

I had the exact same issue and solution.

See here.

Rock solid since.

I’m glad you got it working.

Thanks, adding the hassio network manually before installing the package did the trick for me. For the new multicast container to work, it’s important that com.docker.network.bridge.name=hassio is set, which you can’t do via the Synology Docker GUI directly. I used:

docker network create -o com.docker.network.bridge.name=hassio --driver=bridge --subnet=172.30.32.0/23 --ip-range=172.30.32.0/24 --gateway=172.30.32.1 hassio
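
You can verify afterwards that the option took effect:

docker network inspect hassio --format '{{ index .Options "com.docker.network.bridge.name" }}'
# should print: hassio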

Hi,
I was trying your solution by stopping and restarting the Hass service and then clearing the supervisor container, however I do not have that container. If I watch the containers status page, it gets created, runs for a few seconds, then it goes in a cycle: create, run, delete, and so on. So it never gets me to the supervisor page.
Any suggestions?
Thank you

A post was split to a new topic: Hass.io on Synology with Alexa

4 posts were split to a new topic: Port forwarding for HA on Synology

There seems to be something that is breaking it after less than 24 hrs. If left untouched, the supervisor crashes and I have to restart the whole NAS; then it comes right back. The last restart wiped out all the Google Cast devices I had set up. Google Cast disappeared from my install and I can’t even find it in the available add-ons.
Was it removed? Has anyone seen this?

Thank you

I’ve lost all Google Cast devices after updating to 0.109 - there is also a bug report I opened.
Depending on the network setup, you may not be able to get them back at this point.

I wasn’t able to use the integration for adding them previously and was using manual setup.

The Docker install issues are solved after the DSM 6.2.3-25423 update.

Thanks for the support!


Received a new update to 0.109.2 that brought my Cast devices back. However, the supervisor still gives me grief. I just updated the DSM to 6.2.3-25423; will see if that makes any difference.

My supervisor container is still crashing and going into a loop every day. I don’t have much insight into the logs as the container gets deleted and recreated in a matter of seconds. I have to restart the whole NAS in order to get it back.

Is there a fix for this? I’m sorry if I missed the obvious, but the thread is very long. The solution proposed at the very beginning, restarting just the Hassio service, doesn’t seem to help.
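One way to watch the create/run/delete cycle and grab output before the container disappears is to follow Docker’s event stream; a sketch, assuming the container uses the default hassio_supervisor name:

# Watch lifecycle events for the supervisor container in one shell:
docker events --filter 'type=container' --filter 'container=hassio_supervisor'
# In a second shell, stream its logs to a file the moment it is running:
docker logs -f hassio_supervisor 2>&1 | tee supervisor.log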
Found some logs:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/synology_dsm/__init__.py", line 160, in update
    await self._hass.async_add_executor_job(self._dsm.update)
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.7/site-packages/synology_dsm/synology_dsm.py", line 261, in update
    data = self.get(SynoCoreUtilization.API_KEY, "get")
  File "/usr/local/lib/python3.7/site-packages/synology_dsm/synology_dsm.py", line 169, in get
    return self._request("GET", api, method, params, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/synology_dsm/synology_dsm.py", line 227, in _request
    raise SynologyDSMAPIErrorException(api, response["error"]["code"])
synology_dsm.exceptions.SynologyDSMAPIErrorException:
Code: 1052
Reason: Unknown
2020-05-03 10:59:33 ERROR (MainThread) [homeassistant.components.hassio.handler] Client error on /homeassistant/info request Cannot connect to host 172.30.32.2:80 ssl:None [Connect call failed ('172.30.32.2', 80)]
2020-05-03 10:59:33 WARNING (MainThread) [homeassistant.components.hassio] Can't read last version:
2020-05-03 11:12:19 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for hacs which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2020-05-03 11:14:06 ERROR (MainThread) [homeassistant.components.hassio.http] Client error on api app/entrypoint.js request Cannot connect to host 172.30.32.2:80 ssl:None [Connect call failed ('172.30.32.2', 80)]

The errors that pertain to HACS are not making the system crash, because the system crashed without them too.

Thank you

Is your Synology Firewall perhaps enabled?

Nope! The firewall is disabled. The only thing that I “think” could maybe affect it is Pi-hole, but I whitelisted hass.io.
Are there maybe any firewall rules that I need to be aware of? Ones that need to be set on the router?

Hey folks,

I’ve just switched to hass.io on Synology (DS 216+) from a regular Docker installation on an SBC, and everything works fine for me except one thing.

Every 5 minutes the Supervisor tries to connect to HA and fails; this is what it logs:

20-05-05 10:25:57 ERROR (MainThread) [supervisor.auth] Can't request auth on Home Assistant!

While in the HA logs I’ve got:
2020-05-05 10:25:57 WARNING (MainThread) [homeassistant.components.http.ban] Login attempt or request with invalid authentication from 172.30.32.2

Does anyone know why it fails and how to fix it? It’s a fresh install…

I was getting the exact same error, and had been for quite some time; your post prompted me to try to fix it. Are you using Node-RED? If yes, this might help; it fixed it for me: https://community.home-assistant.io/t/node-red-and-172-30-32-1/142390/4?u=eoin

OK, I think I found an issue. Since I was moving some services as part of the migration from Docker-based stuff to hass.io, I think my MQTT devices are causing HA and the Supervisor to log this stuff in a weird manner.

https://community.home-assistant.io/t/new-install-login-attempt-or-request-with-invalid-authentication-from-172-30-32-2-supervisor-auth

Thanks for your great work!
I believe that this is the best choice for running Home Assistant on Synology!
My journey with Home Assistant, for reference:

  • home-assistant (now called Core) on Ubuntu (VMware on Windows 10 on a notebook)
  • home-assistant on a Raspberry Pi 3B+
  • home-assistant on Docker on Synology
  • hassio on VMM on Synology
  • hassio on Docker on Synology; the CPU load is lower than on VMM, about 5% vs 10%

@fredrike does this affect hass.io package somehow? https://www.home-assistant.io/blog/2020/05/09/deprecating-home-assistant-supervised-on-generic-linux/


What is the best way to reboot this? The Server Controls menu doesn’t work, as items that require a restart still request it afterwards. The only way I’ve been able to get this to work is by manually restarting the Docker containers.
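
For reference, restarting from the command line looks something like this; the container and package names are assumptions based on a default supervised install:

docker restart hassio_supervisor   # restart the supervisor container
docker restart homeassistant       # restart Home Assistant itself
# Or restart the whole DSM package (package name is an assumption):
sudo synopkg stop hassio && sudo synopkg start hassio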