Install for Dummies Tutorial Request for: Synology 6.2, Docker, HA, NorTek Zwave/Zigbee stick

I’m asking for some help, an “Install for Dummies” kind of help. I work in the Windows world, but I have a Synology DS918+ DSM 6.2 NAS and I’d like to set up HA on it to control my Z-Wave devices.

I bought a NorTek Zwave Zigbee USB stick.

I installed the Docker package (clueless about this and how to use it) and I installed the HA container in Docker following the instructions. I also installed the ZwaveJS2Mqtt container in Docker but I don’t think I did this right.

But that’s where it ends for me. I don’t know what to do for configuring everything or getting USB drivers on the Synology for the NorTek stick or getting the NorTek to talk with the containers or the containers to talk with each other. I’m totally absolutely clueless.

I need some kind of specific Install for Dummies step-by-step tutorial for my configuration, speaking to me like I’m a dummy because I am when it comes to Linux and I work solely in the Windows world. If you can help walk me through this, please, speak as you might to a young child, or a golden retriever.

Thanks

Hi Scott,

I tried to get Home Assistant working with Synology last year. I did get it up and running, but got stuck at certain points. The setup is not user-friendly to maintain.

Lesson learned 1

The project is focused on the recommended installation method, Home Assistant Operating System. Everything else is not well supported.

I finally bought a Raspberry Pi to run Home Assistant stand-alone. Dealing with the software got easier, but the Raspberry Pi 4 brought hardware issues instead. The sound card did not work well, so I had to add a USB sound card. The Z-Wave stick was not recognized until I put a USB hub in between. Connecting Amazon Alexa requires a paid account … and so on. It’s not pure fun.

Lesson learned 2

It ain’t easy and you have to hack your path alone.

Most of my questions in this forum don’t even get a single answer. Forget “Install for Dummies” on Synology. If you don’t find it on YouTube or in a blog post, you won’t get it.

Conclusion

If you don’t have a lot of Linux experience, avoid the Docker way. Take the easiest path.

First, think about whether you need the flexibility of Home Assistant at all. You pay with your time. There are easier commercial systems, but they are more limited. If you want full flexibility, go with Home Assistant.

If you want it, consider “Home Assistant Yellow” as hardware, and factor in the yearly cost of 75 EUR / 65 USD for the cloud service. That is the easiest way to go.

Despite all the home automation hardware on offer, home automation is still far from easy. Standards have not really evolved.

On the other hand, it is ridiculous how poor the automation is on Amazon Echo, for example. You pay a lot of money for the Echo and you get Zigbee built in. Then you are really treated as a dummy: you just can’t do a lot. There aren’t even conditions, which are the fundamentals of home automation. And that’s a reason for me to choose the flexibility of Home Assistant.

Sorry for the slightly disenchanting reply.

Elmar

Hi Elmar,

I started coming to the same conclusions as you did through your experience. Last night, in frustration – from the USB stick not getting recognized, to having to find drivers, to having to learn how to get drivers in Linux, etc. – the rabbit hole got deeper than I was willing to go. I used to spend endless hours delving into IT rabbit holes when I was younger; now I just want it to work, or I’m willing to throw money at it.

I started looking at the RPi4 route, but found out that the RPi4 is in short supply – which led me down another rabbit hole of RPi4 locator apps and websites. Then I thought maybe I could get an RPi4 kit, but then it was a question of what RAM configuration I’d need to support HA, and what about cooling options. Only the most expensive RPi4 kits are available, and after hitting F5 a gazillion times on RPi4 websites hoping a 4GB board would magically show up in stock – in which case I’d get a RetroFlag NESPi case and an SSD drive – I said to myself, this is getting ridiculous.

I’m just trying to turn on my lights and thermostat at certain times of the day.

… which I’m currently doing with my old Vera Lite Z-Wave controller which is a pig and slow as heck and the firmware has bugs and I can’t change certain settings anymore.

But what started me down this road of HA, Synology, Docker, RPi4, and NESPi cases was that I didn’t want my home automation to be on or relayed through anyone’s cloud server, and if I bought a commercial Z-Wave controller then it’s probably going to have some kind of cloud dependence.

Maybe I’ll hit F5 on a few more RPi4 vendor sites.

Thanks a lot for your reply. If anything it might save me from wasting time.

Scott

A few years ago 2GB was huge, and I don’t plan to edit films on the RPi4. So I thought it was fine to go with it.

I started with 64KB on my first machine, and that was already the information content of a whole book. Pac-Man, great. Much more creativity than today. 2GB is over 30,000 times more than that. So I don’t worry whether it’s 2GB or 4GB, such a tiny factor of two. After all, we just want to switch things on and off in home automation – switching one bit out of 2,000,000,000 bits.

I already depend on a cloud with my Echo devices. I like the idea of having unlimited access to information, music, audiobooks, and so on. That was once futuristic; today it’s reality and I enjoy it.

What I don’t like is that I need a cloud to ship in-house information from Home Assistant to the Echo device. I pay Amazon and they still limit my freedom.

But yes, if you want to avoid the cloud entirely, Home Assistant does work for you. In that case, there is also no need to pay for a Nabu Casa account. The main use of that account is to connect to Amazon or Google devices.

Or to support the developers of Home Assistant, ESPHome, and similar projects who get a salary from Nabu Casa :slight_smile:
But also to give you remote access without the need to set up port forwarding, an SSL certificate, etc. Google and Amazon support is on top of all of that.

I run HA container with a ZWaveJS2MQTT container and Aeotec Gen 5 ZWave stick all on a Synology DS1515+ running DSM 6.2.4-25556 Update 6. Yes it was a huge learning curve, but I have a system that works. You do not need an RPi4, but for those who want a very simple “click, click, done” install, HAOS and an RPi4 or similar device is the route to go.

The first thing I recommend with your Synology, if not done already, is to enable SSH in the control panel. The second thing I recommend is: DO NOT update to DSM 7.0+. If you update to DSM 7.0+, that’s when you need to go hunting for USB drivers to make your device work with DSM and pass it through to the Docker container. The alternative is to install a virtual machine of HAOS through the DSM package Virtual Machine Manager, but that is a topic covered in another thread on the forums.
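Once SSH is enabled, you can connect from a Windows command prompt or PowerShell (or PuTTY) and do a quick sanity check that Docker is reachable. A minimal sketch, with a placeholder user name and NAS address you would replace with your own:

# connect to the NAS (replace with your own DSM user and NAS IP)
ssh your-dsm-user@192.168.1.50

# once logged in, confirm the Docker package is installed and running
sudo docker version
sudo docker ps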

Enabling SSH on your Synology will help with advanced configuration of the Docker containers. You will want to learn how to build your own docker create and docker run commands, or learn docker compose, where you put your container build configuration into a file and then use SSH to take down and rebuild a container (especially useful when updating). I use just regular create and run commands; I do not use docker compose. I keep a text file with my commands so I don’t have to remember them or rebuild them every time I want to update a container.
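As an illustration of what such a text file can hold, the update cycle for a container boils down to something like this (container and image names match the ZWaveJS2MQTT example further down):

# grab the newest image
sudo docker pull zwavejs/zwavejs2mqtt:latest

# take down and remove the old container (the config lives in the mapped volume, so it is kept)
sudo docker stop zwavejs2mqtt
sudo docker rm zwavejs2mqtt

# paste your saved docker create command from the text file here, then start the new container
sudo docker start zwavejs2mqtt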

You will need to know which device node the USB stick shows up as on your NAS. It could be either /dev/ttyACM0 or /dev/ttyUSB0, and the number could change every time you unplug the device and plug it back in. You also need to make sure the container has R/W privileges, either by using the docker --privileged flag (warning: this gives the container access to the entire DSM subsystem, including but not limited to the drives themselves) or by doing a chmod 777 on the device node.
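To find out which node the stick got, plug it in and have a look over SSH. A rough sketch (the exact node and number on your NAS may differ):

# list candidate serial devices after plugging in the stick
ls /dev/ttyACM* /dev/ttyUSB*

# check the kernel log to see which node was just assigned
sudo dmesg | grep -i tty | tail

# if you go the chmod route instead of --privileged, open up that device node
sudo chmod 777 /dev/ttyACM0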

Now that you have some of the basics down, you’ll need to look at the install instructions for each individual container. ZWaveJS2MQTT needs some specific setup to get the container to recognize the USB device, and there’s also some configuration from the frontend to enable things like S2 security and the Z-Wave network. Thankfully, getting HA Container up and running is a little more straightforward.

Here are the docker create commands I use. I’ll break down each one with what each part means.

ZWaveJS2MQTT

sudo docker create --name=zwavejs2mqtt --restart always -v /volume1/docker/zwavejs2mqtt/config:/usr/src/app/store -v /etc/localtime:/etc/localtime:ro -e TZ=America/Chicago --device=/dev/ttyACM10:/dev/zwave -p 8091:8091 -p 3000:3000 zwavejs/zwavejs2mqtt:latest

sudo docker create - create command, run as root
--name=zwavejs2mqtt - name the container
--restart always - always restart the container if it shuts down unexpectedly or when the host (DSM) restarts
-v host/source/path:container/destination/path - used to map specific files and folders between host and container. Necessary to save configurations when updating containers or deleting a container and recreating it
-v /etc/localtime:/etc/localtime:ro and -e TZ=America/Chicago - synchronizes local time (read only mode) with container and sets the container time zone
--device=/dev/ttyACM10:/dev/zwave - maps my USB device to the container. see my comments why I use port 10 here: Aeotec Z-wave stick Gen5 on Synology installation - #60 by squirtbrnr
-p ####:#### - publish command to map the container ports to the host. alternative to using --net=host if you don’t want the container and the host network to have full access. works like port forwarding
zwavejs/zwavejs2mqtt:latest - specify the container image to use. If it doesn’t exist locally, find and download it from the repository

Home Assistant

sudo docker create --name=homeassistant --restart unless-stopped -v /var/run/docker.sock:/var/run/docker.sock -v /usr/syno/etc/certificate:/certificate:ro -v /etc/localtime:/etc/localtime:ro -v /volume1/docker/homeassistant/config:/config -v /volume1/docker/homeassistant/ssh:/root/.ssh -e TZ=America/Chicago --net=host homeassistant/home-assistant:2022.8.7

As you can see, the command is similar to, but simpler than, the ZWaveJS2MQTT container creation. I let the container auto-restart unless I specifically shut it down myself. I still map volumes to save my config files and I set the time zone, but I let HA use the host network so it can do discovery of new devices and integrations.

You have to make sure the mapped volumes, folders, and files exist on the host before you start the container, otherwise starting it will fail and errors will be logged. You should not need to worry about permissions of the folders and files on the host, as the container will adjust and create the necessary files accordingly.
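For the two create commands above, that means creating the config folders on the volume first, either in File Station or over SSH, for example:

# create the host-side folders referenced by the -v mappings
sudo mkdir -p /volume1/docker/zwavejs2mqtt/config
sudo mkdir -p /volume1/docker/homeassistant/config
sudo mkdir -p /volume1/docker/homeassistant/ssh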

Follow the above and you will have two containers, one for Z-Wave management and one for Home Assistant. Then it’s a matter of configuring each application to talk to the other, as well as connecting your existing Z-Wave devices. Those setup steps are found in the install instructions for the respective application. And of course, issue the command sudo docker start <container_name> to start the container you created.
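If you’d rather go the docker compose route mentioned earlier, a rough equivalent of the two create commands above would look something like this sketch (I don’t use compose myself, so adjust paths, time zone, device node, and image tags to your own system):

# docker-compose.yml (sketch)
version: "3"
services:
  zwavejs2mqtt:
    image: zwavejs/zwavejs2mqtt:latest
    container_name: zwavejs2mqtt
    restart: always
    devices:
      - /dev/ttyACM10:/dev/zwave
    volumes:
      - /volume1/docker/zwavejs2mqtt/config:/usr/src/app/store
      - /etc/localtime:/etc/localtime:ro
    environment:
      - TZ=America/Chicago
    ports:
      - "8091:8091"
      - "3000:3000"
  homeassistant:
    image: homeassistant/home-assistant:2022.8.7
    container_name: homeassistant
    restart: unless-stopped
    network_mode: host
    volumes:
      - /volume1/docker/homeassistant/config:/config
      - /volume1/docker/homeassistant/ssh:/root/.ssh
      - /usr/syno/etc/certificate:/certificate:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - TZ=America/Chicago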

Sure, it depends on the point of view. 75 EUR per year is more than I am willing to pay for remote access. For many people that is half a day’s pay. Amazon gives me a streaming service and more for that amount of money.

Actually, no, it’s not a point of view – those are the services they provide :grinning::

  • Access from anywhere
  • Easily connect to voice assistants
  • Fund development of Home Assistant and ESPHome
  • Text to Speech
  • Keep Home Assistant secure

And actually, if you look at the statistics, for some it’s also half a month’s pay, or even a full month’s.
But that doesn’t make it less worth it, or your choice a bad one.
For example, I have been paying Nabu Casa for some time now and have never, ever used ANY of the above services they provide :slight_smile: But I would also never pay a single penny for any of Amazon’s services :wink:

I would recommend using the Synology reverse proxy for the certificate and avoiding an internal SSL certificate.
Still, there are a lot of services that may fail when using an internal HTTPS connection instead of plain HTTP.

But also two questions. Why do you map local time – is there an advantage to this? And the mapping of the ssh folder – is that for an SSH docker container, or?

I don’t use a reverse proxy. I know I could, but I’ve never invested the time or energy to figure out how to set it up. Instead, I use a certificate from Let’s Encrypt on my Synology for DSM and Synology DDNS, then I map the certificate into HA so it can use it as well. I then have a local DNS record in my PiHole setup to point my DDNS name, which is provided by Synology, to the internal IP of my NAS. I’ve been doing it this way for nearly 5 years and never had a problem, nor has anything complained about an invalid certificate. I won’t get into arguing security, but all I know is this allows me to use HTTPS externally and internally and also mitigates the issue of internal accessibility when my WAN connection is down.
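For reference, pointing HA at the mapped certificate is just two lines in configuration.yaml. A minimal sketch, assuming the /certificate mapping from the create command above (the exact subfolder and file names under /certificate depend on how DSM stores your certificate, so check those first):

# configuration.yaml - serve the HA frontend over HTTPS with the mapped Synology certificate
http:
  ssl_certificate: /certificate/your-cert-folder/fullchain.pem
  ssl_key: /certificate/your-cert-folder/privkey.pem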

I map local time and the time zone on all of my containers because some containers do not set a time zone during the installation process, or default to UTC when first run. This makes sure the container starts up synchronized with the host. There are only two containers where I do not do this: ESPHome and Portainer. I can’t remember if one or both didn’t support setting the time zone or synchronizing time, or if I just deemed it not necessary for those containers.

I map the .ssh folder because I have a public/private key pair generated specifically for the HA Container and my DSM is configured to reject all SSH connections without a known key. I can then use HA to issue commands over SSH to my Synology NAS. If I didn’t map this folder, every time I updated my HA container by removing and recreating it, I’d lose the generated key pair.
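As an example of what that enables, a shell_command in configuration.yaml can then reach the NAS using the mapped key. The user, address, key file name, and command here are made-up placeholders:

# configuration.yaml - call the NAS over SSH with the key mapped into /root/.ssh
shell_command:
  nas_uptime: ssh -i /root/.ssh/id_rsa -o StrictHostKeyChecking=no your-dsm-user@192.168.1.50 'uptime'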


Basically it’s the right path to use HTTPS all over the place. Unfortunately it still requires a lot of extra configuration: putting the certificates into the right places, knowing and finding the right places and the right types, keeping track of the certificates, and so on. You can see for yourself how much text you need to describe your setup.

I would even take that hard way if it were just for me. But for the other people in the house the system simply needs to work. There shouldn’t be downtime. Extensive configuration results in too much downtime.

In Synology, as in HA, there is a friendly GUI to get you started. The moment you run into issues, you have no idea where the settings and files went.

For example, you write that you map the .ssh folder. How will you remember in months or years what you did there and why? It requires well-done documentation to touch matters like this quickly without creating a long downtime.

So in the case of HA, I have decided to wait until HTTPS is stable and the default. It’s more economical when the documentation is done once for many people. Experience shows that even public documentation is always behind.

It’s called research. The only difference between screwing around and science is writing it down. Tons has been documented. It’s in forum posts, it’s in the install documentation. If you think documentation is lacking, feel free to contribute as this is all open source and can be edited by anyone. I got my setup working by doing my own searching and tinkering. But I also started using HA 5+ years ago when the only option was configuration through YAML and documentation was lacking, to which I contributed through forum posts like above.

I will say that once I had everything configured, I haven’t needed to go back and keep maintaining it except when a breaking change comes along. But again, I research those so I am prepared before I update. My SSL certificate auto-updates and because it never changes location, it keeps working. Sure I’ve made optimizations like mapping the certificate location instead of copying it from one location to another every time it auto-renews.

The thing is, SSL encryption is stable. It’s not the default because some people use HA strictly internally and do not expose it to the outside world, or they trust their internal network enough to use plain HTTP. It’s a separate configuration if you want to use SSL encryption, and it has been documented in many ways. That’s the difference between a simple install and advanced configuration. But you don’t have to use SSL; there are many different ways to connect to HA, and some, like Nabu Casa, have already done the advanced configuration for you.

And for the record, no, the Docker package on Synology does not have a friendly GUI to use. It is missing many of the configuration options needed. This has been documented and discussed in depth, maybe not here but elsewhere on the internet.

It’s called research. […] Tons has been documented. It’s in forum posts, it’s in the install documentation.

Tons of documentation, scattered all around. You don’t know which is outdated, which is usable. This kind of research is nothing I want to do, when the heating is down and the baby is crying.

I can do tinkering in the garage. A live system has to be simple and be prepared for all emergency cases.

Then HA is not for you. What you want is a systems integrator who manages and maintains everything for you so “it just works”. However, this type of system does not exist. Not even in the industrial environment. I work for a systems integrator and it doesn’t matter how robust we program the system nor does it matter how redundant we make it, things still break (physically and programmatically), edge cases will always exist, and you can never program around the human element. I’ve only ever programmed a handful of systems that have never required modification and have been functioning for many years. These systems did one or two things and did them well, repeatedly. There was no human interaction except to start or stop the machine. All downtime on the machine was attributed to mechanical failure.

The human machinery has taken more than 4 billion years to forge and it still breaks down. There will never be a system that just works in all circumstances. Downtime is inevitable, but it does not have to be frequent. We can program systems that work for years without downtime, but they need to meet certain criteria, like a ringfenced environment which does not change over time, mostly driven by machines or automated processes, etc.

squirtbrnr is IMHO correct in his assertions.