Synology Docker two problems after update to latest version

Hi,

I am running three Docker containers using different ports on a Synology NAS. One of them is giving me two big problems after I updated its image to the latest version:

  1. Z-Wave doesn’t work any more and I get the following output in OZW Log:
2020-04-06 02:04:25.141 Always, OpenZwave Version 1.4.3469 Starting Up
2020-04-06 02:04:25.324 Info, Setting Up Provided Network Key for Secure Communications
2020-04-06 02:04:25.324 Info, mgr,     Added driver for controller /dev/ttyACM1
2020-04-06 02:04:25.324 Info,   Opening controller /dev/ttyACM1
2020-04-06 02:04:25.324 Info, Trying to open serial port /dev/ttyACM1 (attempt 1)
2020-04-06 02:04:25.324 Error, ERROR: Cannot get exclusive lock for serial port /dev/ttyACM1. Error code 11
2020-04-06 02:04:25.324 Info, Serial port /dev/ttyACM1 opened (attempt 1)
2020-04-06 02:04:25.324 Detail, contrlr, Queuing (Command) FUNC_ID_ZW_GET_VERSION: 0x01, 0x03, 0x00, 0x15, 0xe9

I did make sure that I am using the correct serial device:

root@Lakewood:~# ls /dev/ttyACM*
/dev/ttyACM1

It used to be ttyACM0, but I unplugged the stick and plugged it back in while trying to fix this. It didn’t work before either, when it was still on ttyACM0. The stick is color-cycling as usual.

My zwcfg.xml is empty now. Not sure if that is normal when the stick can’t be found.

  2. I am getting this error message in the log:
2020-04-06 02:01:16 ERROR (MainThread) [homeassistant.components.http] Failed to create HTTP server at port 8127: [Errno 98] Address in use

I did check that the port is not in use (netstat -plant | grep 8127), but the error message shows up anyway, and weirdly, I can still access the frontend on port 8127. I also tried moving the frontend to different ports that were free.

http:
  server_port: 8127

Not sure if both problems are related and I would really appreciate any help.

Thanks,
Julian

What are the permissions on /dev/ttyACM1 or do you run your HA Core docker container in privileged mode?

Privileged mode. I did check just now to be sure.

It used to work before and only failed with the update. I did: download the new image, stop the container, clear it, start it again. That worked at least 20 times before.

Thank you!

Looks like it’s a problem with 0.107.7. I went back to 0.106.5 and it works without any issues.

I am running 0.107.7 without any issues: Synology and the HA Core docker container with Z-Wave. Others have reported issues with Z-Wave in this version, but I have not experienced any. That error is just one of the few things I knew to check whenever my docker container had problems talking to the Z-Wave dongle.

Same for me…

2020-04-06 21:20:55.824 Info, mgr, Added driver for controller /dev/ttyACM0
2020-04-06 21:20:55.824 Info, Opening controller /dev/ttyACM0
2020-04-06 21:20:55.824 Info, Trying to open serial port /dev/ttyACM0 (attempt 1)
2020-04-06 21:20:55.824 Error, ERROR: Cannot get exclusive lock for serial port /dev/ttyACM0. Error code 11
2020-04-06 21:20:55.824 Info, Serial port /dev/ttyACM0 opened (attempt 1)

root@NAS:~# ls -la /dev/ttyACM*
crw------- 1 root root 166, 0 Apr 6 21:28 /dev/ttyACM0
root@NAS:~#

I enabled privileged mode and it’s still the same.
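For what it’s worth, error code 11 on Linux is EAGAIN (“resource temporarily unavailable”): some other process already holds an exclusive lock on the serial port, so privileged mode and permissions won’t help. Here is a minimal sketch of the same failure using flock(1) on a throwaway file (this is an illustration, not the actual OpenZWave code):

```shell
lockfile=$(mktemp)

# First "process" takes an exclusive lock on the file and holds it briefly:
(
  flock -x 200
  sleep 2
) 200>"$lockfile" &

sleep 0.5   # give the background job time to acquire the lock

# Second non-blocking attempt fails, just like OpenZWave's lock attempt:
if flock -xn "$lockfile" -c 'true'; then
  result="got lock"
else
  result="lock held (EAGAIN)"
fi
echo "$result"

wait
rm -f "$lockfile"
```

So the question is which second process is holding the port.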

You might try deleting your container and rebuilding from scratch from the image.

I fought with a similar issue for three days; it caused a variety of problems connecting to different integrations. I also had the “Failed to create HTTP server at…” error in my logs. It turned out I was getting this error because a second instance of HA was running at the same time. There was no indication of it in the Docker controller, and much of the UI worked just fine. But I did notice that CPU/memory usage was higher than normal, and I was seeing more polling delays in the log.

While I was in the process of moving to MariaDB to solve the queuing issues (it didn’t solve this problem, but I’m extremely happy with the much faster response now), I ran across some notes on this in an issue on the main HA GitHub. I am unable to find it again or I would link it for you; perhaps someone else can help. The bottom line is that the Execution Command in the container had information on the mounting point that was causing a second instance to fire up inside the container. The Execution Command should be just “/init”.

I am not smart enough to provide more details. But if you have just been clearing your container to upgrade for a while, this may be your root cause, as it was mine.

Anyway, it is an easy fix and now my logs are squeaky clean and no more race conditions on my serial port.

Hope this helps.

Finally found it, have a look https://community.home-assistant.io/t/0-107-multiple-lovelace-dashboards-adds-helpers-new-media-player-card/179393/300?u=chasut


@chasut has a good point. It’s possible there is a remnant of a previous container hanging around. Docker on Synology does have some issues with that and I’ve had to use portainer to manage it or even use the docker command line over ssh to clear out old images and containers. If a container already has the zwave device in use, another container will report errors but not necessarily be prevented from starting. Could also try rebooting your NAS if possible. I’ve had to do that to clear up docker issues as well.


I have rebooted my NAS multiple times but still get the following error on 0.107.7:

Error, ERROR: Cannot get exclusive lock for serial port /dev/ttyACM0. Error code 11

Try recreating the container from scratch and pointing it to your existing config.

This just started for me as I hopped from 107.x to 108.3.

226 root      0:38 python3 -m homeassistant --config /config
251 root      0:27 python -m homeassistant --config /config

This is the official Docker container from the HA team. I don’t recall seeing this sort of process weirdness before. Anyone else seeing the same? I checked lsof, noted two processes with /dev/ttyUSB0 open, and that led me to going into the container and finding the above.

Seems like we may just have competing HA services running, causing one to win the lock race.
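As a quick sanity check, you can count the Home Assistant instances in a process listing. This sketch runs against the output captured above; in a live setup you would feed `docker exec <container> ps` (container name is whatever you chose) into the same grep:

```shell
# Captured `ps` output from inside the container (from the post above):
ps_output='226 root      0:38 python3 -m homeassistant --config /config
251 root      0:27 python -m homeassistant --config /config'

# Count the Home Assistant processes; anything above 1 means two instances
# are racing for the same serial port:
count=$(printf '%s\n' "$ps_output" | grep -c -e '-m homeassistant')
echo "instances: $count"   # prints "instances: 2" for this capture
```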

I just filed https://github.com/home-assistant/docker/issues/113 to get some confirmation on this.


So the situation we have is apparently related to the Dockerfile changes made during 107.x. The “fix” is to review your container configuration for a ‘cmd’ value. I exported my configuration in the Synology interface, edited the file to empty the value for ‘cmd’, and imported the new configuration. No more Z-Wave exclusive-lock message, and things seem to be working. Hope that helps!
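For anyone looking for the exact spot: the JSON exported from the Synology Docker UI has a top-level ‘cmd’ field. The snippet below is only an illustration of what the edited export might look like; the image value is a placeholder and the other keys will differ on your system:

```json
{
  "cmd": "",
  "image": "homeassistant/home-assistant:latest"
}
```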

@mstanislav, could you please provide some details on how you solved this issue? I have the same problem on my Synology. Thanks in advance!

In my case, “docker ps” only showed ONE container, however, this gave me TWO results:

root@Lakewood: ~ # ls -l /proc/*/fd | grep ACM
lrwx------ 1 root root 64 Apr 6 06:30 10 -> /dev/ttyACM0
lrwx------ 1 root root 64 Apr 6 06:30 10 -> /dev/ttyACM0

That means that there are two processes trying to access the z-wave stick, amirite?

Yep, there’s only one container but two instances of Home Assistant running inside of it, each of which tries to use the same device. So what you are seeing is what I’d expect.

If folks create a new container instance and don’t pass in a ‘cmd’ value (which was most likely set before) all of this should be resolved.


Thank you for your effort, Mark. Works perfectly now :heart_eyes:

Just for completeness: How would I inspect my container config for that “cmd” value? Export the config and look through the JSON file?


I went into the Synology Docker GUI, chose my HA container, and exported its configuration (only). I edited the JSON to remove the value I had set for the ‘cmd’ key, saved it, and imported it back into Docker. Just be sure to at least stop your previous HA container, as you will now have two (unless you decide to delete the old one before testing the new one).

The easiest way to discover whether you are suffering from this problem is to go into the details of your running container and check the “Process” tab. If you see TWO lines similar to the first one in my screenshot (with or without the “3” after python, and with different process IDs), Home Assistant is running twice and you should recreate your container.