For the past two weeks I haven’t been able to use my Conbee II USB device. I’ve tried the forums and submitted an issue on core. Logs also show that the Conbee is correctly detected by HAOS, but ttyd keeps segfaulting (screenshot below).
So far, nobody seems to have an idea of how to solve this.
Neither deCONZ nor ZHA is able to connect to the usb-serial port (Conbee II, /dev/ttyACM0).
The numerical GID ownership of the device is 18.
One thing that bugs me is that the dialout GID on HAOS is 18, vs 20 on core and on my laptop, for example. Conversely, the audio GID is 18 on core, vs 29 on HAOS and, again, on my laptop.
So, unsurprisingly, the usb-serial port is owned by audio on core and dialout on HAOS.
I don’t know how deCONZ or ZHA tries to access the device, as root or as a member of dialout. If it’s the latter, then obviously there is no access.
Bottom line: a simple change of the dialout GID to 20 on HAOS could solve the permission issue. But this is not possible on the live system with a simple groupmod (read-only squashfs).
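For anyone who wants to check the same thing, this is roughly how I confirmed the mismatch from the HAOS host console (a sketch; /dev/ttyACM0 and GID 18 are from my system, yours may differ):

```bash
# Numeric owner/group of the serial device on the HAOS host
ls -ln /dev/ttyACM0          # e.g. crw-rw---- 1 0 18 ... /dev/ttyACM0

# Which group name that GID maps to on the host
grep ':18:' /etc/group       # dialout:x:18: here

# The dialout and audio entries for comparison
grep -E '^(dialout|audio):' /etc/group
```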
Yeah, that’s the million dollar question. I first noticed the Zigbee issue when I wasn’t able to connect to my thermostat via deCONZ. It was almost midnight and I was not really thinking when I opened the Android app to see what was going on. Unfortunately, I started a system update that I had just noticed and also deleted my old backups – yes, I know!
Long story short, I do not have a working system state to compare or go back to!
Also, this seems to be an OS issue (or an add-on issue).
To me it looks like a HAOS issue (if it’s related to the GID): the docker containers (core, deCONZ, etc.) seem to take the GID numerically from HAOS. I am not a docker expert, though…
Yes, I will do that. I wanted to try the forums first, hoping to get some pointers on how to proceed, or at least confirmation that I am not the only one having the issue. Speaking of which, are you using any usb-serial adapters on your system? If you have access to the HAOS console and it’s not too much trouble, I’d like to know the dialout and audio GIDs on your system (grep dialout /etc/group && grep audio /etc/group).
Regarding submitting a new issue, my experience with the issue on deCONZ was quite disappointing. The response I got from the developer who supposedly knew the most about the add-on was, paraphrasing, “it’s open source, not commercially supported software, so please don’t bother me.” Not a good omen for their nascent enterprise.
ZHA is part of Home Assistant, which on HAOS runs in a docker container managed by the supervisor (also running in docker). That core container should run in “privileged mode” on HAOS and have root access to all USB devices.
Did you try to specify the path to the USB device in ZHA by /dev/serial/by-id/ instead of ttyACM0?
Most devices need at the very least the serial device path, like /dev/ttyUSB0, but it is recommended to use the device path from the /dev/serial/by-id folder, e.g., /dev/serial/by-id/usb-Silicon_Labs_HubZ_Smart_Home_Controller_C0F003D3-if01-port0
A list of available device paths can be found in Configuration > Add-ons & Backups > System > Host > dot menu > Hardware.
You are right, but of course the /dev/serial/by-id device node will have the same permissions as the /dev/ttyACM0 node as the former is just a symlink to the latter.
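Just to illustrate the point (a sketch; the exact by-id name for a ConBee II will differ per device):

```bash
# The by-id entry is only a symlink to the ACM node...
ls -l /dev/serial/by-id/
# ... usb-dresden_elektronik_..._ConBee_II_...-if00 -> ../../ttyACM0

# ...so it resolves to the same node, with the same owner/group/mode
readlink -f /dev/serial/by-id/usb-*ConBee_II*-if00
ls -ln /dev/ttyACM0
```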
ZHA is part of Home Assistant, which on HAOS runs in a docker container managed by the supervisor (also running in docker). That core container should run in “privileged mode” on HAOS and have root access to all USB devices.
Even if the container runs with root access, the contained process (e.g., the ZHA component) should still access the port with the dialout group permissions only, instead of as root – that’s how it is typically done for security reasons, but I’m not sure about HA. On my system at least, dialout on HAOS maps numerically to audio on Core/Supervisor. Since (to my understanding) HAOS controls access to that port via group membership, accessing the port as dialout on Core/Supervisor results in access denied on HAOS.
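For what it’s worth, this is roughly how I compared the two views (a sketch; I’m assuming the core container is named homeassistant, which is what I see on my HAOS install – adjust if yours differs):

```bash
# On the HAOS host: which numeric GID owns the port and what it's called
ls -ln /dev/ttyACM0
grep ':18:' /etc/group                    # -> dialout:x:18: on HAOS

# Inside the core container: same number, different name
docker exec homeassistant ls -ln /dev/ttyACM0
docker exec homeassistant grep ':18:' /etc/group   # -> audio:x:18: there
docker exec homeassistant id                       # the process user and its groups
```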
Did you try to specify the path to the USB device in ZHA by /dev/serial/by-id/ instead of ttyACM0?
Yes, that is how I did it, although it shouldn’t matter since the by-id path is just a link.
Most devices need at the very least the serial device path, like /dev/ttyUSB0, but it is recommended to use the device path from the /dev/serial/by-id folder, e.g., /dev/serial/by-id/usb-Silicon_Labs_HubZ_Smart_Home_Controller_C0F003D3-if01-port0
A list of available device paths can be found in Configuration > Add-ons & Backups > System > Host > dot menu > Hardware.
See above – I did confirm the path by checking the HW listing.
@nickrout
I did not see a deCONZ add-on repo, so I submitted the issue on core. I also submitted one on HAOS this morning. There are several issues there specifically mentioning lost access to the Conbee after updating HAOS. The root cause could be the same GID-mapping-related permissions issue.
Unfortunately, looking at the code/wiki, the only way I can see of changing the dialout GID on HAOS is to do a full rebuild of HAOS with buildroot. That’s a bit too much for me at the moment, when I am not 100% sure it would even fix my problems. Hopefully the HAOS devs will be a bit more responsive…
@mwav3
Thanks for mentioning the CONFIG idea – good to know for the future! I did use part of it before to get SSH access to HAOS.
I did consider writing a udev rule, but realized it would not work because, at least on my system, there is no GID 20 on HAOS. That’s the standard dialout GID on most systems, including containers like deCONZ and Core.
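For reference, the kind of rule I had in mind was something like the sketch below (the 1cf1/0030 vendor/product IDs are what I believe the ConBee II reports – treat them as an assumption – and the file name is arbitrary; GROUP has to name a group that actually exists on HAOS, which is exactly the problem):

```bash
# Hypothetical udev rule: force the ConBee II node into a given group/mode.
# On HAOS, "dialout" is GID 18, not the 20 the containers expect,
# so this would not fix the numeric mapping.
cat <<'EOF' > /etc/udev/rules.d/99-conbee.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="1cf1", ATTRS{idProduct}=="0030", GROUP="dialout", MODE="0660"
EOF
udevadm control --reload-rules && udevadm trigger
```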
It doesn’t matter what the GID number is so much as whether the deCONZ binary is executed as a user in the same group.
Correct! I meant I wanted to minimize the changes and work needed. As in, have the GID in HAOS match the value expected/required by the containers, rather than the other way around.
But, apparently, permissions don’t matter, as all add-ons run as root inside the containers, which also run as root on HAOS! So, I guess, think twice about using a camera or a microphone with a HAOS-based system.
I read their response. With the root and privileged way HAOS runs, I didn’t think it was a permissions issue.
I get their point that running Home Assistant as a stand-alone “appliance” device with just HassOS and the add-ons probably isn’t much of a security concern, even with the root and privileged access. However, running it “supervised” with all that access on another machine with other software (which is actually not supported) would give me second thoughts about security.
If you want more control over the OS you can try Home Assistant Container. I run it that way and use zigbee2mqtt in docker for zigbee. That way I’m only giving the containers specific access to what I want them to have, and I don’t use privileged mode. I’m still running docker as root though, but I see more support in docker for “rootless” mode.
Not sure how rootless would work for home assistant container though.
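For illustration, a non-privileged Home Assistant Container run can look roughly like this (a sketch based on the documented docker run command; the timezone and config path are placeholders, and no devices are mapped in because zigbee/zwave are handled by their own containers):

```bash
docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  -e TZ=America/New_York \
  -v /opt/homeassistant/config:/config \
  --network=host \
  ghcr.io/home-assistant/home-assistant:stable
```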
Getting back to the conbee not working, did you try a firmware update?
The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker.
Assuming the other containers also run as root (as apparently happens with HAOS-based systems), it seems to me that running HA in a privileged container on a regular system is actually worse than running it on HAOS: the latter has a much smaller attack surface (read-only squashfs, most commands don’t work, etc.). In other words, to my understanding, the base/host system is the connection between all containers as far as root privileges go. So the more you can do as root on the host, the more likely it is that a rogue container can do nasty things.
Running the rest of the containers without root should be more secure.
That way I’m only giving the containers specific access to what I want them to have, and I don’t use privileged mode
I am actually curious how you run the other containers unprivileged while running HA as root. Can you post some links please?
I’m going to guess that the decision of the HA devs to run everything as root is mainly due to the UID/GID mismatches between the buildroot base image (I think it’s Alpine) and the docker containers, likely running Debian or Ubuntu images. But I don’t know enough about the HA architecture to be sure.
The more I learn about this, the more I’m inclined to switch to a HA Core install with a virtual environment. According to the installation instructions, HA then runs as a regular homeassistant user which only has the additional dialout, gpio and i2c group memberships, which seems to be all it really needs.
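For reference, the relevant bit of those instructions boils down to a dedicated user with just those groups, roughly (a sketch; whether gpio and i2c exist depends on the machine):

```bash
# Dedicated system user for an HA Core venv install,
# member only of dialout, gpio and i2c
sudo useradd -rm homeassistant -G dialout,gpio,i2c
```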
Getting back to the conbee not working, did you try a firmware update?
I agree, that seems the only remaining option. Will try soon and update here.
I agree, and don’t like privileged mode and don’t use it. The documentation used to not say to install Home Assistant Container in privileged mode. When they changed it, I posted concerns here
First, I don’t use zha, and don’t need to map any devices to the home assistant container.
I run zigbee2mqtt, installed per these instructions: Docker | Zigbee2MQTT. I also run zwavejs2mqtt in docker per these instructions: https://zwave-js.github.io/zwavejs2mqtt/#/getting-started/docker. Zigbee2mqtt communicates with Home Assistant via MQTT through a mosquitto installation running on the “bare metal” of the host Ubuntu machine. Zwavejs2mqtt communicates with the zwavejs integration on port 3000.
Both zwavejs2mqtt and zigbee2mqtt do not need privileged mode (to be clear, the docker versions, NOT the add-on versions), and use the device flag instead to map the USB device into the docker container. With the device flag, zigbee2mqtt only accesses the zigbee stick, and zwavejs2mqtt only accesses the Z-Wave stick.
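To make the --device approach concrete, the run command looks roughly like this (a sketch modeled on the Zigbee2MQTT docker instructions; the data path, timezone and device node are placeholders – the /dev/serial/by-id path works here too):

```bash
docker run -d \
  --name=zigbee2mqtt \
  --restart=unless-stopped \
  --device=/dev/ttyACM0 \
  -v /opt/zigbee2mqtt/data:/app/data \
  -v /run/udev:/run/udev:ro \
  -e TZ=America/New_York \
  koenkk/zigbee2mqtt
```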
By default, Docker containers are “unprivileged” and cannot, for example, run a Docker daemon inside a Docker container. This is because by default a container is not allowed to access any devices, but a “privileged” container is given access to all devices (see the documentation on cgroups devices).
The --privileged flag gives all capabilities to the container. When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with --privileged is available on the Docker Blog.
If you want to limit access to a specific device or devices you can use the --device flag. It allows you to specify one or more devices that will be accessible within the container.
So, the less the container needs access to, the better, security wise.
It probably is the most secure way if done properly, but also the most complicated.
I agree, and don’t like privileged mode and don’t use it.
So do you also run the HA container unprivileged, or just the other ones (zigbee, zwave, etc.)?
Both zwavejs2mqtt and zigbee2mqtt do not need privileged mode (to be clear, the docker versions, NOT the add-on versions), and use the device flag instead to map the USB device into the docker container. With the device flag, zigbee2mqtt only accesses the zigbee stick, and zwavejs2mqtt only accesses the Z-Wave stick.
I see! So your solution is to run the “original” (so to speak) docker images instead of the add-ons and connect everything via MQTT. I suspect it’s fast and reliable when everything is on the same host.
Thanks for the pointer to running the images with the device flag! Good to know.
I run 10 different containers and none of them, including Home Assistant, runs privileged.
Yes, since Home Assistant Container is basically just core with no add-ons, I install the equivalent original docker image instead of the add-on. Some use MQTT. Some just publish ports with the -p option (e.g., Node-RED opens port 1880, as sketched below) and communicate back to Home Assistant over the LAN/docker network.
There are a lot of install options for Home Assistant, but the main reason I did this is to run other software on the same machine, without needing a VM. It’s definitely very fast and reliable.
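As an example of the ports-only case, Node-RED needs nothing but its web port published – no devices and no privileged mode (a sketch per the Node-RED docker docs; the volume name is a placeholder):

```bash
docker run -d \
  --name nodered \
  --restart=unless-stopped \
  -p 1880:1880 \
  -v node_red_data:/data \
  nodered/node-red
```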