RPi as Z-Wave/ZigBee-over-IP server for Hass

Make sure you have enabled the kernel module and then try running the usbip command from the command line. There should be an option to list connected ports and list available ports on a remote host.
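
From memory, the flow is something like this (module names as in mainline Linux; the server IP is a placeholder):

# On the server (the machine with the physical stick):
sudo modprobe usbip_host
sudo usbipd -D              # start the daemon, listening on TCP 3240
usbip list -l               # list local devices available for export
# On the client:
sudo modprobe vhci-hcd
usbip list -r <server-ip>   # list devices exported by the remote host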

Sorry I don’t have the exact steps (I’m out of town), but the kernel module should be in the link above, and the commands can be found with usbip --help.

I already loaded the kernel module and checked that the modules are loaded:
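
One way to check, assuming the standard module names:

lsmod | grep -e usbip -e vhci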

When I try to use usbip, I always get the same error:

Am I missing a symlink?

@Niklas - I get this error when the host (the device with the physical USB device) is offline or unreachable. Check that the service is up and running on the host, that it’s exporting the correct devices, and that it’s reachable from your client.
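
Quick checks from the client, for example (3240 is usbip’s default port; substitute your server’s IP):

ping -c 3 192.168.178.31
nc -zv 192.168.178.31 3240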

Sorry that I’m still bothering you :smiley:
From the client I can ping the host, and usbipd is listening on port 3240 on the host, so that should be fine…

I just noticed the double slash in the path (/hwdata//usb.ids, last screenshot). Could that be the reason?

Your usbip list command from the client should look like:

usbip list -r 192.168.178.31

If that works, then at least your usbip server with your zwave stick is working.
In your client’s systemd service script, you can either modify the path to where your usbip command lives, or create a symlink.

To modify: get the path with which usbip, and use that path in your /lib/systemd/system/usbip.service

To symlink: ln -s $(which usbip) /usr/local/sbin/usbip
In my case, my usbip was in /usr/sbin/usbip
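
A minimal sketch of such a client unit (the IP matches the example above; the bus ID 1-1.2 is a placeholder for your stick’s actual bus ID from usbip list -r):

[Unit]
Description=Attach remote USB device over usbip
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Use the full path reported by `which usbip` here:
ExecStart=/usr/sbin/usbip attach -r 192.168.178.31 -b 1-1.2
ExecStop=/usr/sbin/usbip detach -p 0

[Install]
WantedBy=multi-user.target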

I have spent the last several days exercising this approach extensively, with usbip as a service on an rPi and hass.io running in a Hyper-V VM. I like the result a lot (thank you), and it is stable in use, but it does not survive any kind of restart.

Rebooting the client (Hyper-V VM) from the outer Ubuntu OS works OK; rebooting it from the hass.io docker causes the rPi server to think the connection is still in use, and it will not then reconnect when the hass.io side restarts. Rebooting the rPi (or restarting the service on it), then starting the service on the Hyper-V client, THEN restarting the HA service from the HA configuration page will restore it.

Rebooting the server (or restarting the usbip service on the rPi server) breaks the connection but does not cause the service on Hyper-V to fail, so it does not restart, and it never works again until you restart it manually (and then restart HA again).

This is workable but awkward: any time I reboot either side, I have to take care and check that everything is functional. I can’t find any kind of watchdog or heartbeat setting that might automate this. Since in the rPi reboot case both ends have the service running but not functional, it is hard to script something to notice the failure (the only clue I have is that an HA restart gives an error on the zwave integration startup).

Those using this – have you found a way to make the client/server reliably survive reboots?

Linwood

So I was having problems with this as well. I run hass.io in Docker on a RockPi in my network room, and the physical Z-Wave stick is on a Raspberry Pi running MagicMirror2 by the front door. Your comment inspired me to finally implement my plan for a more robust restart connection (especially after it died in a power outage yesterday).

I’ve updated my gist here: https://gist.github.com/shbatm/1526174a9d285c342eb05c9efa35c02d

This now includes a “usbipManager.sh” script (basically an expanded init script) which does a few extra things:

  1. Before connecting: ping the host and see if it’s available. If not, wait and retry using an exponential delay (1s, 2s, 4s, 8s, etc., up to 256s); see the sketch after this list. This gives things time to settle after, for example, a whole-home power outage, where everything comes back at the same time but you have to wait for the other server to come up first.
  2. After confirming the host is there, check whether the USB-IP service is available. If it isn’t, try to restart the service on the remote machine via SSH. FOR THIS TO WORK you must have public-key authentication set up in both directions between the two machines. This will “fail” the script on the client, but the server will “call back” on service restart and restart the client.
  3. If everything is good, mount the device.
  4. After mounting the device, restart the homeassistant Docker container if it’s already running.
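
For illustration, the retry loop in step 1 boils down to something like this sketch (the host IP is a placeholder; the real logic is in the gist above):

# Ping the usbip server with exponential backoff before attaching.
HOST=192.168.178.31
DELAY=1
until ping -c 1 -W 2 "$HOST" > /dev/null 2>&1; do
    echo "Host $HOST not reachable, retrying in ${DELAY}s..."
    sleep "$DELAY"
    [ "$DELAY" -lt 256 ] && DELAY=$((DELAY * 2))   # doubles up to the 256s cap
done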

On the USBIP server side:
The server’s .service file actually includes a line to call the client (via SSH) and attempt to restart the client’s service when the server’s service becomes available. It continues if the call fails, but if it succeeds, this cascades into re-mounting the port on the client and restarting Home Assistant. This is useful if I have to restart the MagicMirror for any reason: when I do, it calls the hass.io server and tells it to reconnect and restart.
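
In systemd terms, that call-back can be a single line in the unit (hostname and unit name here are placeholders; the leading “-” tells systemd to carry on even if the SSH call fails):

ExecStartPost=-/usr/bin/ssh pi@hassio-client 'sudo systemctl restart usbip'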

Thank you @shbatm. I had vague ideas of doing something similar, but your code will save me a lot of time. I had gotten distracted with a related issue, however, and wonder if you have seen it.

When I restart the service on the client, sometimes but not always, it changes the device names. On the server I have /dev/ttyUSB0 and /dev/ttyUSB1 for a Nortek stick. At first they match exactly on the client. When I just do a systemctl restart usbip on the client side, they move to /dev/ttyUSB2 and /dev/ttyUSB3. If I restart again they do not advance further, though as I write this I realize I also never changed HA’s config to use the new devices, so maybe the issue is some kind of in-use condition not allowing the old device names to be reused?

I was experimenting with udev rules to force the naming of the client-side usbip devices, with zero luck. It is almost as though udev is not involved in their creation. Now that I have thought of the in-use issue, maybe I can change the stop condition to always restart HA’s container and see if that releases it.
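
For reference, the kind of rule I was attempting looked roughly like this (vendor/product IDs are illustrative, not necessarily my stick’s):

# /etc/udev/rules.d/99-zwave.rules
SUBSYSTEM=="usb", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="8a2a", SYMLINK+="zwave"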

But have you seen this issue and have any insight?

So far my only solution has been a reboot of the client’s containing VM, and I am loath to script that in the service for fear some condition will just get it stuck in a boot loop. If udev would work, even if it failed to create the device because of a name collision, my thought was the restart would fail and I could do something; dealing with a new device name is much more of a pain in terms of scripting, since I would need to change the configuration.yaml file inside the docker container.

Postscript: I see you have a udev rule in there also; I’ll try yours (I was using subsystem usb, as that’s how the device showed up).

Further to the issue of device names, it appears (e.g. the first answer here) that docker (how I’m running hass.io) does not allow symlinks (discussion here; apparently there are no plans to allow them). So as best I can tell, I need to find out why I am getting shifting names.

I take it you are not using docker for hass.io, since you are using a symlink (which I can get working)?

This is what I’m dealing with. I thought perhaps it was caused by the docker containers remaining active and holding a reference to the device but apparently not.

root@ha:~# service hassio-supervisor stop
root@ha:~# docker stop homeassistant
homeassistant
root@ha:~# ls -lat /dev/ttyU*
crw-rw---- 1 root dialout 188, 1 May  3 15:14 /dev/ttyUSB1
crw-rw---- 1 root dialout 188, 0 May  3 15:14 /dev/ttyUSB0
root@ha:~# systemctl stop usbip
root@ha:~# ls -lat /dev/ttyU*
ls: cannot access '/dev/ttyU*': No such file or directory
root@ha:~# systemctl start usbip
root@ha:~# ls -lat /dev/ttyU*
crw-rw---- 1 root dialout 188, 2 May  3 15:43 /dev/ttyUSB2
crw-rw---- 1 root dialout 188, 3 May  3 15:43 /dev/ttyUSB3

I cannot figure out why a service restart is, seemingly every time, changing the names of the devices. Forcing a symlink with udev does not help, as the devices need to be visible inside docker and it doesn’t do symlinks. I’ve tried this while stopping all docker containers, and even docker itself, in between, and it makes no difference: every time I restart I get 2 and 3. Restart again and still 2 and 3 (though I have yet to actually try using them as 2 and 3).

Postscript: To fill this out a bit, the devices do not appear “real” in some way inside docker. If I leave docker (and HA/Hassio) running, restart usbip, and then try to list the devices from inside docker, they are not quite there.

root@ha:~# docker exec homeassistant ls -la /dev/ttyU*
ls: /dev/ttyUSB2: No such file or directory
ls: /dev/ttyUSB3: No such file or directory
root@ha:~# docker exec homeassistant ls -la /dev
<long list filtered for here> 
crw-rw----    1 root     dialout   188,   0 May  3 12:35 ttyUSB0
crw-rw----    1 root     dialout   188,   1 May  3 12:35 ttyUSB1

I have no idea what it means to be able to list them in the folder but not with a wildcard, but it must be a clue (perhaps the wildcard is expanded by the host shell before docker exec runs, so it matches the host’s ttyUSB2/3 while the container’s /dev still holds the stale ttyUSB0/1). Before the restart you can list them with a wildcard.

Postscript #2: If I restart usbip (to get ttyUSB2) and then change configuration.yaml to reference ttyUSB2, HA works. If at that point I restart usbip, I get ttyUSB4. So it definitely appears to be some kind of in-use condition that is forcing the creation of new names.

Last postscript: I have tried all sorts of combinations, restarting the server’s and client’s services, restarting docker, etc., and I cannot avoid the change from USB0 to USB2 with anything I do, and cannot figure out a way to fix it short of a reboot. A symlink, which is the only workaround I can find, won’t work inside the docker container.

See my post on the other thread:

I do use a udev rule for the client host for convenience, but inside Hass.io I use the by-id serial device, which is persistent.
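
To find it, list the persistent names; these stay stable across reconnects even when the ttyUSB number changes (the exact entry depends on your stick):

ls -l /dev/serial/by-id/

Then use that /dev/serial/by-id/… path in configuration.yaml instead of /dev/ttyUSBx.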

I didn’t even know that existed, but it does seem to work. It requires a restart of the docker homeassistant container for the new device to be visible (after a restart of usbip), but that was kind of expected.

Thank you. I was going around in circles trying to fix the USBx name difference, and did not even know this existed.

It does beg the question of what kind of dangling in-use condition is causing the number to increment, but it does not seem to do any harm.

Thank you!!

Glad it’s working. I pulled my hair out for a few days when I was trying to get it set up (even rolling my own Docker image with socat/usbip inside) until I stumbled on the by-id tag by dumb luck.

I have the exact same wish and situation as you have, @tbrasser, but I’ve yet to come to a good enough solution.

I’ve tried usbip for a while, and while it definitely works, it’s a bit of a hassle and requires more maintenance than I’d like, as well as tying the HA container to a single machine. Also, each restart of HA implies restarting the Z-Wave network as well, which is slow and impractical. The same argument applies to socat/ser2net, since it also relies on software to create a remote serial device in /dev on the machine running the HA instance.

This means, for instance, that I can’t run HA in a container cluster environment, such as Docker Swarm or Kubernetes, as I require a local, machine-specific resource, namely /dev/tty[something].

I suppose I could run usbipd in a privileged sidecar container, and as such would be able to run it in a cluster, the downsides being that the sidecar has to run as “privileged” and that container deployment has to be strictly ordered so that /dev/tty[something] can be mounted into the HA container.
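
Very roughly, in docker-compose terms the idea might look like this (image name and device path are hypothetical; this sketches the ordering, it is not a tested setup):

version: "3"
services:
  usbip-sidecar:
    image: my-usbip-client        # hypothetical image that runs `usbip attach`
    privileged: true
  homeassistant:
    image: homeassistant/home-assistant
    depends_on:
      - usbip-sidecar
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0 # only exists once the sidecar has attached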

I would wish for a Z-Wave-over-MQTT solution, or similar, that actually worked well in situations like your “EDIT3”, @tbrasser. I’m not entirely sure how it could be implemented so that it still takes advantage of all the Z-Wave features. I think the computer with the Z-Wave stick would run openzwave and act as a gateway with an API.

I think perhaps the solution is coming from SiLabs itself, in the form of Z-Wave 700 with Z/IP (Z-Wave to IP), as described here: https://www.silabs.com/products/wireless/mesh-networking/z-wave/700-platform/z-wave-700-gateway-development

This is a UDP wrapper for Z-Wave command classes; it handles Z-Wave network management and security, and has a “mailbox” for battery-driven devices.

Apparently, we should be able to buy a “Z-Wave 700 Development Kit” (SLWSTK6050A) for a cool USD 379.

Any thoughts?

Is anyone using a second HASS instance with just Z-Wave and MQTT Statestream?

Hi

Have you tried making the Windows client run again? Any luck?

First of all, nice guide and idea.
It’s difficult to judge whether usbip or the ser2net method is the better choice.

I am reading a lot and planning a setup with a dedicated device for the wireless receiver (USB dongle).

So my setup will be similar to others’:
1.) Hass.io running in Docker on a VM on Proxmox.
2.) Z-Wave dongle on an RPi in a more central place in the house.

Yesterday I was almost done with my research about usbip and ser2net and the decision has been made to use usbip.

But then I read some more MQTT material and found that Zwave2Mqtt (Z-Wave to MQTT) could be an alternative.

So far I understand the following is needed for the dedicated setup:
1.) Hass.io installed with the standard MQTT broker, in my case on Proxmox
2.) Zwave2Mqtt installed on the RPi with the Z-Wave dongle plugged in

As far as I understood, the inclusion will be done on the RPi and pushed to hass.io automatically.

With MQTT the devices can be read and controlled.
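
To picture it: a device published by Zwave2Mqtt could be wired into hass.io with an MQTT entity along these lines (topics and names are made up; the real ones depend on your Zwave2Mqtt gateway settings):

# configuration.yaml (hypothetical topics)
switch:
  - platform: mqtt
    name: "Living room plug"
    state_topic: "zwave/living_room_plug/status"
    command_topic: "zwave/living_room_plug/set"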

Any thoughts about this idea?
Are you aware of any limitations?

BR Christian

Zwave2Mqtt is what I switched to. It’s a bit more hassle to set every device up (I have several templates for my lock), but it is by far more stable.

Great to hear you are using it already.

But once again, this means you have Mosquitto as the broker on hass.io and Zwave2Mqtt on your RPi?
Does it mean the inclusion is done on the RPi?
What exactly is more hassle about setting every device up? Is it the setup in Zwave2Mqtt, or the way the messages are sent to hass.io?
It would be highly appreciated if you could get me onboarded :wink:

thx
Chris