Sanity check: I cannot use Hass.io and do this, right?

I am trying to run Home Assistant (the Hass.io vhdx image) in a Hyper-V VM and then communicate with a Z-Wave stick on a Raspberry Pi acting as a Z-Wave server, similar to this:

Do I understand correctly that Hass.io (at least in the vhdx version I am using) does not have the kernel module (vhci-hcd) needed to support the TCP connection to the remote server, i.e. to run the usbip client as described?
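For anyone wanting to verify this themselves, here is a rough sketch of how to check whether a given Linux kernel can act as a usbip client. The module names (`vhci_hcd` as lsmod prints it, `vhci-hcd` for modprobe) are the standard ones from the usbip packages; the helper reads an `lsmod`-style dump on stdin so it can be tried against canned output:

```shell
#!/bin/sh
# Sketch: check whether the running kernel can act as a usbip client.
# The client side needs vhci-hcd; the server side needs usbip-host.
# has_module reads an lsmod-style dump on stdin so it is easy to test.

has_module() {
  # $1 = module name as lsmod prints it (underscores, e.g. vhci_hcd)
  grep -q "^${1}[[:space:]]"
}

# On a live system you would run something like:
#   lsmod | has_module vhci_hcd && echo "usbip client module loaded"
#   sudo modprobe vhci-hcd    # try loading it if it is not
```

If `modprobe vhci-hcd` fails with "module not found", the kernel image simply doesn't ship it — which was my experience with the pre-built vhdx.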

So, short of submitting a feature request/change upstream, I need to use some other installation method, such as Docker?

Which means I cannot use the add-ons from the “store” and instead need to install them manually?

And some of those features may or may not exist separately? Or does everything (relevant) in Hass.io also exist separately for the docker image?

Am I better off using a different configuration for the Linux install? (There are almost too many install options; all are clear about how to do it, but not really clear about the implications, especially the implications relative to planned future HA development.)

Any general advice on an approach to doing this? I would really like to use Z-Wave in a Hyper-V VM, and direct access to the USB device is not possible on Hyper-V; some form of middleware is needed.

Thanks in advance,


[Note: This is a complete rewrite of my initial reply, as I learned a lot more.]

So let me propose some answers to my own question, in the hopes someone will chime in and correct me if I have it wrong. I spent the weekend trying a ton of alternatives and experiments, and reading. A lot of what I am about to write is supposition, not documented fact. And it’s a long ramble, so I will bold some conclusions for those with short attention spans.

First – Hass.io vs. other alternatives: My strong impression is that the trajectory is toward making Hass.io easier, with most modules going into and integrating with Hass.io now (especially post-ingress) with almost no need for documentation. The counterpoint is that, at least relatively speaking, using alternative installation methods is getting much harder. Partially this comes from Hass.io (and in particular the “store”-type modules) getting easier, but partially it comes from the rapid changes occurring and the latency (or lethargy) in updating the documentation for the alternatives. After all, if most people are just clicking on a tile to install (say) Configurator, updating all the documentation for its install on docker or on another VM becomes less likely.

Configurator is a good example – I got a version running in a VM and then found documentation to install Configurator – then found another, separate version that was different and clearly reflected a change (as it referred to the old way). I followed it and it failed – my fault, perhaps. But I do a lot of Linux admin as well as programming, and after 30 minutes or so I just punted and moved on. Clicking a tile is so much easier.

So the gist of that aspect of my experience: I want to stay with a Hass.io format, so as to be more in keeping with Home Assistant’s own development path.

So… Hyper-V versions. Initially I believed that the only way to get Hass.io running on Hyper-V was the pre-configured vhdx image. This ran fine, but did not have a conventional Linux shell where I could add services (like usbip).

I learned later that it is really easy to run Hass.io inside a docker container on a Hyper-V Linux VM (e.g. Ubuntu is what I used). That works very nicely, and provides a customizable Linux while retaining (mostly) the advantages of Hass.io.
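For reference, the plain-container variant is essentially a one-liner. This is just a sketch with placeholder config path and timezone — and note it starts the bare Home Assistant container; the Hass.io/supervised install uses its own installer script layered on top of docker to add the supervisor and add-on store:

```shell
# Sketch only: the config path and TZ are placeholders for your own.
# This is the bare Home Assistant container, not the full Hass.io
# supervised install (which has its own installer script).
docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  -e TZ=America/New_York \
  -v /srv/homeassistant/config:/config \
  --network=host \
  homeassistant/home-assistant:stable
```

`--network=host` matters here: discovery and many integrations misbehave behind docker's default NAT network.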

But… I have a Z-Wave stick. And that is a real challenge with Hyper-V, as Hyper-V will not pass raw hardware-level USB access through the way VMware or VirtualBox will.

So off to VirtualBox. I had never used it before, and there was a bit of going around in circles to get it running and to get the Nortek stick visible. The big issue was that Windows kept turning the stick off; I had to find drivers (even putting in the wrong driver just to get it visible in Windows so I could turn off power management), but then it worked fine.

But VirtualBox is like a toy in some ways – it won’t run as a service (so you have to introduce new tools to make it pretend), but most importantly – NO VLAN SUPPORT. Sorry, I’m spoiled: I can pass trunked or tag-identified VLANs into Hyper-V any way I want. But despite several hours of trying, I could not get a VirtualBox VM onto a different VLAN than the (one-NIC) host I was using. Two NICs would solve this, but really… it did not look like a solid solution. And I personally detest dealing with anything Oracle – I do not trust them to be there, and to work, down the road.

So, with the help of a lot of articles (a central one being here), I tried setting up an rPi as a Z-Wave server. While I had quite a few false starts, this ends up working fine when you start things up carefully.
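As a concrete sketch of the server side: find the stick's busid, bind it, start the daemon. The `0658:0200` vendor:product id below is what my Nortek stick's Sigma Designs Z-Wave chip reports — substitute your own from `lsusb`. The helper parses `usbip list -l`-style output on stdin so the lookup can be tried offline:

```shell
#!/bin/sh
# Sketch of the rPi (server) side. busid_for pulls the busid for a
# given vendor:product id out of `usbip list -l` output, which prints
# lines like:  - busid 1-1.2 (0658:0200)

busid_for() {
  # $1 = vendor:product id, e.g. 0658:0200 (Sigma Designs Z-Wave chip)
  awk -v id="($1)" '$1 == "-" && $2 == "busid" && $4 == id { print $3; exit }'
}

# On a live rPi you would run something like:
#   sudo modprobe usbip-host
#   BUSID=$(usbip list -l | busid_for 0658:0200)
#   sudo usbip bind -b "$BUSID"
#   sudo usbipd -D        # run the usbip server as a daemon
```

Note the bind does not survive a reboot, which is part of the fragility discussed below — on the rPi it has to be redone (or scripted) every boot.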

Before getting to that, I should note that if you search for (non-Home-Assistant) usbip postings, you will find that some Linux distributions ship older usbip versions, and some do not install a kernel-compatible version, so you need to make sure the client and server sides match (though I had it working with non-matched versions as well). I ended up using Ubuntu on the rPi, not Raspbian; I am not sure that mattered, but it was not clear that Raspbian had a fully compatible version. Here are some details:
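One quick sanity check along these lines — a sketch that assumes `usbip version` prints something like `usbip (usbip-utils 2.0)`, which is what my Ubuntu builds showed; check your own output format before relying on it:

```shell
#!/bin/sh
# Sketch: extract the usbip-utils version number so the rPi (server)
# and VM (client) ends can be compared. Assumes output of the form
# "usbip (usbip-utils 2.0)".

usbip_ver() {
  sed -n 's/.*usbip-utils \([0-9.][0-9.]*\).*/\1/p'
}

# On a live pair of machines (zwavepi is a placeholder hostname):
#   [ "$(usbip version | usbip_ver)" = "$(ssh pi@zwavepi usbip version | usbip_ver)" ] \
#     || echo "WARNING: client and server usbip versions differ"
```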

Anyway… the issue is that I now have lots of problems if I reboot: reboot the rPi server and Z-Wave just stops communicating until I restart the Hyper-V client’s usbip service (or reboot the client). Rebooting the client side from Ubuntu works; rebooting from Hass.io (from inside the docker container) does reboot, but the service will not start until the rPi is restarted. I hesitate to call it “unstable,” as it seems to work fine when both ends are running, but it is certainly fragile when I try restarting things.
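What helped me most with the restart ordering was making the client-side attach a proper service, so it can be restarted after the rPi comes back without rebooting the whole VM. A sketch of a systemd unit — the hostname `zwavepi`, the busid `1-1.2`, and the usbip binary path are all placeholders for your own values:

```ini
# /etc/systemd/system/usbip-attach.service  (sketch; names are placeholders)
[Unit]
Description=Attach remote Z-Wave stick over usbip
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe vhci-hcd
ExecStart=/usr/bin/usbip attach -r zwavepi -b 1-1.2
ExecStop=/usr/bin/usbip detach -p 0

[Install]
WantedBy=multi-user.target
```

Then `systemctl restart usbip-attach` after an rPi reboot is a lot less drastic than rebooting the VM, though it does not remove the ordering dependency itself.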

So… is this better than just running everything on the rPi (or, maybe more to the point, on a faster NUC)? Maybe. I love Hyper-V snapshots (for rolling back bad changes) and how fast things run. But I hate depending on things that are fragile.