Hi Rick, not sure how I can see that in the logs. I do not know how to look that up in Linux…
Some more info: I investigated the Home Assistant host and the hypervisor host, and discovered some weird errors. I googled them and found some hits but no solution, and all on ESXi 6.5 and 7; I have 6.7.
Preventively, I rebooted the ESXi host, which had been running for 364 days :-).
But there are still errors in Home Assistant.
The errors below were in ESXi before the reboot. They are now gone from ESXi.
2021-12-06T11:16:42.685Z cpu3:2097577)WARNING: vmkusb: transfer error [18]: endpt = 0x82, nframes = 1, timeout = 0, interval = 0, max_packet_size = 32, max_frame_size = 32: Failure
(the same vmkusb warning repeats roughly 55 more times within 25 ms, through 11:16:42.710Z)
I have this host (6.7u3) running for over 2 years now with USB passed through all the time:
[root@localhost:~] lsusb
Bus 001 Device 006: ID 0781:5583 SanDisk Corp. Ultra Fit
Bus 001 Device 005: ID 8087:0aaa Intel Corp.
Bus 001 Device 004: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 001 Device 003: ID 0658:0200 Sigma Designs, Inc. Aeotec Z-Stick Gen5 (ZW090) - UZB
Bus 001 Device 002: ID 1cf1:0030 Dresden Elektronik
Bus 001 Device 001: ID 0e0f:8003 VMware, Inc. Root Hub
Never had issues… even with OZW Beta. Until now, with Z-Wave JS, I see the disconnects…
I do see some topics about 6.5:
https://kb.vmware.com/s/article/2147650
But none of them clearly mention 6.7, only 6.5
I do not “dare” to execute the command esxcli system module set -m=vmkusb -e=FALSE
on my 6.7 machine, since I might get into worse trouble.
Any advice?
What’s also worth adding: I have no errors at all for my deCONZ stick…
Did you install the update yet? There was a bug in the old version. I posted about it here and it could be the cause of your errors. I would update and check again if you haven’t already.
Yes, I did that directly.
The logs appear to be from the addon. What do the logs, set at “silly” level, from the Z-Wave JS server itself show? They can give more of a clue as to what’s going on. To get them, go through these screens/steps:
Someone hinted that the USB is disconnecting, but I don’t think it’s that. These are what logs look like for a disconnecting USB, and yours don’t look at all like that.
Yours seem more like a routing issue: the controller can’t get a command through to a specific node and keeps retrying. You mentioned there is sometimes a lag, so maybe it fails the first time and succeeds on the retry? The silly-level Z-Wave JS logs should reveal more clues about exactly which command is failing to go through, and to which node(s). Try posting them here. It’s likely the developers will also need those on your GitHub issue to diagnose further.
Hi Tim, thanks for taking the time to help here!
Is the log of the addon different from the one you pointed to? I see the same output from “silly” in the addon. I think it is just a matter of selecting the right choice in the integration?
I have captured a part of the log this way. I tried to heal around 2 weeks ago, but that took 2 days and I stopped it…
Then I tried to heal again this morning; it took around 3 hours and I stopped it.
Has anyone ever had a “successful” heal?
I don’t use the addon myself (I use zwavejs2mqtt in Docker), but I think if you set them to “silly” in the addon configuration they’re probably the same. I know the ones generated on that integration configuration screen are the right ones for the Z-Wave JS server, though.
Rarely, and there was some discussion in another thread that the network heal logic might actually be flawed and could, at least temporarily, make issues worse:
Zwave device network stuck healing for over 24hrs - #20 by HeyImAlex
I heal individual problem devices instead of the whole network now, which works better for me. Nodes with log errors, or devices responding slowly, are the ones I heal.
I uploaded more logging to the GitHub issue.
Does anyone know how to interpret the logging, or where to find out how? What do CNTRLR, SERIAL, and DRIVER mean?
This seems to be such an event of mine:
2021-12-11T17:17:41.027Z SERIAL « [CAN] (0x18)
2021-12-11T17:17:41.028Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2021-12-11T17:17:41.130Z SERIAL » 0x0109001348022602250aa4 (11 bytes)
Take a look at the Z-Wave JS documentation, specifically for the “failed to send” error:
Failed to send the message after 3 attempts
If this happens, your network is either extremely busy at the moment or something is really wrong with the Z-Wave controller. Try waiting a bit, try removing and re-inserting the stick or try restarting your host.
The documentation overview also explains the difference between controller, driver, etc
I looked at your logs and there seems to be a lot of traffic in your network. Node 61 seems to cause the most issues. What is it, how far is it away, and have you noticed any issues with its performance? I’d try healing it individually (go to the device in the zwavejs integration and select “heal device”).
You might also be “stuck” in a previous heal attempt. Try removing the stick or restart the host as suggested by the documentation, and don’t do another whole network heal. Also, are you polling devices at all? If so try to decrease the frequency of polling. It’s also possible you have an automation stuck in a loop continuously calling a switch to be on/off. I’d check those too.
Update
I see they are working on this under a new issue, hopefully the developers can get to the bottom of it for you
Hi Tim, very much appreciated!!
Node 61 is a cover device… I must say it’s an old one, but I have more of them (8 in total).
I successfully healed node 61 now.
I have restarted a lot already (also with the recent 2021.12 update, an OS update, an ESXi reboot, etc.).
May I ask, how did you see that node 61 was the troublesome one?
And now you mention it…
Since they (all 8!) do not send back reliable instant status, I have an automation running to poll them all at the same time, every minute, via zwave_js.refresh_value.
I instantly disabled that now and will monitor…
Could this be the culprit? And if so, how could I overcome it? Not poll all 8 every minute at once, but stagger them 5 seconds apart, or…?
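If staggering does turn out to be the answer, one way is a single automation that refreshes the covers one after another with a short delay between calls instead of all in the same instant. This is only a sketch: the entity IDs are placeholders, and the interval and delay are assumptions to adjust to your setup:

```yaml
# Sketch: staggered polling instead of refreshing all covers at once.
# Entity IDs below are placeholders; replace with your actual covers.
automation:
  - alias: "Staggered cover polling"
    trigger:
      - platform: time_pattern
        minutes: "/5"          # e.g. every 5 minutes instead of every minute
    action:
      - service: zwave_js.refresh_value
        target:
          entity_id: cover.bedroom
      - delay: "00:00:05"      # give the network time to settle
      - service: zwave_js.refresh_value
        target:
          entity_id: cover.kitchen
      - delay: "00:00:05"
      # …repeat a service call + delay pair for each remaining cover
```

Spacing the calls out keeps the controller from queuing 8 refresh commands at once, which is exactly the burst of traffic the retries point at.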
I chimed in on the GitHub issue. It looks like you’re well on your way to solving this. I think the key to your issues is reducing traffic: avoid S0 security, decrease polling, and decrease the frequency of power management reports.
After a lot of the “failed to send, 1/3 attempts” messages, the next message was something to node 61, so I figured that was the one causing most of the issues. I usually just look for log entries with the same node ID over and over. Those are “chatty” nodes that usually generate too much traffic.
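To spot such chatty nodes mechanically rather than by eye, you can count how often each node ID occurs in the log. This assumes you have saved the driver log to a file; zwavejs.log below is a placeholder path:

```shell
# Count log lines per node, most frequent ("chattiest") node first.
# zwavejs.log is a placeholder; point this at your exported driver log.
grep -oE '\[Node [0-9]+\]' zwavejs.log | sort | uniq -c | sort -rn | head
```

The node IDs at the top of the output are the ones generating the most traffic and the first candidates for an individual heal.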
I also mentioned that I think you could start a new thread to ask for the best way to do the automations, so the polling is spread out and not all at once. I’m not great with Home Assistant automations (I use Node-RED primarily), so starting a new thread will likely get you a very good response on that.
This smells like ESXi vmkusb issues, give this a shot.
You need to disable the new vmkusb module and load the older legacy USB drivers with this command:
esxcli system module set -m=vmkusb -e=FALSE
Execute this on the ESXi host via SSH.
Hi, thanks. Since I stopped polling, that error is totally gone… I think (and thought) this had nothing to do with ESXi, luckily!
Could just be a symptom of the vmkusb driver.
Hi, all. Although I do not have issues with things not “working”, I do have a lot of issues with delays on my Z-Wave network. Not much of a problem yet, but it might become something blocking in the future.
I am now working on my panic mode, which turns on/off all lights, and when mixing all my WiFi/Z-Wave/Zigbee lights in groups, Z-Wave is my delaying factor (Node-RED waits for feedback before proceeding).
While I switch all lights on/off, I get these errors.
At the top there is “normal” behavior, and at around 7:51 I start “all lights on”. Eventually all lights respond and everything comes right, but it takes seconds… Where and what can the issue be?
2022-01-16T07:20:12.800Z CNTRLR [Node 057] The node is now asleep.
2022-01-16T07:24:01.127Z CNTRLR « [Node 020] received wakeup notification
2022-01-16T07:24:01.128Z CNTRLR [Node 020] The node is now awake.
2022-01-16T07:24:02.224Z CNTRLR » [Node 020] Sending node back to sleep...
2022-01-16T07:24:02.298Z CNTRLR [Node 020] The node is now asleep.
2022-01-16T07:31:05.008Z DRIVER Dropping message with invalid payload
2022-01-16T07:31:21.011Z DRIVER Dropping message with invalid payload
2022-01-16T07:35:05.153Z CNTRLR « [Node 021] received wakeup notification
2022-01-16T07:35:05.154Z CNTRLR [Node 021] The node is now awake.
2022-01-16T07:35:06.154Z CNTRLR » [Node 021] Sending node back to sleep...
2022-01-16T07:35:06.206Z CNTRLR [Node 021] The node is now asleep.
2022-01-16T07:46:28.926Z CNTRLR « [Node 012] received wakeup notification
2022-01-16T07:48:00.653Z CNTRLR [Node 054] Mapping unsolicited report from root device to endpoint #1
2022-01-16T07:51:20.322Z CNTRLR [Node 049] did not respond after 1/3 attempts. Scheduling next try in 500 ms.
2022-01-16T07:51:20.835Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:51:20.968Z CNTRLR Failed to execute controller command after 2/3 attempts. Scheduling next try in 1100 ms.
2022-01-16T07:51:27.757Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:51:29.720Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:51:30.788Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:51:33.206Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:51:33.412Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:54:12.174Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:54:13.006Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:54:13.848Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:54:18.444Z CNTRLR « [Node 026] Received update for a Supervision session
2022-01-16T07:54:27.158Z CNTRLR [Node 066] Timed out while waiting for a response from the node (ZW0201)
2022-01-16T07:54:52.522Z CNTRLR [Node 054] Mapping unsolicited report from root device to endpoint #1
2022-01-16T07:58:09.517Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:58:11.181Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:58:11.459Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:58:12.421Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T07:59:51.269Z DRIVER Dropping message because it could not be deserialized: The command class Network Management Basic Node is not implemented (ZW0303)
2022-01-16T07:59:51.306Z DRIVER Dropping message because it could not be deserialized: The command class Network Management Basic Node is not implemented (ZW0303)
2022-01-16T07:59:51.462Z DRIVER Dropping message because it could not be deserialized: The command class Network Management Basic Node is not implemented (ZW0303)
2022-01-16T07:59:51.830Z DRIVER Dropping message because it could not be deserialized: The command class Network Management Basic Node is not implemented (ZW0303)
2022-01-16T07:59:51.852Z DRIVER Dropping message because it could not be deserialized: The command class Network Management Basic Node is not implemented (ZW0303)
2022-01-16T07:59:51.942Z DRIVER Dropping message because it could not be deserialized: The command class Network Management Basic Node is not implemented (ZW0303)
2022-01-16T07:59:52.060Z DRIVER Dropping message because it could not be deserialized: The command class Network Management Basic Node is not implemented (ZW0303)
2022-01-16T07:59:52.239Z DRIVER Dropping message because it could not be deserialized: The command class Network Management Basic Node is not implemented (ZW0303)
2022-01-16T08:00:00.634Z CNTRLR [Node 049] did not respond after 1/3 attempts. Scheduling next try in 500 ms.
2022-01-16T08:00:01.849Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:00:03.098Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:00:03.679Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:00:05.173Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:00:07.394Z CNTRLR « [Node 026] Received update for a Supervision session
2022-01-16T08:00:08.262Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:00:40.858Z CNTRLR [Node 063] Mapping unsolicited report from root device to endpoint #1
2022-01-16T08:02:51.656Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:02:59.006Z DRIVER Dropping message because it could not be deserialized: MultiChannelCCCapabilityGet: deserialization not implemented (ZW0320)
2022-01-16T08:02:59.066Z DRIVER Dropping message because it could not be deserialized: MultiChannelCCCapabilityGet: deserialization not implemented (ZW0320)
2022-01-16T08:03:12.937Z CNTRLR [Node 049] did not respond after 1/3 attempts. Scheduling next try in 500 ms.
2022-01-16T08:03:13.838Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:03:14.012Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:03:15.639Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:03:16.456Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:03:16.859Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:03:17.512Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:03:20.030Z CNTRLR « [Node 026] Received update for a Supervision session
2022-01-16T08:03:21.498Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:03:22.459Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-16T08:03:40.446Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
EDIT: good to mention, I replaced the “non status updating” cover controllers. I also stopped polling the devices, as described above. So that cannot be the issue anymore…
Hi all, sorry to chime in again. I still have these vague “major” issues. When firing multiple commands I have a huge lag, and battery-powered devices sending their status just do not get seen by the controller. I also still regularly get these messages, in between the traffic below:
2022-01-25T07:09:35.937Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-25T07:09:36.044Z CNTRLR Failed to execute controller command after 2/3 attempts. Scheduling next try in 1100 ms.
2022-01-25T04:35:59.592Z CNTRLR « [Node 057] received wakeup notification
2022-01-25T04:35:59.593Z CNTRLR [Node 057] The node is now awake.
2022-01-25T04:36:00.593Z CNTRLR » [Node 057] Sending node back to sleep...
2022-01-25T04:36:00.659Z CNTRLR [Node 057] The node is now asleep.
2022-01-25T04:42:52.922Z CNTRLR « [Node 012] received wakeup notification
2022-01-25T04:52:23.237Z CNTRLR « [Node 030] received wakeup notification
2022-01-25T04:52:23.238Z CNTRLR [Node 030] The node is now awake.
2022-01-25T04:52:24.238Z CNTRLR » [Node 030] Sending node back to sleep...
2022-01-25T04:52:24.284Z CNTRLR [Node 030] The node is now asleep.
2022-01-25T04:58:37.151Z CNTRLR « [Node 029] received wakeup notification
2022-01-25T04:58:37.152Z CNTRLR [Node 029] The node is now awake.
2022-01-25T04:58:38.153Z CNTRLR » [Node 029] Sending node back to sleep...
2022-01-25T04:58:38.213Z CNTRLR [Node 029] The node is now asleep.
2022-01-25T05:02:38.029Z CNTRLR « [Node 103] received wakeup notification
2022-01-25T05:02:38.030Z CNTRLR [Node 103] The node is now awake.
2022-01-25T05:02:39.031Z CNTRLR » [Node 103] Sending node back to sleep...
2022-01-25T05:02:39.084Z CNTRLR [Node 103] The node is now asleep.
2022-01-25T05:11:24.385Z CNTRLR [Node 062] Timed out while waiting for a response from the node (ZW0201)
2022-01-25T05:14:52.546Z CNTRLR « [Node 104] received wakeup notification
2022-01-25T05:14:52.546Z CNTRLR [Node 104] The node is now awake.
2022-01-25T05:14:53.547Z CNTRLR » [Node 104] Sending node back to sleep...
2022-01-25T05:14:53.586Z CNTRLR [Node 104] The node is now asleep.
2022-01-25T05:29:41.982Z CNTRLR « [Node 014] received wakeup notification
2022-01-25T05:29:41.982Z CNTRLR [Node 014] The node is now awake.
2022-01-25T05:29:42.983Z CNTRLR » [Node 014] Sending node back to sleep...
2022-01-25T05:29:43.081Z CNTRLR [Node 014] The node is now asleep.
2022-01-25T05:36:21.260Z CNTRLR « [Node 057] received wakeup notification
2022-01-25T05:36:21.261Z CNTRLR [Node 057] The node is now awake.
2022-01-25T05:36:22.262Z CNTRLR » [Node 057] Sending node back to sleep...
2022-01-25T05:36:22.318Z CNTRLR [Node 057] The node is now asleep.
2022-01-25T05:42:51.911Z CNTRLR « [Node 012] received wakeup notification
2022-01-25T05:57:25.040Z CNTRLR [Node 066] Mapping unsolicited report from root device to endpoint #1
2022-01-25T05:58:13.218Z CNTRLR [Node 066] Mapping unsolicited report from root device to endpoint #1
2022-01-25T06:00:52.947Z CNTRLR « [Node 021] received wakeup notification
2022-01-25T06:00:52.948Z CNTRLR [Node 021] The node is now awake.
2022-01-25T06:00:53.949Z CNTRLR » [Node 021] Sending node back to sleep...
2022-01-25T06:00:54.009Z CNTRLR [Node 021] The node is now asleep.
2022-01-25T06:09:11.509Z CNTRLR « [Node 103] received wakeup notification
2022-01-25T06:09:11.509Z CNTRLR [Node 103] The node is now awake.
2022-01-25T06:09:12.511Z CNTRLR » [Node 103] Sending node back to sleep...
2022-01-25T06:09:12.577Z CNTRLR [Node 103] The node is now asleep.
2022-01-25T06:15:51.732Z CNTRLR « [Node 104] received wakeup notification
2022-01-25T06:15:51.732Z CNTRLR [Node 104] The node is now awake.
2022-01-25T06:15:52.733Z CNTRLR » [Node 104] Sending node back to sleep...
2022-01-25T06:15:52.780Z CNTRLR [Node 104] The node is now asleep.
2022-01-25T06:22:39.938Z CNTRLR « [Node 052] received wakeup notification
2022-01-25T06:22:39.939Z CNTRLR [Node 052] The node is now awake.
2022-01-25T06:22:40.940Z CNTRLR » [Node 052] Sending node back to sleep...
2022-01-25T06:22:40.988Z CNTRLR [Node 052] The node is now asleep.
2022-01-25T06:26:47.708Z CNTRLR [Node 066] Mapping unsolicited report from root device to endpoint #1
2022-01-25T06:27:28.227Z CNTRLR [Node 066] Mapping unsolicited report from root device to endpoint #1
2022-01-25T06:30:51.112Z CNTRLR [Node 063] Mapping unsolicited report from root device to endpoint #1
2022-01-25T06:32:17.922Z CNTRLR [Node 054] Mapping unsolicited report from root device to endpoint #1
2022-01-25T06:34:48.788Z CNTRLR « [Node 039] received wakeup notification
2022-01-25T06:34:48.789Z CNTRLR [Node 039] The node is now awake.
2022-01-25T06:34:49.791Z CNTRLR » [Node 039] Sending node back to sleep...
2022-01-25T06:34:49.891Z CNTRLR [Node 039] The node is now asleep.
2022-01-25T06:36:42.098Z CNTRLR « [Node 057] received wakeup notification
2022-01-25T06:36:42.099Z CNTRLR [Node 057] The node is now awake.
2022-01-25T06:36:43.100Z CNTRLR » [Node 057] Sending node back to sleep...
2022-01-25T06:36:43.152Z CNTRLR [Node 057] The node is now asleep.
2022-01-25T06:38:39.861Z CNTRLR « [Node 044] received wakeup notification
2022-01-25T06:38:39.861Z CNTRLR [Node 044] The node is now awake.
2022-01-25T06:38:40.861Z CNTRLR » [Node 044] Sending node back to sleep...
2022-01-25T06:38:40.902Z CNTRLR [Node 044] The node is now asleep.
2022-01-25T06:42:50.897Z CNTRLR « [Node 012] received wakeup notification
2022-01-25T06:43:34.427Z CNTRLR [Node 048] Mapping unsolicited report from root device to endpoint #1
2022-01-25T06:43:56.768Z CNTRLR [Node 048] Mapping unsolicited report from root device to endpoint #1
2022-01-25T07:03:26.456Z CNTRLR [Node 054] Mapping unsolicited report from root device to endpoint #1
2022-01-25T07:07:06.717Z CNTRLR [Node 048] Mapping unsolicited report from root device to endpoint #1
2022-01-25T07:08:02.157Z DRIVER Dropping message with invalid payload
2022-01-25T07:08:03.170Z DRIVER Dropping message with invalid payload
2022-01-25T07:08:18.143Z DRIVER Dropping message with invalid payload
2022-01-25T07:09:35.937Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.
2022-01-25T07:09:36.044Z CNTRLR Failed to execute controller command after 2/3 attempts. Scheduling next try in 1100 ms.
2022-01-25T07:09:47.598Z CNTRLR [Node 048] Mapping unsolicited report from root device to endpoint #1
2022-01-25T07:09:47.641Z CNTRLR [Node 048] Mapping unsolicited report from root device to endpoint #1
2022-01-25T07:09:47.807Z CNTRLR [Node 048] Mapping unsolicited report from root device to endpoint #1
2022-01-25T07:10:04.056Z CNTRLR [Node 021] treating BasicCC::Set as a report
2022-01-25T07:15:46.987Z CNTRLR « [Node 103] received wakeup notification
2022-01-25T07:15:46.987Z CNTRLR [Node 103] The node is now awake.
2022-01-25T07:15:47.987Z CNTRLR » [Node 103] Sending node back to sleep...
2022-01-25T07:15:48.062Z CNTRLR [Node 103] The node is now asleep.
2022-01-25T07:16:50.879Z CNTRLR « [Node 104] received wakeup notification
2022-01-25T07:16:50.879Z CNTRLR [Node 104] The node is now awake.
2022-01-25T07:16:51.881Z CNTRLR » [Node 104] Sending node back to sleep...
2022-01-25T07:16:51.939Z CNTRLR [Node 104] The node is now asleep.
2022-01-25T07:20:23.952Z CNTRLR [Node 021] treating BasicCC::Set as a report
2022-01-25T07:20:35.658Z CNTRLR [Node 048] Timed out while waiting for a response from the node (ZW0201)
2022-01-25T07:20:42.251Z CNTRLR [Node 021] treating BasicCC::Set as a report
I recall this being a bug that was fixed. What version of Z-Wave JS are you on?
It has been there for a while already (in previous versions too).