Running Z-Wave JS: Driver Version 12.0.3, Server Version 1.32.1
on Home Assistant 2023.8.3, Supervisor 2023.11.6, Operating System 10.3, Frontend 20230802.1 (latest)
Installed 7 brand new Zooz Zen71 & Zen72 switches. They will randomly not respond to automations or dashboard control and show in a 'Dead' state. Manually toggling the switch reactivates it, but it's very frustrating.
The other issue is latency. Sometimes it takes 15-30 seconds from the scene toggling for the switch to respond. I had a lot more switches in my prior house with no issues at all. Any thoughts? It's driving me nutty!!
I would definitely not be comfortable using this version of Z-Wave JS. v12 added a significant behavior change to how dead nodes and controller lockups are handled, and it revealed a ton of issues. There have been a number of follow-up fixes. The current version is v12.4.1 and is supported by the Core add-on. https://zwave-js.github.io/which-version/
In this new house, they are all 800s. Old place had a mix.
I've read about some issues with 11.2 and Z-Wave. I tend to stay at what works for quite a while.
The HUSBZB-1 dongle is 10 feet from the closest Zooz switch. I do have issues with the Zigbee network as well.
I don't know if it matters, but I did take the setup from the old house and restored it on new hardware in the new house. Deleted all the old switches. Starting to think maybe I should trash the whole thing and re-set it up from scratch?
@freshcoast,
Sadly, the upgrade didn't help. Yes, I have the Zigbee/Z-Wave dongle on an extension cord plugged into a powered USB hub (so theoretically USB 2, not 3), but I'm still seeing issues. Switches go offline after receiving a command from an automation or scene.
Added 5 more switches this morning, all 800 series Zooz, and am having issues getting them to properly finish inclusion. They bounce from Alive to Dead, and only 1, after a re-interview, stays Alive and Ready so it can be controlled.
As I mentioned, I did carry this configuration from one house to a new one. Should I just delete Z-Wave JS and reinstall to get a fresh DB? I assume I will have to manually delete all the existing Z-Wave devices… I am unable to get Exclusion to successfully remove those new switches that are reporting not ready.
I’ve never had great success with the backup/restore option. As much of a pain as it is resetting everything, excluding, and then including, that’s what I would try next.
Did that after I moved the dongle even further from things like lights; it got a little better. I'm noticing the 3 Zen72 switches I was having trouble with earlier all report RTTs in the >2200ms range! Some are reporting Z-Wave Plus version 2, others say none but have S2 Authenticated listed (in all cases, it asked for the PIN). The two Zen77s I added finished inclusion and the interview in 45 seconds and report an RTT of 24.8ms, yet they are much further away.
I wish I could 'see' the mesh like I can with Zigbee. I fixed the Zigbee network when I discovered a USB repeater had died. Removed it from the Zigbee mesh and immediately had a better response.
Z-Wave JS UI has a network map; I recently migrated to that as a control panel using its web server. You can still do most things from the Z-Wave JS integration as well. It was a pain including everything again, but well worth it IMO.
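If you want that web UI in the HA sidebar, one option is an iframe panel in configuration.yaml. This is only a minimal sketch: it assumes Z-Wave JS UI's web server is reachable at homeassistant.local on its default port 8091, so adjust the URL to your own setup.

panel_iframe:
  zwave_js_ui:
    title: Z-Wave JS UI
    icon: mdi:z-wave
    # Assumed host/port; point this at wherever your Z-Wave JS UI web server actually runs
    url: http://homeassistant.local:8091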
Another thing that was pretty much a game changer is this dead node revive automation someone on here wrote. I forget who wrote it, but kudos to them:
alias: Maintenance | Z-Wave | Dead Node Revive
description: Try to revive Z-Wave nodes that are shown as dead by the controller
trigger:
  - alias: When there are Z-Wave nodes shown as dead for 10 seconds
    platform: template
    value_template: >-
      {{ expand(integration_entities('Z-Wave JS')) | selectattr("entity_id",
      "search", "node_status") | selectattr('state', 'in', 'dead, unavailable,
      unknown') | map(attribute='entity_id') | list | length() > 0 }}
    for:
      hours: 0
      minutes: 0
      seconds: 10
  - alias: When it's the top of the hour, trigger this automation
    platform: time_pattern
    hours: /1
condition:
  - alias: Check that there are Z-Wave nodes listed as dead
    condition: template
    value_template: >-
      {{ expand(integration_entities('Z-Wave JS')) | selectattr("entity_id",
      "search", "node_status") | selectattr('state', 'in', 'dead, unavailable,
      unknown') | map(attribute='entity_id') | list | length() > 0 }}
action:
  - alias: Repeat the action of trying to ping each dead node in order to revive it
    repeat:
      for_each: >-
        {{ expand(integration_entities('Z-Wave JS')) | selectattr("entity_id",
        "search", "node_status") | selectattr('state', 'in', 'dead, unavailable,
        unknown') | map(attribute='entity_id') | list }}
      sequence:
        - alias: Press the Z-Wave ping button to wake the node
          service: button.press
          data: {}
          target:
            entity_id: >-
              {{ repeat.item | replace('sensor.','button.') |
              replace('_node_status','_ping') }}
        - alias: >-
            Wait 2 seconds in between each pinged node to prevent flooding of
            the network
          delay: "00:00:02"
  - alias: Wait 5 minutes in case this automation is called repeatedly
    delay: "00:05:00"
mode: single
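Before enabling it, you can sanity-check the filter chain it relies on by pasting the template below into Developer Tools → Template; it should list the node_status entities currently reporting dead/unavailable/unknown (an empty list means there is nothing to revive). Note the automation also assumes the Z-Wave JS integration's usual entity naming, where each node's ping button id is its node_status sensor id with sensor. swapped for button. and _node_status swapped for _ping; the trailing 5-minute delay plus mode: single keeps the template trigger from re-firing the pings back to back.

{{ expand(integration_entities('Z-Wave JS'))
   | selectattr('entity_id', 'search', 'node_status')
   | selectattr('state', 'in', 'dead, unavailable, unknown')
   | map(attribute='entity_id') | list }}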
In case anyone finds this thread, I can share the 'solution'. I contacted Zooz, and I must say the support was great. They suggested that the 3.10 firmware the Zen71 & Zen72 switches ship with has a bug that causes this instability. I updated them to 3.30, un-enrolled them, performed a factory reset, and re-enrolled, and the RTT times as well as the number of dropped packets went back to 'normal'. I also am not seeing switches go Unavailable or Dead randomly anymore.
Still not sure why you have to go through the remove, reset, re-enroll, unless that is to clear out the HA Z-Wave DB entries.
You probably don't have to, as Z-Wave JS will re-interview the switch after the upgrade. I upgraded a Zen76 to 3.30 without doing that, with no issues. That said, as a help desk they want to give you a process that has the highest probability of success, and I'd assume they have tested that process to verify it works.