I have the Zooz ZST10 stick and have had almost no problems aside from the known issue that kicked up around the soft reset a few months ago.
I’m sure it will get figured out over time and the gen7 will be as rock solid as the gen5 series. But it’s well worth the $50 for the gen5+ stick in the meantime to not have to tinker with it anymore. Transferring was super easy using the npx nvmedit convert tool. My plan is to use the gen7 as a backup in case the gen5+ fails.
It’s not the $50 that bothers me, it’s having to re-interview all the devices.
Just wanted to add that I am not using a 700 stick, and have been having the same issues. I am using a HUSBZB-1 stick and have been for years. I recently tried to add a Zooz Q sensor and 2 additional Zooz Zen77 dimmers, but none of them would stay alive. My old Zen77 and GE switches never go dead, they were installed more than a year ago and never had any issues. The new dimmers were about 40 feet from my hub and my old switches. I could ping them back alive, but it kept screwing up my automations. When the Q sensor was working, it was one of my least reliable and slowest motion sensors.
I finally just gave up and replaced the 2 new dimmers with the inovelli blue switches. Zigbee has been rock solid for me.
This sounds like a few issues that might not be related. Note that Z-Wave is relatively low bandwidth, so that Zooz Q sensor might have been too chatty. Also, 40 ft is pretty far for the network, so you might have just had a distance problem, even with the mesh.
In this case the 700/800 series might be a decent upgrade for you, and make sure to pair using S2 security (the DSK) or SmartStart. If you have any gen1 Z-Wave devices, you may also want to reconsider their placement, or retire/upgrade those devices.
Here’s a thread for a Blueprint to ping unresponsive nodes on zwavejs2mqtt
Anyone have the ability to convert this to ZwaveJS?
It also works for zwave js
Could you briefly describe how to transfer everything from a 700 to a 500 stick? I just moved 66 Z-Wave devices from my old Fibaro HC2 controller to an Aeotec 700, and Z-Wave has been a disaster since: dead nodes and very slow response.
I understood from your message that I can transfer the 700 info to a (hopefully more stable) 500 stick. Do you have to transfer nodes.json and nvm (and convert nvm from 700 to 500)?
Transferring from my HC2 to the 700 took me a full day; I want to avoid repeating that if I switch to a 500 stick.
Is there a way to expand the sensor to actually show the dead devices? Maybe using something like auto-entities?
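One way to do this, assuming you have the auto-entities custom card installed from HACS — a sketch only; the `*node_status*` wildcard assumes the default Z-Wave JS entity naming on your system:

```yaml
# Lovelace card sketch using the auto-entities custom card (HACS).
# Assumes your Z-Wave JS node status entities end in "node_status".
type: custom:auto-entities
card:
  type: entities
  title: Dead Z-Wave Devices
filter:
  include:
    # List every node status entity currently reporting "dead"
    - entity_id: "*node_status*"
      state: dead
# Hide the card entirely when nothing is dead
show_empty: false
```

You could broaden the filter to also match `unavailable` or `unknown` states with additional include entries.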
This no longer works, because zwave-ping.yaml is not a valid name. When you run Check Configuration it shows an “Invalid slug” error and suggests renaming the file to zwave_ping.yaml, which does work.
Hello,
I’m having the same issues. Is there a final, reviewed/tested solution for this bug?
Thanks!
What or which bug are you referring to?
Some of my Z-Wave devices go into the Dead status and I have to ping them manually. I’m looking for a script that can monitor for dead devices and, when one is detected, ping it to bring it back online.
thanks!
The problem is the chip used in the Aeotec gen-7 stick.
The manufacturer is Silicon Labs, and even the latest release notes for their development kit (from which Aeotec derives the firmware for the gen-7) mention an unresolved issue with the software, i.e. the software jams from time to time, making the device unresponsive.
This bug has been present for some time now, but has never been resolved.
Best solution if you want to keep zwave: use a gen5+ stick, no problems whatsoever.
Gotcha - just wanted to make sure you weren’t referring to something with the automation script. For the root cause, the other reply answered that but don’t worry - when it’s fixed, I expect the cheers from the masses will be heard across the globe. But really, just follow this thread and you’ll be kept in the loop.
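In the meantime, here is a minimal sketch of the pattern this thread is built on: press a node’s ping button when its status sensor reports dead. The entity names below are hypothetical — substitute your own node status sensor and ping button:

```yaml
# Minimal single-device example (sketch).
# sensor.kitchen_dimmer_node_status and button.kitchen_dimmer_ping
# are hypothetical entity names; replace them with your own.
automation:
  - alias: "Ping dead Z-Wave node (example)"
    trigger:
      # Fires when the node status sensor changes to "dead"
      - platform: state
        entity_id: sensor.kitchen_dimmer_node_status
        to: "dead"
    action:
      # Pressing the ping button asks Z-Wave JS to contact the node again
      - service: button.press
        target:
          entity_id: button.kitchen_dimmer_ping
```

The full package posted later in this thread generalizes this to every Z-Wave JS node at once.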
Hey, I have that one and still two of my Qubino devices randomly switch to the dead state.
So it seems I should use that automation script, if possible.
Any ideas here? Thanks!
Sorry - what are you asking?
Interesting post!
Apart from the discussion about ZwaveJS and 700 vs 500 series: I have an Aeotec 500-series (gen5+) Z-Wave stick and it has been running flawlessly for over 5 years with 70 devices (I use only Z-Wave Plus devices). However, there does seem to be a difference in quality between devices. Some drop out about once a year (like my Heatit thermostat), and one drops out every month (an ECO-DIM dimmer). Maybe that’s because of interference, but I suspect the ECO-DIM is just not that solid. Either way, devices occasionally stop responding or are marked dead. That seems to be a generic problem with the technology: it can fail sometimes.
I use this package to watch for that and automatically try to repair it. The package contains the sensor, a timer, and a script, all based on the work that nice people have shared earlier in this topic. However, I have modified it to run only when necessary: the script gets triggered for the first time when the sensor’s state becomes anything other than 0. After running (and thereby pinging the dead nodes), it starts a timer and ends; since the sensor remains > 0, it will only be triggered again when the timer expires. This way the script is not left running in the meantime, waiting on a delay. It also writes a warning to the system log whenever nodes are pinged.
system_dead_node_handling:
  # Package based on https://community.home-assistant.io/t/automate-zwavejs-ping-dead-nodes/374307
  # 2024-05 PDW
  template:
    - sensor:
        - name: "Dead ZWave Devices"
          unique_id: dead_zwave_devices
          unit_of_measurement: entities
          state: >
            {% if state_attr('sensor.dead_zwave_devices','entity_id') != none %}
              {{ state_attr('sensor.dead_zwave_devices','entity_id') | count }}
            {% else %}
              {{ 0 }}
            {% endif %}
          attributes:
            # List of the ping buttons belonging to dead/unavailable nodes
            entity_id: >
              {{
                expand(integration_entities('Z-Wave JS'))
                | selectattr("entity_id", "search", "node_status")
                | selectattr('state', 'in', 'dead, unavailable, unknown')
                | map(attribute="object_id")
                | map('regex_replace', find='(.*)_node_status', replace='button.\\1_ping', ignorecase=False)
                | list
              }}

  timer:
    dead_node_ping:
      icon: mdi:robot-dead-outline

  automation:
    - id: '029098239872316'
      alias: Ping Dead ZWave Devices
      mode: single
      max_exceeded: silent
      trigger:
        # Initial trigger when the sensor changes away from 0
        - platform: state
          entity_id: sensor.dead_zwave_devices
          from: '0'
        # The timer re-triggers the automation as long as the problem exists
        - platform: event
          event_type: timer.finished
          event_data:
            entity_id: timer.dead_node_ping
      action:
        # Abort if the problem has been solved in the meantime
        - condition: template
          value_template: '{{ states("sensor.dead_zwave_devices") | int(0) > 0 }}'
        # Press the ping button of every dead node (the attribute holds the list)
        - service: button.press
          target:
            entity_id: '{{ state_attr("sensor.dead_zwave_devices", "entity_id") }}'
        # Add a notice to the system log (optional)
        - service: system_log.write
          data:
            message: "ZWave dead node(s) detected and pinged: {{ state_attr('sensor.dead_zwave_devices','entity_id') }}"
            level: warning
        # Keep trying, but use a timer instead of a delay so the
        # automation is not left running between attempts
        - service: timer.start
          target:
            entity_id: timer.dead_node_ping
          data:
            # One minute between attempts (change as you like)
            duration: "00:01:00"
Let me know what you think of it…
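If you want to preview which ping buttons the sensor will target before installing the package, you can paste the attribute template into Developer Tools → Template and inspect the result:

```jinja2
{# Preview the list of ping buttons for currently dead/unavailable nodes #}
{{ expand(integration_entities('Z-Wave JS'))
   | selectattr("entity_id", "search", "node_status")
   | selectattr('state', 'in', 'dead, unavailable, unknown')
   | map(attribute="object_id")
   | map('regex_replace', find='(.*)_node_status', replace='button.\\1_ping')
   | list }}
```

With no dead nodes it should render an empty list (`[]`), which corresponds to the sensor reading 0.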