I'm seeing the exact same issue using the ZAC93 with a variety of switches (GE, Enbrighten, Zooz). I get dead nodes every few days, and pinging brings them back.
Pinging everything once overnight takes care of most problems. I also have an "Alexa, run ping" routine, and my automations that fail also ping.
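For anyone curious, here is a minimal sketch of that nightly sweep, assuming your Z-Wave JS ping buttons follow the usual button.<device>_ping naming (the entity IDs below are placeholders for your own devices):

# Hedged sketch: press every ping button once a night.
# The entity IDs are placeholders; list your own button.*_ping entities.
alias: Nightly ZWave Ping
trigger:
  - platform: time
    at: "03:00:00"
action:
  - service: button.press
    target:
      entity_id:
        - button.kitchen_switch_ping   # placeholder
        - button.hallway_dimmer_ping   # placeholder
mode: single

The "Alexa, run ping" part is just a script exposed to Alexa that triggers the same thing.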
I agree that something Z-Wave JS-specific is the issue. I have a ZEN32 that goes unavailable several times a week. This never happened with HomeSeer; I suspect HomeSeer implements some Z-Wave management functions that check and wake up unavailable nodes. Why isn't Z-Wave JS periodically trying to wake up nodes it has marked as unavailable?
Can anyone provide me with an automation and the corresponding script? Devices are constantly going offline for me…
It would be great if someone could post something that works. The thread has become a bit confusing…
Asking again for help…
After making some tweaks to the automation, this is working for me now. The problem with the original automation was that it ran as soon as a device became unavailable and never re-ran. So, in my case, it would try to ping the device when it lost power but not retry once power was restored. This mod will ping the device(s) every 30 seconds.
Automation
alias: Ping Dead ZWave Devices
description: ""
trigger:
  - platform: state
    entity_id:
      - sensor.dead_zwave_devices
condition:
  - condition: template
    value_template: |
      {{ int(states.sensor.dead_zwave_devices.state) > 0 }}
action:
  - repeat:
      while:
        - condition: template
          value_template: |
            {{ int(states.sensor.dead_zwave_devices.state) > 0 }}
      sequence:
        - service: button.press
          target:
            entity_id: |
              {{ state_attr('sensor.dead_zwave_devices','entity_id') }}
        - delay:
            hours: 0
            minutes: 0
            seconds: 30
            milliseconds: 0
mode: single
Template
template:
  - sensor:
      - name: "Dead ZWave Devices"
        unique_id: dead_zwave_devices
        unit_of_measurement: entities
        state: >
          {% if state_attr('sensor.dead_zwave_devices','entity_id') != none %}
            {{ state_attr('sensor.dead_zwave_devices','entity_id') | count }}
          {% else %}
            {{ 0 }}
          {% endif %}
        attributes:
          entity_id: >
            {% set exclude_filter = ['sensor.700_series_based_controller_node_status'] %}
            {{
              expand(integration_entities('Z-Wave JS'))
              | rejectattr("entity_id", "in", exclude_filter)
              | selectattr("entity_id", "search", "node_status")
              | selectattr('state', 'in', 'dead, unavailable, unknown')
              | map(attribute="object_id")
              | map('regex_replace', find='(.*)_node_status', replace='button.\\1_ping', ignorecase=False)
              | list
            }}
Thanks, I'll test… fingers crossed.
Note that you may have to change the integration name in the template. For example:

expand(integration_entities('zwave_js'))
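If you're unsure which name your install expects, pasting both variants into Developer Tools → Template shows it quickly; whichever returns a non-zero count is the one to use:

{{ integration_entities('Z-Wave JS') | count }}
{{ integration_entities('zwave_js') | count }}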
After over a year, I can somewhat confidently say this isn't a Z-Wave 700 issue; it's a Home Assistant Z-Wave JS issue specifically.
I can't find a single mention of this type of dead-node issue on the automation platform I used for years prior. Also, every web search I do on the subject turns up someone running Home Assistant.
There are definitely some hardware/firmware issues with some products, but the “ping brings it back” dead node issue points only to HA, IMO.
It does appear that way.
Slightly unrelated to the problem plaguing this thread, but recently I dealt with (for the second time) an entirely unusable Z-Wave network. The kicker is that it would come back for a while if I restarted the Z-Wave JS add-on, so no touching the hardware. Something is not quite clicking, but admittedly I don't know how to point the devs in any direction.
I've tried numerous versions of this automation, but they all seem to work for a bit and then stop.
I just changed it all to the one @FriedCheese posted (after updating the integration name) and it worked once, but now it's not updating.
Using Dev Tools I can see the sensor should have updated, since the attribute template returns two items.
Then looking at the actual entity, it doesn't even show those items in the array.
The "template listens for" part looks good, as I can see the node status entities there. Unsure why it's not updating.
I went and pinged the bathroom light myself and the sensor still doesn't update (looks the same as above), but the template editor did update automatically:
*Note that the screenshot cut off the bottom; there is a long list of 'listens for' entries.
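In case it helps anyone else debug this, comparing the live filter against what the sensor stored (in Developer Tools → Template) shows whether the template or the sensor update is the problem. A sketch using the entity names from the post above:

{# What the filter computes right now #}
{{ expand(integration_entities('zwave_js'))
   | selectattr('entity_id', 'search', 'node_status')
   | selectattr('state', 'in', 'dead, unavailable, unknown')
   | map(attribute='entity_id') | list }}

{# What the sensor actually stored, and its current state #}
{{ state_attr('sensor.dead_zwave_devices', 'entity_id') }}
{{ states('sensor.dead_zwave_devices') }}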
Have you reloaded YAML since updating the template? Only thing I can think of. It’s worked reliably for me.
The integration name appears to work in either format.
Yeah, I restarted HA after making the change yesterday. I just now did a YAML reload (so not a full restart) and it noticed that one dead node and pinged it. Will keep an eye on it to see if I can find a pattern.
That's really odd that yours works with both integration names and mine doesn't.
So you see all your Z-Wave entities under 'listens for' here:
You’re not the only one. I also have a “hit or miss” firing on a dead node.
Has anyone looked carefully to identify the situation/time/event/circumstances at which nodes go dead?
In my case I've been able to pinpoint precisely the situation responsible for nodes going dead: it only happens when I issue a burst of Z-Wave commands, specifically turning off groups of devices, in my case all lights and switches (24 devices or so).
I have that group OFF as a shortcut that I issue nightly from a scene controller. In the past I was able to see that nodes had gone dead around the time the group OFF was issued. After setting up this automation, I can see that the dead-nodes entity is populated at precisely the same time.
Last night 24 dead nodes were reported (what!?), and in testing today, one each time I triggered the automation. It worked each time to bring them back online.
I had three Z-Wave devices in there that no longer exist at all. I deleted all of them yesterday…
Since then, no more dead nodes among the devices that are still in my network…
Let's see…
I know I've had issues with turning on a group that has all my light switches, but that usually just manifests as a few devices not turning on. It happened again today, but the template didn't pick up any dead nodes afterward.
The last platform I was on had a metering option specifically for this, which would space out the Z-Wave commands in a scenario like this, the idea being to prevent flooding the network. Maybe something like that is needed here.
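There's no built-in metering in HA that I know of, but a paced loop can approximate it. A hedged sketch, where light.all_lights is a placeholder group and the 250 ms gap is a guess to tune:

# Hedged sketch: turn devices off one at a time with a small gap,
# instead of one group call that floods the Z-Wave network.
- repeat:
    for_each: "{{ expand('light.all_lights') | map(attribute='entity_id') | list }}"
    sequence:
      - service: light.turn_off
        target:
          entity_id: "{{ repeat.item }}"
      - delay:
          milliseconds: 250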
This works great so far. Thank you. I did have to remove my combo stick node status entity so it wouldn’t continuously get called, which others mentioned above.
My question is: how can I track this (besides logging or sending a notification)? I want to be able to see a history in HA of when this happens. Do I need to set a sensor value? I'm new to this automation stuff.
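Edit: one simple option I found, if you're open to a helper: create a counter and increment it from the automation; the counter's history then shows every occurrence. A sketch, where counter.dead_zwave_events is a made-up name:

# configuration.yaml: define the counter helper (name is a placeholder)
counter:
  dead_zwave_events:
    name: Dead ZWave Events

# then add this to the automation's action list
- service: counter.increment
  target:
    entity_id: counter.dead_zwave_events

The template sensor itself also keeps state history, so its history graph already shows when the dead-node count rose above zero.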
These are working great for me, thanks for the updates! I had been struggling to get a reliable automation for this. I also added an item to the action sequence that sends a notification, in case anyone is interested (it's easy enough to adapt to other notification services; I just use Slack):
- repeat:
    for_each: "{{ state_attr('sensor.zwave_dead_devices','entity_id') }}"
    sequence:
      - service: notify.slack
        data_template:
          title: "ZWave Dead Node"
          message: "New ZWave dead node: {{ state_attr(repeat.item, 'friendly_name') | replace(' Ping', '') }}. Pinging now."
          target:
            - '#debug'
Note that the notification block has to come first, before the repeat action that pings the node(s); once the pings succeed, the sensor's attribute empties out and there is nothing left to report.
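Put together, the action list looks roughly like this (using the sensor name from my variant, sensor.zwave_dead_devices; swap in yours if it differs):

action:
  # 1) Notify first, while the attribute still lists the dead nodes
  - repeat:
      for_each: "{{ state_attr('sensor.zwave_dead_devices','entity_id') }}"
      sequence:
        - service: notify.slack
          data_template:
            title: "ZWave Dead Node"
            message: "New ZWave dead node: {{ state_attr(repeat.item, 'friendly_name') | replace(' Ping', '') }}. Pinging now."
            target:
              - '#debug'
  # 2) Then the ping loop from earlier in the thread
  - repeat:
      while:
        - condition: template
          value_template: |
            {{ int(states.sensor.zwave_dead_devices.state) > 0 }}
      sequence:
        - service: button.press
          target:
            entity_id: |
              {{ state_attr('sensor.zwave_dead_devices','entity_id') }}
        - delay:
            seconds: 30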