Automate ZwaveJS Ping Dead Nodes?

Here’s an updated version that calls button entities instead of relying on the zwave_js.ping service. Creating a synthetic sensor in order to trigger button presses seems weird, but it works (even with multiple entities). If anyone has a better way to do this, I’m all ears.

template:
  - sensor:
      - name: "Dead ZWave Devices"
        unique_id: dead_zwave_devices
        state: >-
          {% set exclude_filter = ['sensor.700_series_based_controller_node_status'] %}
          {{
            expand(integration_entities('Z-Wave JS'))
            | rejectattr("entity_id", "in", exclude_filter)
            | selectattr("entity_id", "search", "node_status")
            | selectattr('state', 'in', 'dead, unavailable, unknown')
            | map(attribute="object_id")
            | map('regex_replace', find='(.*)_node_status', replace='button.\\1_ping', ignorecase=False)
            | list
          }}
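For context, when nodes are down the sensor’s state renders as the string form of a list of ping buttons, for example (hypothetical node name):

['button.kitchen_dimmer_ping']

The automation below passes that state straight to button.press as the target, which is the part that “seems weird, but works”, even with several buttons in the list.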
alias: Ping Dead ZWave Nodes
description: ''
trigger:
  - platform: state
    entity_id:
      - sensor.dead_zwave_devices
condition:
  - condition: template
    value_template: '{{ states.sensor.dead_zwave_devices.state != "[]" }}'
action:
  - service: button.press
    target:
      entity_id: '{{ states.sensor.dead_zwave_devices.state }}'
mode: single

Here’s what happens after unplugging a device and waiting for it to be marked dead: the sensor updates with the node’s ping button and the automation fires to press it.


I’m sorry, still a bit of a noob, but where do you put this code in the YAML configs?

In the template: section of your configuration.yaml.
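A minimal placement sketch (the sensor body is the one from the first post; the Jinja comment marks where its state template goes):

# configuration.yaml
template:
  - sensor:
      - name: "Dead ZWave Devices"
        unique_id: dead_zwave_devices
        state: >-
          {# paste the Jinja pipeline from the first post here #}

The automation part goes in your automations.yaml (or under an automation: key) as usual.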

Nice and sweet!

However, you could simplify the template to:

{{
  states
  | selectattr("entity_id", "search", "node_status")
  | selectattr('state', 'in', 'dead, unavailable, unknown')
  | map(attribute="object_id")
  | map('regex_replace', find='(.*)_node_status', replace='button.\\1_ping', ignorecase=False)
  | list
}}

and you should use the

{{ states('sensor.dead_zwave_devices') }}

format in both template lines of the automation, to prevent startup issues with these templates.

Can you clarify the syntax you’re suggesting? I’m trying to get this set up, but am struggling with the YAML.

Here is what I have, but it’s obviously incorrect when trying to input your
{{ states('sensor.dead_zwave_devices') }} suggestion.

- alias: Ping Dead ZWave Nodes
  description: ''
  trigger:
  - platform: state
    entity_id: sensor.dead_zwave_devices
  condition:
  - condition: template
    value_template: '{{ states('sensor.dead_zwave_devices') != "[]" }}'
  action:
  - service: button.press
  target:
      entity_id: '{{ states('sensor.dead_zwave_devices') }}'
  mode: single

@Mariusthvdb - reaching out to see if you could comment on the proper syntax for what you proposed, or paste the whole automation. Thank you 🙂

Just got the ping, so sorry for the response lag…

You’re using single quotes both inside and outside the template, in the value_template as well as the entity_id line.

Use double outer quotes, or multi-line notation using >, to solve that syntax issue, e.g.:

value_template: "{{ states('sensor.dead_zwave_devices') != '[]' }}"

or my preferred style:

value_template: >
  {{ states('sensor.dead_zwave_devices') != '[]' }}

Same for the entity_id.
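Putting both fixes together, the automation from the first post would look like this (a sketch with only the quoting adjusted):

alias: Ping Dead ZWave Nodes
description: ''
trigger:
  - platform: state
    entity_id:
      - sensor.dead_zwave_devices
condition:
  - condition: template
    value_template: >
      {{ states('sensor.dead_zwave_devices') != '[]' }}
action:
  - service: button.press
    target:
      entity_id: >
        {{ states('sensor.dead_zwave_devices') }}
mode: single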

Can you please post the final iteration of all the parts needed to make this work? I am having the same issue with my ZST10-700: devices randomly going dead.

I agree it can be confusing when people only post part of their configuration, especially since YAML is such a non-descriptive format. I’ll try to elucidate.

I put all of the Z-Wave ping handling into a “package” file that I load from my configuration.yaml.

In the configuration.yaml, near the top:

group: !include groups.yaml
automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml

# added to the default includes to pull in packages
homeassistant:
  packages: !include_dir_named packages

And in the Home Assistant configuration directory (where the configuration.yaml file resides), create the directory “packages”. I put a file in that directory called “zwave-ping.yaml”, with the following contents:

template:
  - sensor:
      - name: "Dead ZWave Devices"
        unique_id: dead_zwave_devices
        unit_of_measurement: entities
        state: >
          {% if state_attr('sensor.dead_zwave_devices','entity_id') != none %}
            {{ state_attr('sensor.dead_zwave_devices','entity_id') | count }}
          {% else %}
            {{ 0 }}
          {% endif %}
        attributes:
          entity_id: >
            {% set exclude_filter = ['sensor.700_series_based_controller_node_status'] %}
            {{
              expand(integration_entities('Z-Wave JS') )
              | rejectattr("entity_id", "in", exclude_filter)
              | selectattr("entity_id", "search", "node_status")
              | selectattr('state', 'in', 'dead, unavailable, unknown')
              | map(attribute="object_id")
              | map('regex_replace', find='(.*)_node_status', replace='button.\\1_ping', ignorecase=False)
              | list
            }}

automation:
  - id: ping_dead_zwave_devices
    alias: Ping Dead ZWave Devices
    description: ''
    trigger:
      - platform: state
        entity_id:
          - sensor.dead_zwave_devices
    condition:
      - condition: template
        value_template: >
          {{ int(states.sensor.dead_zwave_devices.state) > 0 }}
    action:
      - service: button.press
        target:
          entity_id: >
            {{ state_attr('sensor.dead_zwave_devices','entity_id') }}
    mode: single

Note the sensor can be added to your dashboard, where it will show a count of dead devices.
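For example, a minimal entities card (the card type and title are just one choice):

type: entities
title: Z-Wave health
entities:
  - entity: sensor.dead_zwave_devices
    name: Dead Z-Wave devices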

You can enable or disable the automation from the settings, but you cannot edit the automation using the visual editor. You can change that by putting the automation part in your automations.yaml file if you prefer.
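If you do move it, drop the automation: key and keep the entry as a top-level list item, since automations.yaml is itself a list. A sketch, reusing the automation above:

- id: ping_dead_zwave_devices
  alias: Ping Dead ZWave Devices
  trigger:
    - platform: state
      entity_id: sensor.dead_zwave_devices
  condition:
    - condition: template
      value_template: "{{ int(states.sensor.dead_zwave_devices.state) > 0 }}"
  action:
    - service: button.press
      target:
        entity_id: "{{ state_attr('sensor.dead_zwave_devices','entity_id') }}"
  mode: single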

If you disable the automation, you’ll see the count go up and down over time. Since enabling the automation, I’ve not had any persistent dead nodes.


F***ing AWESOME!! Thank You.

zwave_ping.yaml 🙂
Thanks for this.


I simply repackaged the hard work of other folks in this thread, but I’m glad it was helpful.


It seems this no longer works. Had a node go dead shortly after I implemented the code and the automation failed to trigger. When I attempted to run the automation manually, nothing happened. Not sure what is going on. I’m running Home Assistant 2022.7.5.


I just implemented this as per bretton.wade’s package just above. Perfect timing since I had one node needing pinging.

I was able to run the automation manually to clear the node and will monitor the automation.

I note, however, that at least in v2022.7.6 there is no longer an entity for the master controller, so I deleted these two lines:

{% set exclude_filter = ['sensor.700_series_based_controller_node_status'] %}
and
| rejectattr("entity_id", "in", exclude_filter)
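For reference, with those two lines removed the attribute template reduces to the same pipeline without the exclusion:

entity_id: >
  {{
    expand(integration_entities('Z-Wave JS'))
    | selectattr("entity_id", "search", "node_status")
    | selectattr('state', 'in', 'dead, unavailable, unknown')
    | map(attribute="object_id")
    | map('regex_replace', find='(.*)_node_status', replace='button.\\1_ping', ignorecase=False)
    | list
  }}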

I updated my HA, but the automation still works for me. This is what you love to hear, right?

Sadly, the ping only resolves one type of problem in the ZWave 7 universe. Try connecting to your ZWaveToMQTT.js web interface and see if the device you want to ping is having some other problem.

Caveat: this package does not actually fix your nodes, so when they do go offline, they don’t respond until the ping brings them back. For some automations, this means things like a light switch staying on. I’ve begun to implement a shadow that tries to check whether a switch is in the state HA expects it to be in… I hate that this is necessary. I’ve also changed the automation so that it runs every 15 seconds instead of in response to a new dead entity (sketched below); I found it was unreliable the other way.
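A sketch of that polling variant, with the rest of the automation unchanged (the time_pattern trigger and the 15-second interval are my assumption of how this was done):

trigger:
  - platform: time_pattern
    seconds: "/15"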

I can say with complete certainty that my nodes are being marked as either dead or unavailable. It’s definitely caused by the known 700-series device bug, and I’ve submitted logs to the devs in the hopes they can identify the cause. I’ve done everything I can to mitigate the problem (firmware upgrade, USB extension cable, etc.). Sadly, until a fix is found, tools like this are our only available workaround.

FWIW, I am able to “revive” the nodes by manually running the Ping button feature. I suspect, as you’ve already pointed out, the expediency of my current automation may be a causal factor that interferes with the automation’s efficacy.

Quite by accident, I discovered that an easy way to trigger the automation is to force the Z-Wave driver to restart. Changes made to the Z-Wave settings panel in Z2M result in a restart of the driver. It was during such a restart that I observed the automation attempting to run, which it probably shouldn’t do, since the entire Z-Wave environment is being completely reset at that point. I think a delay of some sort, perhaps an “if dead for X minutes” condition, might help offset any conflicts that could arise (see the sketch below).
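One way to add that grace period is the for: option on the state trigger, so the automation only fires once the sensor has held its new state for a while (the 5-minute value is arbitrary):

trigger:
  - platform: state
    entity_id: sensor.dead_zwave_devices
    for:
      minutes: 5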

Agreed - I follow the zwavejs Slack channel and submitted many debug logs for them to send to SiLabs before the latest SiLabs firmware was released 5 months ago. While dead Z-Wave devices needing to be “resuscitated” by pinging have become dramatically rarer since the 7.17.2 stick firmware, it still happens (just not daily like before).

There is still an entity for the stick, but it’s now disabled by default. Good catch on calling that out, though; it broke my automation as well.

ZST10 700, current firmware.

I think instead of chasing my own tail with the ongoing 700-series issues (multiple dead nodes every week), I’m going to bite the bullet and move every Z-Wave device over to a Gen5 stick. I can’t think of any real-world benefit offered by the 700 series that’s worth the issue of devices dropping off the network.

While this won’t solve every issue, it’ll solve the most frequent one. Some devices, including some from Zooz like the ZEN26 and ZEN27, have other firmware issues that cause them to crash and become totally unresponsive until they’re power cycled (at the breaker). This happens much less frequently, and typically after a too-brief grid power failure or brownout.

After switching to a Gen5 controller, would all existing entities/devices be lost?

I was getting ready to dump Z-Wave entirely - literally searching Amazon for equivalent Wi-Fi devices - when I did some more Googling and found this thread. Pinging dead nodes from an automation would be a lot easier than physically replacing all the devices with Wi-Fi.