Techniques to avoid flooding the network when switching off groups?

So when I trigger my “lock up” switch it shuts off everything, and it’s obvious the zwave network is just flooded, because some lights take up to 10 seconds to shut off and new triggers don’t happen until they’ve all been resolved. For example, as I’m leaving I trigger it, some lights are still on as I get downstairs, and the motion detection doesn’t work because the message hasn’t reached the hub yet due to the flood on the network. I’m hoping it’s just because they are all sent at once that it’s overwhelming, and that maybe adding delays would help: basically, instead of shutting off a group, start by shutting off one light, wait 200 ms, shut off another, and so on. Would this be better? Are there other things I can do to fix this?
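Roughly what I have in mind, as a Home Assistant script sketch (the entity names are just placeholders for my actual devices):

```yaml
script:
  lock_up_staggered:
    sequence:
      # Turn each device off individually, with a short pause between commands
      - service: light.turn_off
        entity_id: light.kitchen
      - delay:
          milliseconds: 200
      - service: light.turn_off
        entity_id: light.hallway
      - delay:
          milliseconds: 200
      - service: switch.turn_off
        entity_id: switch.porch
      # ...and so on for the remaining devices
```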

The ZWave protocol itself supports Groups and Scenes as far as I know:

  • Groups - a collection of individual devices controlled as a group. For instance, they can all be turned On/Off by one button or action.
  • Scenes - enable a single button to send different commands to different devices, making several different things happen at the same time.

I’m not sure how much of that Home-Assistant supports though. It seems like there might be some implementation of groups, and maybe scenes too? But I’ve not tried anything like it myself.
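On the HA side, I believe defining a group and a scene looks roughly like this (untested sketch with made-up entity names; as far as I know HA evaluates these in software and still sends individual commands to each node, rather than using the Z-Wave protocol-level groups/scenes):

```yaml
# configuration.yaml
group:
  downstairs:
    name: Downstairs
    entities:
      - light.kitchen
      - light.hallway
      - switch.porch

scene:
  - name: lock_up
    entities:
      light.kitchen: off
      light.hallway: off
      switch.porch: off
```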

As a test, try creating a script that turns off each entity individually and see if that works better. At one point I suspected that specifying the entities separately worked better than calling a really large group (specifically for zwave).
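Something like this in a script, with placeholder entity names; the difference is whether the service call points at the big group or calls each entity on its own:

```yaml
# Instead of one call against the big group:
- service: homeassistant.turn_off
  entity_id: group.downstairs

# ...call each entity separately:
- service: light.turn_off
  entity_id: light.kitchen
- service: light.turn_off
  entity_id: light.hallway
- service: switch.turn_off
  entity_id: switch.porch
```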

I never did enough testing to determine if it was in fact true, but I had to laugh to myself when I read this post because I have also thought that zwave doesn’t play well with large groups.

I did try that from Node-RED and it didn’t make a huge difference, but again they were basically hitting the network all at the same time. I think HA does this behind the scenes anyway with groups. I’ll probably try an experiment where I just delay them a little bit, and maybe try the built-in groups like @Silicon_Avatar suggested, as that would probably be pretty fast.

Does your zwave network always have large delays when you run the lock_up routine? If it is intermittent, then checking the zwave log file may help show the issue.
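If you want more detail on the HA side as well, you can bump the zwave loggers to debug in configuration.yaml; something like this (logger names are from memory, for the legacy zwave integration, and OpenZWave itself writes OZW_Log.txt in the config folder):

```yaml
logger:
  default: warning
  logs:
    homeassistant.components.zwave: debug
    openzwave: debug
```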

I used to have terrible zwave delays on a regular basis. For me a major factor was my Aeotec MultiSensor6 devices (I have three). I had polling enabled, mostly because I thought it was required; once I removed polling, things seemed to improve quite a bit.
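If I remember the legacy zwave config correctly, polling is controlled per entity with polling_intensity under device_config, where 0 disables it; roughly like this, with a made-up entity name:

```yaml
zwave:
  usb_path: /dev/ttyACM0
  device_config:
    sensor.multisensor6_luminance:
      polling_intensity: 0  # 0 = don't poll this entity
```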

Not really; if I trigger a light it’s pretty much instant normally. The delay only comes when I trigger a bunch. It could also be that one of those nodes is a problem, which is what makes this an issue. Maybe I’ll try removing them one by one too and see if that changes anything.

Let me try again:

Does your zwave network always have delays when you run your “LOCK UP” routine? The one you referenced earlier as problematic.

Is the “LOCK UP” routine always slow, or is it just slow sometimes?

It’s always delayed. So, I hit the button, some lights go off instantly and others take much longer, longer than they would if I shut them off individually. In addition to this delay, new events (notably motion detection) don’t work in real time, but it appears those messages are queued up on the network, because they eventually trigger, just much later, after the turn-off messages from the initial lock up call have cleared.

From a pure RF perspective this is weird, so it must be something with the zwave protocol and how these messages are passed around “blindly” until they reach the appropriate node.

There are a total of 13 entities (lights, switches) in the two groups I shut off with the lockup command, plus a TV and a Sonos. I’ve been meaning to read the source in HA to trace this, but there are so many modules it’s pretty hard to follow the flow.

So I figured I would try a quick test. I can change the delay on the left; 250 ms seems to be good. This shuts the lights off in under ~3 seconds. I just ran both three times each, and this one took at most 4 seconds where the lockup took up to 15. Now, this tells me there is an issue, but what’s causing it is still unknown. My next step is probably to look at the logs after something like this to see when events are fired in HA and when they are fired/received in the zwave log.

In case it’s not obvious from the flow, each entity is shut off, followed by a delay, then the next, etc. I did it like this so I could visually watch them shut off; a loop would be much smaller. :slight_smile:
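For reference, the loop version in an HA script could look roughly like this (uses the repeat/for_each script syntax; entity names are placeholders):

```yaml
script:
  lock_up_loop:
    sequence:
      - repeat:
          for_each:
            - light.kitchen
            - light.hallway
            - switch.porch
          sequence:
            # Turn off the current entity, then pause before the next one
            - service: homeassistant.turn_off
              target:
                entity_id: "{{ repeat.item }}"
            - delay:
                milliseconds: 250
```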

I tried to achieve the same thing here:

Different wording though, so it didn’t catch on.