What could be the reason for delays in successive events?

The components which are directly relevant to my problem:

  • HA v. 0.114.4
  • AppDaemon
  • wall RF433 switches
  • … whose signals are caught by a Sonoff RF Bridge
  • … that sends an MQTT message
  • … handled by AppDaemon
  • … to switch on/off Sonoff Basic switches via HA

I totally understand that such a complex setup can induce some lag between the moment I press the wall switch and the moment the light is on or off.

I have not witnessed serious lags when pressing a wall switch that handles a single entity (a single switch) - they are there, but at ~500ms they are completely bearable.

What I see, though, is that when I have a quick series of messages sent to HA from AppDaemon, they seem to be throttled (for lack of a better word): the time between the switching of the several devices differs, and the more switches I go through, the slower the last ones are to react.

To be more precise: I have a group of switches (not lights) and to toggle them, I have to iterate over the components of the group, calling the entity one by one. This means that if group.room has 10 entities, I have to call the “switch off” service on each of them one by one. I cannot toggle group.room.
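In AppDaemon terms the loop looks roughly like this (a sketch assuming a standard hassapi app; group.room and switch/turn_off are just illustrative):

# iterate over the group members and call the service on each one
members = self.get_state("group.room", attribute="entity_id") or []
for entity in members:
    self.call_service("switch/turn_off", entity_id=entity)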

The effect I see, generally speaking, is that the reaction time between device n and n+1 increases as n increases. It really feels as if the calls were throttled.

This is still acceptable, but I would like to understand whether this effect is expected because of some design decision.

I am an amateur developer and I understand that the lag can come from various places (the messages could be sent with a delay by AppDaemon, for instance - which is not the case, but that’s an example of a possible cause). What I probably mean is: I would like to understand whether HA has some rate-limiting capabilities built in (and if so, whether they are tunable).


Note: I know that I could turn my switches into lights via the switch platform (and gain the ability to toggle a whole group - which may or may not help) but I want to keep this setup to be able to consistently edit devices and entities through the Lovelace UI.

@tom_l: you modified the question to make it an AppDaemon one but this is not specifically an AppDaemon question (I just mentioned the whole stack for completeness).

I would say that it is rather a pure HA one - but there is no explicit category for that (besides “Configuration”)

AppDaemon features pretty strongly in your post. But I’ll move it back if you feel that’s where it belongs.

Yes, because this is where the events are sourced from - but the essence of my question is:

What I probably mean is: I would like to understand whether HA has some rate-limiting capabilities built in (and if so, whether they are tunable).

I should probably have made it clearer in my question, but never mind - I will see what comes of it (especially since AppDaemon users may have witnessed something similar). Thanks for following up.

I believe the only limiting factor attributable to Home Assistant is the one-second resolution of the event loop, and it is not adjustable. This limits how quickly a Home Assistant event listener can react to an event (a delay of up to one second). I am not sure whether this applies to your iteration.

Why not?

It could very well be, thank you. I will be looking at the precise timings of the fired events on HA side.
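Something along these lines should do for that (a rough sketch, assuming a standard hassapi app; sensor.test is a placeholder for an entity that changes often):

from datetime import datetime, timezone
import appdaemon.plugins.hass.hassapi as hass

class LatencyProbe(hass.Hass):
    def initialize(self):
        # sensor.test is a placeholder: pick any entity that changes often
        self.listen_state(self.on_change, "sensor.test")

    def on_change(self, entity, attribute, old, new, kwargs):
        # the full state object includes the last_changed timestamp set by HA
        full = self.get_state(entity, attribute="all")
        changed = datetime.fromisoformat(full["last_changed"].replace("Z", "+00:00"))
        lag = datetime.now(timezone.utc) - changed
        self.log(f"callback lag for {entity}: {lag.total_seconds():.3f}s")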

Do you know whether homeassistant.turn_on (or off) is synchronous? That is, does it return immediately to the caller and run in a thread of its own, or does it wait for the action to finish before handing control back?
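In the meantime, I suppose a crude check is to time the call itself (a sketch; switch.example is a placeholder):

import time

t0 = time.monotonic()
self.turn_on("switch.example")  # placeholder entity
self.log(f"turn_on returned after {time.monotonic() - t0:.3f}s")

If it comes back in milliseconds regardless of how long the device takes to react, the call is fire-and-forget from the caller’s point of view.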

Please see my question How to toggle a group of switches as a whole? and the solution I finally use (How to toggle a group of switches as a whole? - #6 by WoJWoJ) – I noticed the slow down appearing afterwards but this may or may not be related.

It may indeed be this. Per my previous message, I timed the calls to a group of four entities in this piece of code:

def toggle_group(self, group):
    # fetch the full state object of the group, including its attributes
    state = self.hass.get_state(entity_id=group, attribute='all')
    # group reports 'off' -> turn everything on, otherwise turn everything off
    operation = self.hass.turn_on if state['state'] == 'off' else self.hass.turn_off
    # call the service on each member entity, one by one
    for entity in state['attributes']['entity_id']:
        operation(entity)

This means that the four entities are iterated over and homeassistant.turn_on (or off) is called. I measured the time between each of the calls. The results are (in seconds):

For the toggle ‘on’ (which means calling homeassistant.turn_on on each of them):

0.962460
0.185220
0.952443

For the toggle ‘off’ (which means calling homeassistant.turn_off on each of them):

0.101885
1.907397
0.142208

There is clearly a split between the calls that are near-instantaneous (~0.1s) and the ones that take ~1-2s. Since the resolution time on the HA side is 1 second, that could explain it (maybe there are other reasons, such as the resolution on the AppDaemon side).

I will switch the classification you applied to that question back, to get AppDaemon users more involved.

Why do you need to iterate over the entities when you set them all to the same state? Just use “operation(group)” instead of looping through the entities.
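Something like this (an untested sketch based on your snippet):

def toggle_group(self, group):
    state = self.hass.get_state(entity_id=group, attribute='all')
    operation = self.hass.turn_on if state['state'] == 'off' else self.hass.turn_off
    # act on the group entity itself; HA fans the call out to the members
    operation(group)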

Oh yes, this is a very good point - I missed that (obvious) solution, thanks.
The delay is still there but the code is cleaner.

This (and the fact that there is the same delay when switching the group through Lovelace) means that there is something weird in the way the group entities are switched.
When clicking on any single light the switch is instantaneous, but when they are switched at once as a group the delays between the lights are very noticeable.

Try writing a simple automation to switch the group. If that has no delay then it’s AppDaemon’s issue.
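For example, something like this (a sketch; input_boolean.group_test is a placeholder trigger):

automation:
  - alias: "Group toggle test"
    trigger:
      - platform: state
        entity_id: input_boolean.group_test
        to: 'on'
    action:
      - service: homeassistant.toggle
        entity_id: group.room

Flipping the input_boolean in Lovelace then exercises the same service call without AppDaemon in the loop.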

I see the exact same behaviour from Lovelace when I click on the group switch vs. clicking on the individual switches.

How are the lights connected to Home Assistant? Did you observe the same behaviour with the light groups you used previously?

Also tell us about your Home Assistant hardware (Pi?) and network connection method (Wi-Fi?).

Also your MQTT broker details.

The lights are connected to a MEROSS strip; I am investigating that part in parallel (to check for possible throttling).

I added an animated GIF to show the difference between switching the group and switching a single device: https://imgur.com/w2MPNck

Looks like network congestion or an MQTT broker bottleneck to me. Fill us in on the details I asked for above.

I would like to put that question on hold for a moment while I investigate.
I do appreciate your help so far very, very much.

The reason is that I realized I have another group of switches (twice the number of devices) made up of ESPHome devices, and this group, when switched the same way as the lagging one, switches instantaneously.

OTOH, all four lights in the slow setup are connected to a MEROSS strip, which is integrated through the MEROSS cloud. I do not see any noticeable lag when switching a single port of that strip (I am actually quite surprised by that, I expected lags), and maybe there is a throttling mechanism (either in the component or on the API endpoint).

As for the network, I have Wi-Fi built upon Ubiquiti UniFi APs, an MQTT broker running in a Docker container, and plenty, plenty of services (internal and external) that rely on this network. While of course everything is possible, I have doubts about congestion or MQTT, looking at all the other services.

This single device integrated via an external cloud (and the throttling which can derive from there) really seems to be the core issue.

Please bear with me while I investigate that part - again, I appreciate your help enormously.

Cloud rate limiting.

I assume this is the problem. I could imagine that it queues the requests if multiple ones (switch lights x, y and z on) are sent in quick succession. This would also explain why it works when you press one switch after the other, because then you are only ever sending one command at a time.


Yes, I am digging into the code to check for client-side throttling, and timing a tcpdump session to see whether server-side throttling is present as well (or instead).
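For the capture, something along these lines (the host is a placeholder for whatever MEROSS cloud endpoint the integration talks to):

tcpdump -i any -tttt host <meross-cloud-endpoint>

The -tttt flag prints full human-readable timestamps, which makes it easier to correlate packets with the switch commands.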

This is what threw me:

I thought it was all local (unless you were using a cloud MQTT broker).