Multiple lights now turn on one by one instead of immediately since update

Of course I know how ridiculous that code is :stuck_out_tongue: . The whole point I am making is that if I use scripts like that, it works: I can turn on all the lights instantly. If I do it the sensible way, using groups, it doesn't work; they turn on slowly, one by one.

Please read the GitHub link in my first post: someone else had the same issue and found that the ridiculous scripts-running-scripts code worked for them too (that's where I got the idea; I never would have thought to try something coded like that normally). It seems that running it like that causes Home Assistant to run each script in the background, but when doing it the normal way Home Assistant waits for something before turning on each light, causing a delay (which it didn't do in the past). It seems like a bug to me, unless the devs had to change the way it turns things on. Maybe it now waits for confirmation that a light has turned on before moving on to the next one, where before it didn't? Anyone else having issues with this slow, one-by-one turn-on effect? Incidentally, they all turn OFF instantly every time, so the bug/issue is purely with turning things on.
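For anyone who hasn't followed the GitHub link, the workaround looks roughly like this (the entity and script names below are made up for illustration). The key point is that script.turn_on fires each child script without waiting for it to finish, so the light commands are dispatched back to back:

```yaml
# Hypothetical sketch of the "scripts running scripts" workaround.
# script.turn_on does not wait for the child script to complete,
# so all three lights are commanded near-simultaneously.
script:
  all_lights_on:
    sequence:
      - service: script.turn_on
        entity_id: script.kitchen_light_on
      - service: script.turn_on
        entity_id: script.lounge_light_on
      - service: script.turn_on
        entity_id: script.hall_light_on
  kitchen_light_on:
    sequence:
      - service: light.turn_on
        entity_id: light.kitchen
  lounge_light_on:
    sequence:
      - service: light.turn_on
        entity_id: light.lounge
  hall_light_on:
    sequence:
      - service: light.turn_on
        entity_id: light.hall
```

Calling each light via light.turn_on in a single script (or on a group) runs the service calls one after another, which is where the staggering becomes visible.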

Mea culpa; I now understand the reason for the Rube Goldberg approach. As per your theory, it's very possible each unique script is spawned as a separate process, therefore the lights activate concurrently. In contrast, the group appears to be handled by a single process, therefore the lights activate sequentially. However, you believe groups used to be processed differently because, in a previous version, your grouped lights activated concurrently.

Timeline:

  • GitHub issue reports lack of group concurrency in version 0.80.
  • Group concurrency must have been introduced in a later version (specific version is unknown; in the 0.8X range) because you observed it.
  • Group concurrency is no longer evident in version 0.93.

The implication is that a regression may have been introduced in a recent version. The trick is to find the PR responsible.

@123 @gary.james.stanton

Group concurrency has never existed. I've used pretty much every version from 0.19 onward, and turning lights on/off in a group has always been sequential. Sometimes it's very fast and appears concurrent; sometimes it's slow. I always attributed that to my Z-Wave network latency.

Edit (2023 update): this comment is no longer true. Parallel service calls now happen under the hood, and have for a couple of years.


This remains my leading suspect for the observed behavior. However, gary's description suggests a dramatic difference from the past (activation used to be simultaneous vs. now noticeably staggered). That's why I haven't completely dismissed the possibility that groups underwent a modification. Although I can't claim perfect recall, I don't remember reading anything in the Release Notes about new group behavior (at least since version 0.79). So I'm still straddling the fence on this one, but leaning more towards latency.


For the uninitiated: unless the lighting technology supports the concept of a native group (what I call a hardware-based group) that can be controlled with a single command, all efforts to simulate a group (a so-called software-based group) involve sending a command to each group member, so a stream of six commands is transmitted to the six members of a six-light group. The perception of concurrency then largely depends on the lighting technology's processing speed.

There are propagation delays for the multiple messages, plus the potential need for acknowledgement (a command is sent to turn on a light, and the sender waits for that light to reply with an acknowledgement before sending the command to the next light).

In contrast, for a lighting technology that supports native groups, one command is received by all members of the group at the same moment in time (or at least within microseconds of one another), so concurrency is virtually guaranteed and there are no per-member propagation delays to accumulate.
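The difference can be put in rough numbers. The figures below are purely illustrative (not measured values for any particular protocol), but they show how per-member delays accumulate in a software group while a native group pays the cost once:

```python
# Back-of-the-envelope popcorn-effect arithmetic (illustrative numbers only).
PROPAGATION_S = 0.05   # time for one command to reach a light
ACK_WAIT_S = 0.10      # time spent waiting for that light's acknowledgement
LIGHTS = 6

# Software-based group: one command per member, each waiting for an
# acknowledgement before the next is sent, so the delays accumulate.
software_group_s = LIGHTS * (PROPAGATION_S + ACK_WAIT_S)

# Native (hardware) group: a single scene command reaches every member,
# so only one propagation delay is paid and no per-light acks are chained.
native_group_s = PROPAGATION_S

print(f"software group: {software_group_s:.2f} s")  # 0.90 s
print(f"native group:   {native_group_s:.2f} s")    # 0.05 s
```

With these assumed figures the last light in a six-member software group lags almost a second behind the first, which is exactly the visible popcorn effect.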

I'm not using Z-Wave. Since that is a different protocol, we shouldn't really be comparing them. However, just out of interest: with your Z-Wave lights, do they all turn off one by one as well?

It doesn't matter what your protocol is; Home Assistant fires the messages one by one. It will never be concurrent.

I'd look at whatever medium you are using (Ethernet, Z-Wave, Zigbee, etc.) and monitor the traffic. I'd be willing to bet something is flooding it.

FWIW, I've used X10 and UPB for many years and can confirm that if you don't use their native groups, you get the so-called 'popcorn effect' where the lights activate in sequence. That's despite the fact that these two technologies transmit their signals via the powerline (no multi-hop mesh network overhead) and UPB latency is typically <0.1 seconds.

However, if I use UPB's native group functionality (the UPB term is 'scene'), then all umpteen members of the scene activate simultaneously, because only one scene command needs to propagate through the powerline.

What I mean is that Z-Wave might work in a different way, making a delay inevitable. I don't know enough about Z-Wave to comment, though.

If latency were the issue, though, then surely turning the lights off would show the same behaviour; but it is always instant when using groups to turn them off. And without doubt, before I updated, turning them on was definitely instant too.

They are cheap lights, so I don't think they have an inbuilt group ability. However, it is worth noting that if I do it through the app there is no delay, ever. So this delay is specific to Home Assistant. I never once had a delay before the update; I'm certain of that.

I'm wondering if I should get another memory card and put an old version of HA on it to really test this out.

That's the indisputable benchmark in this discussion: you can use the app to demonstrate that a group of these lights turns on in what is, to all appearances, a concurrent fashion.

Assuming this lighting protocol has no native group feature, the sheer speed of Wi-Fi masks the fact that multiple commands are sent in sequence.

Nevertheless, something is clearly different in how the task is performed by the app vs. Home Assistant. Wild guess: maybe the app doesn't wait for acknowledgement from each light before transmitting the command to the next one, whereas Home Assistant does. However, that would not explain why both the app and Home Assistant appeared to work equally well in a previous version, unless something recently changed in Home Assistant. :man_shrugging:

+1
Comparing the two side by side would add another data point. If they behave identically, you know something changed in the network. If the old version behaves as you remember (handling groups better), then we have more evidence the problem is software-related.

I am guessing Home Assistant's behaviour has changed to wait for confirmation that a light has turned on. Maybe this was added so some other feature worked correctly, with the popcorn effect an accepted side effect. If that is the case, it would be a nice addition to HA if there were an option to cleanly stop the popcorn effect without having to run scripts within scripts.

Is there a way to downgrade home assistant easily without starting fresh to test this on an old version?

I would consider this situation to be an opportunity to create a separate test system. All you need is an old laptop/netbook running Linux. I use a 10+ year-old Dell Vostro with Ubuntu loaded on an inexpensive SSD and Home Assistant installed as a Docker container (which makes upgrading/downgrading a trivial task).

Given that your lighting uses Wi-Fi, and no special dongle/transceiver/hub is needed for it, you can run both systems, production and test, simultaneously and observe their behavior.

Itā€™s also less disruptive because your existing production system remains untouched.

Hi,

It took me all day, but I managed to get version 0.80.0 running via Docker on Ubuntu. I can confirm, with both systems running, that 0.80.0 does not have the popcorn effect but the latest version does. So for sure there has been a change.

Good work! Now you can easily switch from one version of Home Assistant to another (each version will be in a separate docker container). This allows you to find the version, somewhere in the 13 versions between 0.80 and 0.93, that introduced the popcorn effect.

For example, jump to version 0.86. If there's no popcorn, jump to the next halfway point (0.89); if you do get popcorn there, step back toward the previous halfway point (0.88). When you eventually find the version that introduced the undesirable effect, let me know and I'll pitch in to review the Release Notes and PRs to find whatever is responsible for it.
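The halving procedure described above is a binary search for the first bad version. A sketch in Python (the is_popcorn check is a stand-in for installing a release and watching the lights; the 0.88 threshold inside it is purely hypothetical, chosen just to make the sketch runnable):

```python
# Binary search for the first release showing the popcorn effect.
# Candidates are the 13 releases after the known-good 0.80 up to the
# known-bad 0.93.
VERSIONS = [f"0.{n}" for n in range(81, 94)]  # "0.81" .. "0.93"

def is_popcorn(version: str) -> bool:
    """Stand-in for installing a version and observing the lights."""
    return float(version) >= 0.88  # hypothetical first-bad release

lo, hi = 0, len(VERSIONS) - 1  # invariant: first bad version lies in [lo, hi]
while lo < hi:
    mid = (lo + hi) // 2
    if is_popcorn(VERSIONS[mid]):
        hi = mid       # mid is bad, so the first bad one is at or before it
    else:
        lo = mid + 1   # mid is good, so the first bad one is after it

first_bad = VERSIONS[lo]
print(first_bad)  # with the hypothetical check above: 0.88
```

Each round halves the remaining range, so 13 candidates need at most four installs instead of thirteen.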

BTW, you may wish to consider using docker-compose. All of docker's long-winded command-line options are centralized in a single YAML file. It ends up looking something like this (barebones example):

version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: homeassistant/home-assistant:0.91.0
    network_mode: "host" # maps all ports 1 to 1, no port mapping required
    volumes:
      - /home/somebody/docker/homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped

Slight problem lol. I'm not entirely sure how you switch versions. I know that sounds odd, as I have done it once already, but it's hard to wrap your head around Docker.

The way I did it is to SSH into the laptop and run:

hassio homeassistant update --version=0.80.0

Then what it did, inside the Home Assistant container setup, was download another tag (I think I'm using the correct terms here). So I assume if I run the command again it will download the other version and create another tag. I'm not sure how I would switch between them, though.

It didnā€™t create a different container though.

Funnily enough, I started with docker-compose and never became familiar with the command-line options. However, I suspect there's a direct correlation between the two. You can specify the version at the end of the image's name. With docker-compose you would then issue the command docker-compose up to make it update the image. In other words, it will pull the new version's image (~2 GB).
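To make that concrete, assuming the compose file shown earlier in the thread (version tags and paths are illustrative), you change the tag after the colon on the image: line and then recreate the container. Each pulled image version stays cached locally, so flipping back and forth is fast:

```shell
# 1. Edit docker-compose.yml, e.g. change:
#      image: homeassistant/home-assistant:0.91.0
#    to:
#      image: homeassistant/home-assistant:0.86.0
# 2. Then, from the directory containing docker-compose.yml:
docker-compose pull      # fetch the image for the new tag (if not cached)
docker-compose up -d     # recreate the container on the new image

# Rough plain-docker equivalent (container name and paths illustrative):
docker pull homeassistant/home-assistant:0.86.0
docker stop homeassistant && docker rm homeassistant
docker run -d --name homeassistant --net=host \
  -v /home/somebody/docker/homeassistant:/config \
  homeassistant/home-assistant:0.86.0
```

Because the /config volume lives on the host, the same configuration is reused no matter which image version the container runs.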

I'm trying the same thing now with v0.86.0 and it seems to be working, in that the new tag has appeared. Will update soon.

OK… Found where it started…

Bug in 0.91.0b0
No bug in 0.90.2

So whatever has caused this was introduced in version 0.91.0b0 and has been present in every version since.

I scanned the very long list of changes implemented in 0.91.0 and… couldn't find anything that jumped out as being related to this issue. :frowning:

I'll take a closer look tomorrow.

PS
There was a PR about something to do with the number of threads and parallel updates, but I'm uncertain whether it is germane.

https://github.com/home-assistant/home-assistant/pull/22149


EDIT

Check out this PR introduced in version 0.90 (not 0.91). It affected the flux_led component. I skimmed the discussion and noticed talk about a delay.

https://github.com/home-assistant/home-assistant/pull/20733