Multiple lights now turn on one by one instead of immediately since the update

I’m not using Z-Wave. Since that is a different protocol, we shouldn’t really be comparing the two. However, just out of interest, do your Z-Wave lights all turn off one by one as well?

It doesn’t matter what your protocol is, Home Assistant fires messages one by one. It will never be concurrent.

I’d look at whatever medium you are using (Ethernet, Z-Wave, Zigbee, etc.) and monitor the traffic. I’d be willing to bet something may be flooding it.

FWIW, I’ve used X10 and UPB for many years and can confirm that if you don’t use their native groups, you get the so-called ‘popcorn effect’ where the lights activate in sequence. That’s despite the fact these two technologies transmit their signals via the powerline (no multi-hop mesh network overhead) and UPB latency is typically <0.1 seconds.

However, if I use UPB’s native group functionality (the UPB term is ‘scene’) then all umpteen members of the scene activate simultaneously because only one scene command needs to propagate through the powerline.
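To put rough numbers on it (a toy Python sketch; the per-transmission latency is the ~0.1 s figure above and the group size is made up):

UPB_LATENCY = 0.1  # seconds per powerline transmission (figure quoted above)
N_LIGHTS = 8       # made-up group size

individual = N_LIGHTS * UPB_LATENCY  # N separate commands queued back to back
scene = 1 * UPB_LATENCY              # one scene command, every member reacts
print(f"individual: {individual:.1f}s of popcorn, scene: {scene:.1f}s")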

What I mean is Z-Wave might work in a different way - making a delay inevitable. I don’t know enough about Z-Wave to comment though.

If latency were the issue though, then surely turning the lights off would show the same behaviour, but turning them off using groups is always instant. And without doubt, before I updated it was definitely instant for turning them on as well.

They are cheap lights so I don’t think they have an inbuilt group ability. However, it is worth noting that if I do it through the app there is no delay - not ever. So the delay I am now getting is specific to Home Assistant. I never once had a delay before the update, I’m certain of that.

I’m wondering if I should get another memory card and put an old version of HA on it to really test this out.

That’s the indisputable benchmark in this discussion. You are able to use the app to demonstrate that a group of these lights can be turned on in what appears to be a concurrent fashion.

Assuming this lighting protocol has no native group feature, the sheer speed of Wi-Fi masks the fact multiple commands are sent in sequence.

Nevertheless, something is clearly different in how the task is performed by the app vs Home Assistant. Wild guess: maybe the app doesn’t wait for acknowledgement from each light before transmitting the command to the next light, whereas Home Assistant does. However, this would not explain why both the app and Home Assistant appeared to work equally well in a previous version, unless something recently changed in Home Assistant. :man_shrugging:

+1
Comparing the two side by side would add another data point. If they behave identically, you would know something changed in the network. If they behave as you remember (the previous version handled groups better), then we have more proof the problem is software-related.

I am guessing Home Assistant’s behaviour has changed to wait for confirmation that the light has turned on. Maybe this was added so that some other feature worked correctly, and the popcorn effect was an accepted side effect. If that is the case, it would be a nice addition if HA offered an option to cleanly prevent this popcorn effect without having to run scripts within scripts.
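To picture the difference (purely illustrative Python, not Home Assistant’s actual code; the entity names and ack time are made up): if each command blocks until its light confirms, the delays stack up, whereas commands fired concurrently do not.

import asyncio

ACK_DELAY = 0.5  # made-up round-trip time for one light's confirmation

async def turn_on_and_wait(light: str) -> None:
    # stand-in for: send the command, then block until the light acknowledges
    await asyncio.sleep(ACK_DELAY)
    print(f"{light} acknowledged")

async def main() -> None:
    lights = ["lamp_1", "lamp_2", "lamp_3"]

    # suspected new behaviour: wait out each ack in turn -> popcorn (~1.5 s total)
    for light in lights:
        await turn_on_and_wait(light)

    # old/app-like behaviour: fire everything at once (~0.5 s total)
    await asyncio.gather(*(turn_on_and_wait(l) for l in lights))

asyncio.run(main())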

Is there a way to downgrade Home Assistant easily, without starting fresh, to test this on an old version?

I would consider this situation to be an opportunity to create a separate test system. All you need is an old laptop/netbook running Linux. I use a 10+ year-old Dell Vostro with Ubuntu loaded on an inexpensive SSD and Home Assistant installed as a docker container (it makes upgrading/downgrading a trivial task).

Given that your lighting uses Wi-Fi, and no special dongle/transceiver/hub is needed for it, you can run both systems, production and test, simultaneously and observe their behavior.

It’s also less disruptive because your existing production system remains untouched.

Hi,

It took me all day, but I managed to get version 0.80.0 running via docker on Ubuntu. I can confirm, with both systems running, that 0.80.0 does not have the popcorn effect but the latest update does. So for sure there has been a change.

Good work! Now you can easily switch from one version of Home Assistant to another (each version will be in a separate docker container). This allows you to find the version, somewhere in the 13 versions between 0.80 and 0.93, that introduced the popcorn effect.

For example, jump to version 0.86. If there’s no popcorn, then jump to the next halfway point (0.89). If you get popcorn, then switch to the previous one (0.88). When you eventually find the version that introduces the undesirable effect, let me know and I’ll pitch in to review the Release Notes and PRs to find whatever is responsible for it.
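That’s just a binary search over the version list. A toy sketch of the bookkeeping (has_popcorn stands in for you manually testing that docker tag; its body here is a pretend answer):

versions = [f"0.{n}" for n in range(81, 94)]  # 0.81 .. 0.93, the candidates

def has_popcorn(version: str) -> bool:
    # placeholder: spin up that tag and watch the lights
    return version >= "0.91"  # pretend answer, for illustration only

lo, hi = 0, len(versions) - 1  # invariant: first bad version is in versions[lo..hi]
while lo < hi:
    mid = (lo + hi) // 2
    if has_popcorn(versions[mid]):
        hi = mid       # mid is bad: first bad version is mid or earlier
    else:
        lo = mid + 1   # mid is good: first bad version is after mid
print("popcorn introduced in", versions[lo])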

BTW, you may wish to consider using docker-compose. All of docker’s long-winded command-line options are centralized in a single YAML file. It ends up looking something like this (barebones example):

version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: homeassistant/home-assistant:0.91.0
    network_mode: "host" # maps all ports 1 to 1, no port mapping required
    volumes:
      - /home/somebody/docker/homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped

Slight problem lol. I’m not entirely sure how you switch versions. I know that sounds odd as I have done it once already, but it’s hard to wrap your head around docker.

The way I did it is SSH into the laptop and run:

hassio homeassistant update --version=0.80.0

Then what it did, in the home assistant container, was download another tag (I think I’m using the correct terms here). So I assume if I run the command again it will download the other version and create another tag. I’m not sure how I would switch between them though.

It didn’t create a different container though.

Funnily enough, I started with docker-compose and never became familiar with the command-line options. However, I suspect there’s a direct correlation between the two. You can specify the version at the end of the image’s name. With docker-compose you would then issue the command docker-compose up to make it update the image. In other words, it will pull the new version’s image (~2 GB).

I’m trying the same thing now with v0.86.0 and it seems to be working, as in the new tag has appeared. Will update soon.

Ok… Found where it started…

Bug in 0.91.0b0
No bug in 0.90.2

So whatever has caused this was introduced in version 0.91.0b0 and every version since.

I scanned the very long list of changes implemented in 0.91.0 and … couldn’t find anything that jumped out as being related to this issue. :frowning:

I’ll take a closer look tomorrow.

PS
There was a PR about something having to do with the number of threads and parallel updates but I’m uncertain if it is germane.

https://github.com/home-assistant/home-assistant/pull/22149


EDIT

Check out this PR introduced in version 0.90 (not 0.91). It affected the flux_led component. I skimmed the discussion and noticed talk about a delay.

https://github.com/home-assistant/home-assistant/pull/20733

Looks like they were messing with flux_led.py, which I have my own custom component for. So that shouldn’t be the cause, unless my custom component isn’t being loaded. However… I do remember I used to get a warning message about loading a custom component for flux_led, and I don’t get that now. I wonder if my custom component isn’t being loaded and the one that was changed is bugged. I noticed it added a line with a sleep statement, so that could easily be the cause if Home Assistant is using this and not my custom component.

I will try tomorrow to put an intentional error in my code to see if Home Assistant throws an error. If not, then it’s obviously not using my custom flux_led.py.
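Something like this at the top of the custom file should settle it (a deliberate module-level error, purely as a canary):

# Hypothetical canary at the top of my custom_components/light/flux_led.py.
# If Home Assistant actually imports this file it will blow up in the log;
# a clean startup means my custom copy is being ignored.
raise RuntimeError("custom flux_led.py was imported")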

That would be irritating as the guy was warned about changing it and breaking stuff for other users lol.

I’m almost certain that will be the problem: the sleep statement. At a quick glance it looks to be set for 2 seconds, pretty much the delay I’m getting.

I have a feeling the location I need to put my custom flux_led.py has changed and it’s no longer loading it. I have it in custom_components/light/flux_led.py and I think that may have changed to just be in custom_components/flux_led.py.

Looks like I was right about my custom component not being loaded… things have changed considerably.

It would have been good for a warning to appear about that, but I know now. I’ll have a tinker tomorrow and get it loading again. I’m sure this will fix it. I’ll keep you posted.

I will also look into why it was deemed a good idea/necessary to introduce a sleep statement into flux_led.py - I’m pretty sure whatever it was done for, it was a bad workaround and needs to be changed in future releases.

Are you sure it’s not in the log (the detailed version)? The custom components I use always get reported in the log. Unless you mean it didn’t report that it found, but did not load, your custom component. In my case, where I use a customized version of the MQTT climate component, I assigned it a completely different platform name (my_mqtt instead of mqtt). This ensures that Home Assistant cannot substitute the original mqtt version. If it can’t find the my_mqtt component, it will fail with a boatload of error messages.

I believe the sleep statement was added to fix a specific type of light, for setting rgb (I think). However, the test suite may have overlooked checking whether the modification affected the control of a group of lights.
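Whatever the reason for it, a fixed sleep after each command would explain the symptom, because turning on a group is ultimately just a loop over its members. A toy illustration (not the actual flux_led code; the 2-second figure is the one you spotted in the PR):

import time

SLEEP_AFTER_COMMAND = 2.0  # the ~2 s figure mentioned above

def turn_on(light: str) -> None:
    # stand-in for: transmit the turn_on command to one bulb
    time.sleep(SLEEP_AFTER_COMMAND)  # the added sleep, paid once per light

start = time.monotonic()
for light in ["lamp_1", "lamp_2", "lamp_3"]:  # a 'group' is just this loop
    turn_on(light)
    print(f"{light} on after {time.monotonic() - start:4.1f}s")
# prints ~2.0 s, ~4.0 s, ~6.0 s: each member waits out the previous
# light's sleep - the popcorn effect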