Multiple lights now turn on one by one instead of simultaneously since an update

That’s the indisputable benchmark in this discussion: you can use the app to demonstrate that a group of these lights turns on, as far as anyone can observe, simultaneously.

Assuming this lighting protocol has no native group feature, the sheer speed of Wi-Fi masks the fact that multiple commands are sent in sequence.

Nevertheless, something is clearly different in how the app performs the task versus Home Assistant. Wild guess: maybe the app doesn’t wait for an acknowledgement from each light before transmitting the command to the next one, whereas Home Assistant does. However, that wouldn’t explain why both the app and Home Assistant appeared to work equally well in a previous version, unless something recently changed in Home Assistant. :man_shrugging:

+1
Comparing the two side by side would add another data point. If they behave identically, you’ll know something changed in the network. If they behave as you remember (the previous version handled groups better), then we have more evidence the problem is software-related.

I am guessing Home Assistant’s behaviour has changed to wait for confirmation that the light has turned on. Maybe this was added so that some other feature would work correctly, and the popcorn effect was an accepted side effect. If that is the case, it would be a nice addition to HA to have an option that cleanly stops the popcorn effect without having to run scripts within scripts.

Is there a way to downgrade Home Assistant easily, without starting fresh, to test this on an old version?

I would consider this situation to be an opportunity to create a separate test system. All you need is an old laptop/netbook running Linux. I use a 10+ year-old Dell Vostro with Ubuntu loaded on an inexpensive SSD and Home Assistant installed as a Docker container (which makes upgrading/downgrading a trivial task).

Given that your lighting uses Wi-Fi, and no special dongle/transceiver/hub is needed for it, you can run both systems, production and test, simultaneously and observe their behavior.

It’s also less disruptive because your existing production system remains untouched.

Hi,

It took me all day, but I managed to get version 0.80.0 running via Docker on Ubuntu. I can confirm, with both systems running, that 0.80.0 does not have the popcorn effect but the latest update does. So for sure there has been a change.

Good work! Now you can easily switch from one version of Home Assistant to another (each version will be in a separate docker container). This allows you to find the version, somewhere in the 13 versions between 0.80 and 0.93, that introduced the popcorn effect.

For example, jump to version 0.86. If there’s no popcorn, jump to the next halfway point (0.89). If you do get popcorn, switch to the halfway point below (0.88). When you eventually find the version that introduces the undesirable effect, let me know and I’ll pitch in to review the Release Notes and PRs to find whatever is responsible for it.
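The halving strategy above is just a manual binary search. As a sketch (here `has_bug` is a hypothetical stand-in for “install this version and watch for the popcorn effect”; the version list and oracle are illustrative, matching what was eventually observed in this thread):

```python
# Versions in release order; assume the first is known-good, the last known-bad.
VERSIONS = ["0.80", "0.81", "0.82", "0.83", "0.84", "0.85", "0.86",
            "0.87", "0.88", "0.89", "0.90", "0.91", "0.92", "0.93"]

def first_bad(versions, has_bug):
    """Binary search for the first version exhibiting the bug."""
    lo, hi = 0, len(versions) - 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if has_bug(versions[mid]):
            hi = mid   # bug present: first bad version is at mid or earlier
        else:
            lo = mid   # bug absent: first bad version is after mid
    return versions[hi]

# Hypothetical oracle: the bug appears from 0.91 onward.
print(first_bad(VERSIONS, lambda v: v >= "0.91"))  # prints 0.91
```

With 14 versions this takes 4 tests instead of up to 13, which matters when each test means pulling a ~2 GB image.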

BTW, you may wish to consider using docker-compose. All of docker’s long-winded command-line options are centralized in a single YAML file. It ends up looking something like this (barebones example):

version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: homeassistant/home-assistant:0.91.0
    network_mode: "host" # maps all ports 1 to 1, no port mapping required
    volumes:
      - /home/somebody/docker/homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped

Slight problem lol. I’m not entirely sure how you switch versions. I know that sounds odd as I have done it once already, but it’s hard to wrap your head around docker.

The way I did it was to SSH into the laptop and run:

hassio homeassistant update --version=0.80.0

Then what it did, inside the Home Assistant container, was download another tag (I think I’m using the correct terms here). So I assume if I run the command again it will download the other version and create another tag. I’m not sure how I would switch between them.

It didn’t create a different container, though.

Funnily enough, I started with docker-compose and never became familiar with the command-line options. However, I suspect there’s a direct correlation between the two. You can specify the version at the end of the image’s name. With docker-compose you would then issue the command docker-compose up to make it update the image. In other words, it will pull the new version’s image (~2 GB).
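For example (the file contents and tags here are illustrative), switching versions is just pointing the image at a different release tag and re-running compose:

```shell
# A minimal compose file for illustration -- normally you'd already have one.
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  homeassistant:
    image: homeassistant/home-assistant:0.90.2
    network_mode: "host"
EOF

# Point the service at a different release tag...
sed -i 's/home-assistant:0.90.2/home-assistant:0.91.0/' docker-compose.yml
grep 'image:' docker-compose.yml

# ...then pull the new image (~2 GB) and recreate the container:
# docker-compose pull && docker-compose up -d
```

Because the config volume is mounted from the host, the same configuration is reused no matter which image tag the container runs.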

I’m trying the same thing now with v0.86.0 and it seems to be working, as in the new tag has appeared. Will update soon.

Ok… Found where it started…

Bug in 0.91.0b0
No bug in 0.90.2

So whatever has caused this was introduced in version 0.91.0b0 and every version since.

I scanned the very long list of changes implemented in 0.91.0 and … couldn’t find anything that jumped out as being related to this issue. :frowning:

I’ll take a closer look tomorrow.

PS
There was a PR about something having to do with the number of threads and parallel updates but I’m uncertain if it is germane.

https://github.com/home-assistant/home-assistant/pull/22149


EDIT

Check out this PR introduced in version 0.90 (not 0.91). It affected the flux_led component. I skimmed the discussion and noticed talk about a delay.

https://github.com/home-assistant/home-assistant/pull/20733

Looks like they were messing with flux_led.py, which I have my own custom component for, so that shouldn’t be the cause unless my custom component isn’t being loaded. However… I do remember I used to get a warning message about loading a custom component for flux_led, and I don’t get that now. I wonder if my custom component isn’t being loaded and the one that was changed is bugged. I noticed the PR added a line with a sleep statement, so that could easily be the cause if Home Assistant is using that version and not my custom component.

I will try tomorrow to put an intentional error in my code to see if home assistant throws an error. If not then it’s obviously not using my custom flux_led.py.

That would be irritating as the guy was warned about changing it and breaking stuff for other users lol.

I’m almost certain that will be the problem: the sleep statement. At a quick glance it looks to be set to 2 seconds, which is pretty much the delay I’m getting.
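A toy illustration of why one blocking sleep per light produces the popcorn effect. Everything here is a stand-in, not the real component’s code, and the delay is scaled down so the sketch runs quickly:

```python
import time

LIGHT_COUNT = 4
STATE_DELAY = 0.02  # stand-in for the component's reported ~2 s sleep

def turn_on(light_id):
    # Hypothetical stand-in for a platform's turn_on: send the command,
    # then block so the next state poll sees the light already on.
    time.sleep(STATE_DELAY)

start = time.monotonic()
for light in range(LIGHT_COUNT):
    turn_on(light)  # lights fire one by one, each waiting out the delay
elapsed = time.monotonic() - start

# Total wall time grows linearly with the number of lights: with a real
# 2 s delay, four lights would take roughly 8 s to all come on.
print(f"{LIGHT_COUNT} lights, {elapsed:.2f}s total")
```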

I have a feeling the location I need to put my custom flux_led.py has changed and it’s no longer loading it. I have it in custom_components/light/flux_led.py and I think that may have changed to just be in custom_components/flux_led.py.

Looks like I was right about my custom component not being loaded… things have changed considerably.

It would have been good for a warning to appear about that, but I know now. I’ll have a tinker tomorrow and get it loading again. I’m sure this will fix it. I’ll keep you posted.

I will also look into why it was deemed a good idea/necessary to introduce a sleep statement into flux_led.py. I’m pretty sure that, whatever it was done for, it was a bad workaround and needs to be changed in future releases.

Are you sure it’s not in the log (the detailed version)? The custom components I use always get reported in the log. Unless you mean it didn’t report that it found, but did not load, your custom component. In my case, where I use a customized version of the MQTT climate component, I assigned it a completely different platform name (my_mqtt instead of mqtt). This ensures that Home Assistant cannot substitute the original mqtt version. If it can’t find the my_mqtt component, it fails with a boatload of error messages.

I believe the sleep statement was added to fix a specific type of light, for setting RGB (I think). However, the test suite may not have checked whether the modification affected control of a group of lights.

I can confirm that the flux_led component script is causing the popcorn issue: a sleep statement was introduced, and it is the culprit. Previously I had a custom component using the old flux_led.py, which did not have a sleep statement and which I had altered significantly to work correctly (the old flux_led.py was really buggy). However, there was a breaking change for custom components that I wasn’t aware of: the way you set them up is now completely different.

To fix I made a directory:

config/custom_components/flux_led (config is the same directory your configuration.yaml file lives in)

I added the following files (tagged as code or the underscores don’t show up):

__init__.py (this was just a blank file - it is still required to be there though)
light.py (this was my previous flux_led.py just renamed)
manifest.json (taken from the "home assistant/components/flux_led" folder)

If anyone needs it the manifest.json looks like this:

{
  "domain": "flux_led",
  "name": "Flux led",
  "documentation": "https://www.home-assistant.io/components/flux_led",
  "requirements": [
    "flux_led==0.22"
  ],
  "dependencies": [],
  "codeowners": []
}

Since the latest flux_led code no longer seems buggy and has other updates, I then transferred the light.py file from “home assistant/components/flux_led/light.py” to my config/custom_components/flux_led folder and commented out the sleep line. The popcorn effect is gone.
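In shell form, the layout described above looks like this (CONFIG stands in for wherever your configuration.yaml lives; light.py is created empty here as a placeholder for your patched copy):

```shell
# Recreate the custom component layout; adjust CONFIG to your setup.
CONFIG=./config
mkdir -p "$CONFIG/custom_components/flux_led"

# Blank __init__.py -- empty, but required for the component to load.
touch "$CONFIG/custom_components/flux_led/__init__.py"

# light.py would be your patched flux_led light platform (placeholder here).
touch "$CONFIG/custom_components/flux_led/light.py"

# manifest.json, as taken from home assistant/components/flux_led:
cat > "$CONFIG/custom_components/flux_led/manifest.json" <<'EOF'
{
  "domain": "flux_led",
  "name": "Flux led",
  "documentation": "https://www.home-assistant.io/components/flux_led",
  "requirements": [
    "flux_led==0.22"
  ],
  "dependencies": [],
  "codeowners": []
}
EOF
```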

The reason they added this was just so the frontend would update quickly: without the sleep line you have to wait for the frontend to catch up, but with it you get the popcorn effect. I wonder if there is a better way to update the frontend without a sleep statement? For me, having a group of lights turn on immediately for scenes etc. trumps having the frontend update smoothly.

I guess one solution is to add an argument letting the user skip the sleep statement for the purpose of turning on groups.
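Such an option might look like this. The option name `disable_state_delay` and everything else here is invented for illustration; it is not part of the real flux_led component:

```python
import time

def turn_on(send_command, disable_state_delay=False, state_delay=2.0):
    """Hypothetical turn_on with an opt-out for the post-command sleep."""
    send_command()  # fire the turn-on packet immediately
    if not disable_state_delay:
        # Default behaviour: block so the frontend's next poll sees the
        # new state -- at the cost of the popcorn effect for groups.
        time.sleep(state_delay)

# A group turn-on could then skip the delay entirely:
sent = []
for light in ("lamp_1", "lamp_2", "lamp_3"):
    turn_on(lambda l=light: sent.append(l), disable_state_delay=True)
print(sent)  # all three commands go out back to back
```

With the flag exposed in the platform config, users could pick fast frontend updates or simultaneous group turn-on per installation.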

Just to satisfy my own curiosity, originally you reported 0.90 worked but 0.91 had the bug. However, after reviewing their Release Notes, I found the flux_led component changed in 0.90 so the bug would’ve been evident in 0.90, 0.91, etc. How was it that you found it worked in 0.90? Was it because your custom component was loaded successfully in that version but failed to load in 0.91?

Yes, because “the great migration” (which I just noticed you made a post about before, so you’ve had to deal with this too lol) happened in 0.91.

So yes before 0.91 my custom component would have loaded.

I’m pretty sure there is a better way to deal with this. Maybe this?

https://developers.home-assistant.io/docs/en/entity_index.html

Yeah, that’s why I suggested creating your own platform name (like my_flux_led). This way Home Assistant can’t ‘fall back’ to using flux_led because your config entries are explicitly requesting to use the my_flux_led platform. In addition, when perusing the config file, it serves as a reminder that your lights are using your custom component as opposed to the stock component.

For example, my climate entity uses a custom component derived from Home Assistant’s climate component for the MQTT platform. The config entry begins like this:

climate:
 - platform: my_mqtt
   name: "Thermostat"

The component is located in custom_components/my_mqtt/climate.py. For Home Assistant, this is now a different platform from mqtt. Should it ever fail to locate my custom component, it will not substitute the mqtt platform but will report an error.