All my hue lights randomly becoming unavailable

Since updating to 0.94(.1) all my Hue devices are unavailable. I have removed and re-added the Hue integration, but it doesn’t really change anything. After a restart I reliably have the Hue base station and a Hue motion detector listed under Integrations, but all lamps are and stay unavailable.
I even restored the old hue configuration in configuration.yaml, but that doesn’t change anything.


I’m getting this too. No idea why it’s suddenly started after working great previously. It’s having a very big knock-on effect on the reliability of my Pi as a Hass.io server, as it discovers and undiscovers every few minutes and is savaging my MariaDB.

No idea why, but Hue is stable again. 0.94.3
All devices are recognized after start and work fine now.

Me too! Just magically fixed itself over the weekend without any significant intervention. Must be a server thing then!?

seeing this now (HA 0.94.3):

2019-06-18 13:47:29 ERROR (MainThread) [homeassistant.components.hue.sensor_base] Unable to reach bridge 192.168.1.212 ()

I’ve changed my Zigbee channel in the Hue hub (see https://www2.meethue.com/en-us/support/bridge/connectivity/what-is-a-zigbee-channel-change-in-the-philips-hue-app) and since then all my lights stay available.
My lights previously showed as unavailable (screenshot: “Philips Hue changed to unavailable”); now they all show as available (screenshot omitted).
Maybe some others can try this…


I tried this last night after noticing it was only my upstairs hue lights that were dropping out - seemed stable overnight but same problems this morning with intermittent dropouts.

So I may have found a solution, but it’s very specific to my use case.

Resetting Hue hub, changing channels and repositioning lights all had limited effect but the problems still persisted. I was getting frequent ‘Unavailable’ status updates for only my two upstairs Hue bulbs, and even then the unavailable episodes did not match up in time with each other.

I recall the problems started shortly after I migrated my DB from the SD card to a MariaDB on my Synology NAS but didn’t think the two could be related.

However, since reverting back to the SD card DB all my Hue lights have been rock solid. I think rather than an availability issue, it’s more of a “network stuttering prevents HA from reading last saved state from the DB” issue.

Will continue to monitor and update if it flakes out again, but worth considering if you have a similar setup.
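In case anyone wants to try the same revert, it amounted to taking the MariaDB connection string out of the recorder section so HA falls back to its default SQLite database. Roughly like this (the db_url, credentials, and purge_keep_days below are placeholders, not my actual values):

```yaml
# recorder section of configuration.yaml (placeholder values)
recorder:
  # With db_url commented out, Home Assistant falls back to the default
  # SQLite database (home-assistant_v2.db) on local storage.
  # db_url: mysql://user:password@nas.local:3306/homeassistant?charset=utf8
  purge_keep_days: 7
```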

FWIW, I had persistent problems with the Hue lights being unavailable on my RPi 3.
I just moved to a NUC and the problems disappeared; I haven’t seen this repeat at all in the last 24 hours (and the Hue bridge had been “unreachable” from the Pi for the previous 72 hours).

This also solved my “XXX has taken longer than 10 seconds” errors. I think the RPi, once you get a few integrations installed, just isn’t powerful enough to handle it, and things start timing out.

Hi

I’m also experiencing this problem. But while working on an unrelated problem I noticed the following log lines in my daemon.log. (I run HA on an RPi 3B+ in a venv.)

Feb  2 01:14:25 hass hass[29548]: 2020-02-02 01:14:25 ERROR (Thread-3) [pychromecast.socket_client] [Home group:42228] Error reading from socket.
Feb  2 01:14:25 hass hass[29548]: 2020-02-02 01:14:25 WARNING (Thread-3) [pychromecast.socket_client] [Home group:42228] Error communicating with socket, resetting connection
Feb  2 01:17:33 hass dhcpcd[505]: wlan0: carrier lost
Feb  2 01:17:33 hass dhcpcd[505]: wlan0: deleting address *:*:*:*:8e4f
Feb  2 01:17:33 hass avahi-daemon[356]: Withdrawing address record for *:*:*:*:8e4f on wlan0.
Feb  2 01:17:33 hass avahi-daemon[356]: Leaving mDNS multicast group on interface wlan0.IPv6 with address *:*:*:*:8e4f.
Feb  2 01:17:33 hass avahi-daemon[356]: Interface wlan0.IPv6 no longer relevant for mDNS.
Feb  2 01:17:33 hass avahi-daemon[356]: Withdrawing address record for *.*.*.7 on wlan0.
Feb  2 01:17:33 hass avahi-daemon[356]: Leaving mDNS multicast group on interface wlan0.IPv4 with address *.*.*.7.
Feb  2 01:17:33 hass avahi-daemon[356]: Interface wlan0.IPv4 no longer relevant for mDNS.
Feb  2 01:17:33 hass dhcpcd[505]: wlan0: deleting route to *.*.*.0/24
Feb  2 01:17:33 hass dhcpcd[505]: wlan0: deleting default route via *.*.*.8
Feb  2 01:17:33 hass dhcpcd[505]: wlan0: carrier acquired
Feb  2 01:17:33 hass dhcpcd[505]: wlan0: IAID *:*:*:01
Feb  2 01:17:33 hass dhcpcd[505]: wlan0: adding address *:*:*:*:8e4f
Feb  2 01:17:33 hass avahi-daemon[356]: Joining mDNS multicast group on interface wlan0.IPv6 with address *:*:*:*:8e4f.
Feb  2 01:17:33 hass avahi-daemon[356]: New relevant interface wlan0.IPv6 for mDNS.
Feb  2 01:17:33 hass avahi-daemon[356]: Registering new address record for *:*:*:*:8e4f on wlan0.*.
Feb  2 01:17:33 hass dhcpcd[505]: wlan0: soliciting an IPv6 router
Feb  2 01:17:34 hass hass[29548]: 2020-02-02 01:17:34 ERROR (MainThread) [homeassistant.components.hue.sensor_base] Unable to reach bridge *.*.*.4 ()
Feb  2 01:17:34 hass dhcpcd[505]: wlan0: rebinding lease of *.*.*.7
Feb  2 01:17:34 hass dhcpcd[505]: wlan0: probing address *.*.*.7/24
Feb  2 01:17:39 hass dhcpcd[505]: wlan0: leased *.*.*.7 for infinity
Feb  2 01:17:39 hass avahi-daemon[356]: Joining mDNS multicast group on interface wlan0.IPv4 with address *.*.*.7.
Feb  2 01:17:39 hass avahi-daemon[356]: New relevant interface wlan0.IPv4 for mDNS.
Feb  2 01:17:39 hass avahi-daemon[356]: Registering new address record for *.*.*.7 on wlan0.IPv4.
Feb  2 01:17:39 hass dhcpcd[505]: wlan0: adding route to *.*.*.0/24
Feb  2 01:17:39 hass dhcpcd[505]: wlan0: adding default route via *.*.*.8
Feb  2 01:17:40 hass hass[29548]: 2020-02-02 01:17:40 INFO (MainThread) [homeassistant.components.hue.sensor_base] Reconnected to bridge *.*.*.4
Feb  2 01:17:47 hass dhcpcd[505]: wlan0: no IPv6 Routers available

I get this sequence frequently:

  • DHCP bails out (note it shouldn’t, as it gets an infinity lease) and drops the route to the router.
  • HA fails to reach the Hue bridge.
  • DHCP recovers and HA reconnects and goes back to normal.

Note the Chromecast errors are not always there, but as it also becomes unavailable from time to time, I thought it might be related. The instability of the Wi-Fi has been confirmed by Broadcom, and a solution was provided here. I’ve not tested it yet, but I’m going to and will keep you posted.

Or am I missing the point here?


I’m having a lot of issues. I don’t have a managed switch yet, but I think the network port on my Hue bridge is a bit funky as well…

When my Hue lights stop responding, they stop responding to everything: HA, the official Hue app, and even the physical switches and dimmers don’t respond for about 15 seconds. After that everything recovers at the same time and I can switch the lights from all apps and switches again.

I also have a constant ping running (as a binary sensor in HA) and the hub seems to miss a couple of pings every now and then. I have a fairly cheap 16-port TP-Link network switch and quite a lot of traffic going on all the time on my network, so it might also be my switch.
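For reference, the constant ping is just the ping binary sensor platform pointed at the bridge; something along these lines (the host address, name, and scan_interval are placeholders for my setup):

```yaml
# ping binary sensor in configuration.yaml, watching the Hue bridge
# (host, name and scan_interval are placeholders; adjust to your network)
binary_sensor:
  - platform: ping
    host: 192.168.1.xxx
    name: hue_bridge_ping
    count: 2
    scan_interval: 30
```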

So for everyone having this issue: it may also be a hardware fault on the hub side or in your network switch…

I’m going to try a different port on my switch for now and see how it goes, but I’m planning to buy new (Ubiquiti) switches in the next couple of months anyway…


I’ve also run into this issue recently; however, I can pretty much guarantee that the issue is the Hue hub (or possibly HA itself), as the switch mine is connected to is a brand-new UniFi US-8-60W. Looking at the events on my controller, there’s nothing showing DHCP failing, but my Hue hub is using a crap ton of data for some reason.

I’m still seeing my lights going randomly unavailable even with the .106 update. I did notice that it does seem to happen once every 5-10 minutes, so I’m going to look at my other integrations that update every 5 min or so and see if maybe that is the conflict.

Not sure if it’s a fluke or not (fingers crossed), but I removed and re-added the Hue integration. I also added allow_unreachable to my configuration, and so far I’ve gone over an hour without a single unavailable message.

Good to hear!

Please be a bit more specific about the order of the procedure:
Remove integration
Restart? All entities gone from the registry?
Re-add integration: same names/entities and no _2 suffixes?

Any yaml configuration to go with the config/integration?

I ask because I am thinking to do the same but am a bit hesitant…

OK, I will say I was messing around trying a lot of different things. Initially my goal was to pair my hue to the homekit controller, as I wanted to see what I could do with hue zones in the controller. I eventually gave up and went to add it back. So this is generally what I believe I did.

Removed my Hue integration and restarted
Lights and Hue sensors were gone
Spent a while trying to get it to work with HomeKit (a few more restarts), then gave up
Added the integration back; everything came back, but the groups did not. I remembered that this is the default in .106, so I added the yaml:

hue:
  bridges:
    - host: 192.168.1.xxx
      allow_unreachable: true
      allow_hue_groups: true

What was weird was that after a restart I saw two Hue hubs available to be added: one just had Configure as an option, and the other had Configure or Ignore.

Initially, I added the one with Ignore/Configure: it just added the hub, and then after a few seconds the lights showed up too.
Then I went back and added the one with Configure only (no Ignore), and it showed the hub and all the lights as it was being added. That left two integrations, both with the same Hue lights.

It kind of made sense, so I removed both, restarted, added the one with Configure only, and then ignored the other integration.

That’s where I’m at now, and still not a single unavailable light yet

Thanks!

Will follow your flow and see what happens.
Btw, I will add a second hub, so I hope that won’t confuse things…
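From what I understand, the second hub should just be another list entry under bridges:, something like this (both host addresses are placeholders):

```yaml
# two Hue bridges in configuration.yaml (placeholder addresses)
hue:
  bridges:
    - host: 192.168.1.xxx
      allow_unreachable: true
      allow_hue_groups: true
    - host: 192.168.1.yyy
      allow_unreachable: true
      allow_hue_groups: true
```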

Hi

I have been struggling with the “Philips Hue unavailable” problem for quite a while, trying various solutions without luck.

I am no computer wizard, so a lot of my installation/troubleshooting is ‘monkey see, monkey do’. Bear with me if the following is bullshit or uninteresting :slight_smile:

I have 2 HA installations: an old one (not Hass.io) on Lubuntu where Philips Hue works just fine. But I can’t get it to upgrade Python from 3.6 to 3.7, so I am stuck on HA version 1.03.
In December ’19 I installed a new HA server (Ubuntu 18.04) to solve the Python problem. This one became a Hass.io installation, and my Philips Hue stopped working after the first HA reboot.
Tried all the suggestions I could find on the net - no luck.
So I ‘gave up’, and made a fall-back to the old machine (Thank god for virtualization :wink: )
Once in a while I start the new server and try whatever new ideas I have to solve the Hue problem.
Today I tried again, and had luck! What surprised me was the “solution”:
Starting the server (more than a month since the last time), it required an apt update/upgrade of 63 packages, so I ran this.
Started HA in order to upgrade Hass.io to version 106.5, and Hue already worked just fine? Without any upgrade, change to config or whatever! Upgrade of Hass.io done; Hue still works.
Lots of restarts, config updates etc.; so far I haven’t seen a single ‘Unavailable’.

Maybe my problem was somewhere in the ubuntu server? I have no idea, but maybe you guys know what to look for?

I also had problems with Hue on a system that randomly restarted due to a power supply issue. This weekend I upgraded my setup from an RPi 3B+ with SD card to an RPi 4 with SSD, and besides everything running smoother and snappier, my Hue config is now working stably. I run Hass.io (kept up to date with the latest release) and the latest version of Hue sensors via HACS.

I also suddenly had this problem. I was running the same setup for months, but then suddenly it appeared. For me it turned out to be related to my UniFi gear: the Hue bridge crashes when the mDNS reflector is turned on. If you turn it off in the settings, everything is fine again.

It’s just weird to me that it ran fine for so long with this setting, and then suddenly it didn’t. Maybe a firmware update on the hub or the UniFi gear changed something.