I have the same problem, and additionally a reboot of HA took 4 minutes, while a start of the whole VM until the HA website was reachable took 45 seconds. There is a time gap of 220 seconds where nothing happens. This is on a NUC10i7.
Turns out my issue with deCONZ was due to an outdated version of the add-on; I was on 5.3.2. It took a few add-on reloads from the CLI before I could download 6.6.5, which solved that part of the issue for me.
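In case it helps, roughly what I ran from the HA CLI looked like this (core_deconz is just my guess at the add-on slug, so check the output of ha addons for the exact name on your install):
ha addons reload                # refresh the add-on store so the new version shows up
ha addons info core_deconz      # confirm the installed vs. available version
ha addons update core_deconz    # pull the latest add-on version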
I still have the bind failed message at the login prompt.
fwiw I'm getting bind failed on my VirtualBox install, no deCONZ; the system loads up right away though, so none of the issues here except the error messaging.
This is definitely not related to deCONZ / Conbee or USB.
I'm getting the same error using ESXi and a very clean HassOS… no deCONZ / Conbee / USB attached…
Looks like it's something with HassOS and virtualization (following @mchangsp, it also happened with Proxmox).
I captured this error that might lead somewhere:
21-02-09 11:05:06 ERROR (MainThread) [asyncio] Task exception was never retrieved
future: <Task finished name='Task-2415' coro=<_websocket_forward() done, defined at /usr/src/supervisor/supervisor/api/ingress.py:277> exception=ConnectionResetError('Cannot write to closing transport')>
Traceback (most recent call last):
File "/usr/src/supervisor/supervisor/api/ingress.py", line 284, in _websocket_forward
await ws_to.send_bytes(msg.data)
File "/usr/local/lib/python3.8/site-packages/aiohttp/web_ws.py", line 307, in send_bytes
await self._writer.send(data, binary=True, compress=compress)
File "/usr/local/lib/python3.8/site-packages/aiohttp/http_websocket.py", line 685, in send
await self._send_frame(message, WSMsgType.BINARY, compress)
File "/usr/local/lib/python3.8/site-packages/aiohttp/http_websocket.py", line 650, in _send_frame
self._write(header + message)
File "/usr/local/lib/python3.8/site-packages/aiohttp/http_websocket.py", line 660, in _write
raise ConnectionResetError("Cannot write to closing transport")
ConnectionResetError: Cannot write to closing transport
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] udev.sh: executing...
[22:33:23] INFO: Setup udev backend inside container
[22:33:23] INFO: Update udev information
[cont-init.d] udev.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
I started getting this error yesterday. I'm on a fresh HA install via Proxmox following whiskerz' guide, so neither deCONZ nor anything else has been installed. Seems like Docker/ESXi/Proxmox might be a common denominator…
I applied the update and it took about 20 minutes to come back. This is the same issue I have on a normal reboot: it is doing a rebuild/verification in the background.
I get the same slowness when looking at temperature data. When I looked previously I had a 3 GB data file, the one that holds all the sensor data. I really need to look at clearing that file out. I'm not normally worried about my history until it gets rebuilt.
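If anyone else wants to clear a bloated recorder database, this is a rough sketch of what I'd do from the SSH/console shell (it assumes the default home-assistant_v2.db file under /config; moving it means Home Assistant starts over with an empty database, so history is gone):
ls -lh /config/home-assistant_v2.db                                 # check how big the recorder database has grown
ha core stop                                                        # stop Home Assistant before touching the database
mv /config/home-assistant_v2.db /config/home-assistant_v2.db.bak    # set it aside rather than deleting it outright
ha core start                                                       # a fresh, empty database is created on startup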
I wish I could remember where to set/reset the console logging to show all items on the screen. Once the supervisor loads, it presents the login info and stops showing POST information.
I have/had the same issue, running a VirtualBox VM on Windows 10. Restarting took forever, with the error already mentioned above.
A few minutes ago I changed the USB port in the VirtualBox VM settings from USB 2.0 to USB 1.1, and it seems to start much faster, as in a normal timeframe, although I still see the "udevd address in use" error message.
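If you'd rather do it from the command line, something like this should be the equivalent with VBoxManage (hassio is just a placeholder for your VM name, and the VM needs to be powered off first):
VBoxManage modifyvm "hassio" --usbehci off    # turn off the USB 2.0 (EHCI) controller
VBoxManage modifyvm "hassio" --usb on         # leave the USB 1.1 (OHCI) controller enabled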
I tried digging around using the CLI, and found a lot of multicast errors in the supervisor logs. Could this be the cause, or just a symptom? I also noticed my HA VM is not getting an IP address via DHCP.
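For anyone who wants to check the same things, the HA CLI commands would look roughly like this:
ha supervisor logs    # dump the Supervisor log, where the multicast errors showed up
ha network info       # show the interface configuration, including whether DHCP handed out an address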
I don't believe it is a HassOS issue. I use the supervisor on Ubuntu 20.04 in VirtualBox on macOS, and also on Debian in Parallels on a Mac M1. Both show this error in the VM.