Something on your host is already using a required network port. Run netstat -tulpen as root on the host to see which process has bound to that port.
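For example, to check who holds the observer port this thread is about (substitute whichever port the error names; ss is a drop-in alternative on hosts without net-tools):

sudo netstat -tulpen | grep 4357
# or, if netstat is not installed:
sudo ss -tulpen | grep 4357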
Indeed. And not a single docker-proxy process, so I’ll assume that you did run netstat while HA was stopped. Which makes me think that HA OS is causing trouble during startup, but as I don’t use HA OS, I am unable to provide further help.
I was having these issues as well using HassOS 5.12 in a Proxmox VM and running the latest supervisor (2021.03.6) and core (2021.4.3).
I did the following and it seems to have resolved this error:
stopped HA - Configuration>Server Controls>Server Management>stop
logged in through SSH to my HA VM on the Proxmox server and ran the following commands:
sudo netstat -tnlp | grep :4357 - it showed a tcp6 process listening on this port
sudo fuser -k 4357/tcp - killed the process that was listening
started HA by running ‘ha core start’, then ran ‘ha supervisor repair’ from the SSH terminal; once that had finished, I restarted the HA server - Configuration>Server Controls>Server Management>restart
when the server came back up, the errors in the original post were gone from the supervisor logs (Supervisor>System controls>Log Provider: supervisor), and so far they have not come back after several server restarts.
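For anyone who just wants the sequence in one place, this is roughly what I ran (a sketch assuming SSH access to the host and the ha CLI; stop HA from the UI first):

# with HA stopped via Configuration>Server Controls:
sudo netstat -tnlp | grep :4357   # confirm something still holds the port
sudo fuser -k 4357/tcp            # kill whatever is listening
ha core start
ha supervisor repair              # this can take a long time
# then restart HA from Configuration>Server Controls>Server Management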
Thanks for shedding some light. Is it still working for you?
In my case (RPi), the command sudo netstat -tnlp | grep :4357 gives this output:
tcp6 0 0 :::4357 :::* LISTEN -
So something is definitely listening (although there is no PID).
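As far as I understand it, netstat prints - in the PID column when it can't map the socket to a process, e.g. because the socket belongs to a process in another network namespace (a container) or the tool lacks privileges. Two other ways to look for the owner, assuming ss and lsof are installed:

sudo ss -tnlp sport = :4357   # ss can sometimes resolve owners netstat cannot
sudo lsof -i :4357            # lists any process holding the port open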
For fuser, when I run it, I get this error:
~ $ fuser -k tcp 4357
fuser: can't stat 'tcp': No such file or directory
After some digging I found that this format gives no error, but the process that is listening still doesn't get killed:
~ $ sudo fuser -k 4357/tcp
Sorry, can’t believe I missed the slash and the sudo part of the command.
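For reference, fuser wants the port first and the protocol as a suffix; as far as I know the -n option is equivalent, and -v lets you inspect before killing:

sudo fuser -v 4357/tcp        # show the owner without killing it
sudo fuser -k 4357/tcp        # kill the owner of TCP port 4357
sudo fuser -k -n tcp 4357     # same as above, with an explicit namespace flag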
This happened on my server as well.
After 20 minutes or so of running ‘sudo netstat -tnlp | grep :4357’ and still seeing that something was listening on that port (with no PID), I decided to go ahead anyway and run ‘ha core start’ and then ‘ha supervisor repair’.
The repair took ages to run, so I just left it and came back after 45 minutes or so. The server was running with no errors in the Supervisor log.
Since repairing my HA supervisor I have rebooted a lot with updates etc., and these errors have not returned.
I’m not sure why the errors have come back on your server. The only thing I can think of is: did you stop the HA server before you killed the process?
It also happens to me periodically when I update the Observer; I fix it with a restart of Docker: “sudo systemctl restart docker”. However, I have now disabled IPv6 in Docker and on my Debian host, so let’s see if it happens again.
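In case it helps anyone, this is roughly how I turned IPv6 off (a sketch for a Debian host with systemd; adjust for your setup):

# disable IPv6 kernel-wide on the host
echo 'net.ipv6.conf.all.disable_ipv6 = 1' | sudo tee /etc/sysctl.d/99-disable-ipv6.conf
sudo sysctl --system

# Docker's daemon.json should not enable IPv6 ("ipv6" defaults to false):
# /etc/docker/daemon.json -> { "ipv6": false }

sudo systemctl restart docker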