That is probably related to multicast DNS (mDNS), so some component you use relies on it for whatever reason. On my system only the avahi-daemon is using that port, so it isn’t there by default (with discovery disabled).
I have the same issue in my Home Assistant Docker installation. The ha process binds itself to 0.0.0.0:5353 multiple times, and this leads to UDP4 socket memory issues.
@danielperna84: are you running Home Assistant in Docker? If you run the `netstat -ulpna` command, are you not seeing this issue?
Well, it’s been a while since my last post. Now I do see Python listening on 5353. The traffic flowing through that port is, as stated above, multicast DNS. In my case it’s mainly Google Cast devices, but also other devices announcing whatever they have to announce. So this is fine, I guess, since that’s just how those types of devices communicate.
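For context, a component that speaks mDNS ends up listening there roughly like this. This is a minimal Python sketch of a typical mDNS listener, not Home Assistant’s actual code; only the group and port constants are the standard mDNS ones.

```python
import socket
import struct

MDNS_GROUP = "224.0.0.251"  # standard IPv4 mDNS multicast group
MDNS_PORT = 5353            # standard mDNS port

def open_mdns_socket():
    """Bind a UDP socket the way mDNS listeners typically do:
    0.0.0.0:5353 with address reuse, joined to the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # SO_REUSEADDR lets several mDNS listeners share the port, which is
    # why netstat can show the same process bound to 5353 multiple times.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", MDNS_PORT))
    mreq = struct.pack("4s4s",
                       socket.inet_aton(MDNS_GROUP),
                       socket.inet_aton("0.0.0.0"))
    try:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    except OSError:
        pass  # joining the group can fail in sandboxed/container setups
    return sock
```

Every device on the segment multicasts its announcements to that group, which is why the port sees so much traffic even when you aren’t doing anything.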
Yep, sorry for bumping such an old post, but with all this time staying at home and a long weekend ahead, I decided to try and tackle the issue.
My problem is that if I disable net=host mode the problem goes away, but I lose all those nifty features like PS4 discovery (it doesn’t work even if I expose the right ports) and I can’t turn on my LG TV.
And if I enable it, the UDP4 socket memory fills up pretty quickly and I start getting notifications from Netdata about the issue. That is also how I discovered the problem.
There was this issue with some interesting findings by other users, but it was closed and no further updates on the matter were posted. I think the problem affects many users; most probably don’t even notice because they don’t have a monitoring system watching their network status (as Netdata does for me).
Yes, I agree that the problem probably exists for others but goes unnoticed. The trouble is that it doesn’t cause issues severe enough to motivate anyone to investigate a proper solution. So I guess there’s no choice but to wait until somebody does get motivated.
I am having precisely the same issue. I only became aware of it because Netdata kept throwing UDP errors. Using netstat I saw that a Python process was using 5353 heavily. Shutting down the HA Docker container cleared the buffer. I gradually increased the buffer size, and after restarting the container HA slowly fills it up to net.core.rmem_max.
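For anyone who wants to check this without Netdata: the counter Netdata alerts on is the Udp `RcvbufErrors` field in `/proc/net/snmp`, and the ceiling the buffer fills up to is `net.core.rmem_max` under `/proc/sys`. A small Python sketch (the file paths are standard Linux; the helper names are mine):

```python
from pathlib import Path

def udp_rcvbuf_errors(snmp_text=None):
    """Return the kernel's Udp RcvbufErrors counter.

    Parses /proc/net/snmp (or a given snapshot of it). The 'Udp:'
    section is two lines: a header row and a value row in matching
    column order."""
    if snmp_text is None:
        snmp_text = Path("/proc/net/snmp").read_text()
    udp_lines = [l for l in snmp_text.splitlines() if l.startswith("Udp:")]
    header = udp_lines[0].split()[1:]  # drop the 'Udp:' prefix
    values = udp_lines[1].split()[1:]
    return int(dict(zip(header, values))["RcvbufErrors"])

def rmem_max():
    """Current net.core.rmem_max (the cap the receive buffer fills to)."""
    return int(Path("/proc/sys/net/core/rmem_max").read_text())
```

If the counter keeps climbing while HA runs and stops when the container is down, the mDNS traffic is what’s overflowing the receive buffer.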
EDIT: For those stumbling upon this thread, this is supposedly fixed.
Hi,
I still get this issue; Netdata sends me a notification about this error every 6 minutes: “1m ipv4 udp receive buffer errors = 1317 errors”
Any tips on how to get rid of it permanently?
I’m running HA in a Python venv.
EDIT: found this https://github.com/netdata/netdata/issues/6527