Ah, yeah I have the same PTR issue. The i/o timeouts on startup never resolved until I created a manual iptables rule, but I think it’s essentially the same issue. The DNS provider is 1) dropping the IPv6 query, causing the i/o timeout (notice the timeouts are all AAAA requests, which HA is prioritizing); and 2) sending the PTR requests out to the upstream DNS provider, which is returning NXDOMAIN (which is accurate).
It seems like your DNS server (assuming it’s provided by your internet provider) is responding in a way that Alpine doesn’t like. Additionally, they may be re-routing traffic destined for external DNS servers (i.e. attempts to reach 8.8.8.8 or 1.1.1.1) to their own DNS server, which might explain why some people have been able to fix their problems by changing providers (Timeout while contacting DNS servers).
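If you want to confirm what your resolver is actually doing, a couple of quick checks from any machine on the LAN can help. The hostname and resolver IPs here are just examples – swap in your own:

```
# Does a public resolver answer the AAAA query while your ISP's resolver times out?
dig AAAA registry-1.docker.io @1.1.1.1 +time=2 +tries=1
dig AAAA registry-1.docker.io @192.168.1.1 +time=2 +tries=1   # your local/ISP resolver

# PTR lookup for a container address; NXDOMAIN from upstream is the "accurate" answer
dig -x 172.30.32.1 @192.168.1.1
```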
I haven’t dug into the PTR requests yet, but that’s my working theory anyhow. The HA team seems to have taken the stance that the way Alpine musl handles AAAA records is correct, although that’s debatable. I don’t know much about Unifi, so I don’t know if you can run your own dnsmasq/Unbound instances, but I’ll give you a brief overview of my workflow, with rough configuration sketches after the list.
- IPv6 disabled on router
- Unbound has IPv6 enabled (do-ip6: yes, prefer-ip6: no) to allow Alpine to get its AAAA requests
- dnsmasq serves DNS requests via its cache; if a request is not cached, it forwards the request upstream to Unbound
- All inbound and outbound IPv6 routes are dropped by iptables rules
- All requests from containers on the 172.30.32.0/24 subnet are NATed so the responses get routed back to the proper containers
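Roughly, the relevant pieces look like this. Treat it as a sketch rather than copy-paste config: the port, interface names, and file paths are assumptions that will differ on your setup.

```
# unbound.conf (sketch)
server:
    interface: 127.0.0.1@5335
    access-control: 127.0.0.0/8 allow
    do-ip6: yes       # answer AAAA queries so Alpine/musl doesn't hit the i/o timeout
    prefer-ip6: no    # but reach upstream servers over IPv4
```

```
# dnsmasq.conf (sketch)
no-resolv                # ignore /etc/resolv.conf
server=127.0.0.1#5335    # cache misses are forwarded to the local Unbound instance
cache-size=1000
```

```
# Firewall side (sketch). The IPv6 drops are ip6tables policies, not iptables rules,
# and "eth0" is an assumption - use whatever your LAN-facing interface is called.
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -P OUTPUT DROP

# NAT for the HA container subnet so replies come back through the host
# instead of being addressed to a 172.30.32.x address the LAN can't reach.
iptables -t nat -A POSTROUTING -s 172.30.32.0/24 -o eth0 -j MASQUERADE
```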
My other working theory is that HA DNS requests from containers were previously translated within HA; so, when a request went out, it came from the main HA instance (i.e. 192.168.1.x, or whatever your local IP for HA is).
So, the container would send the request to HA’s DNS, which would then forward it upstream. The response would then go back to HA, which would route it back to the container.
I think the DNS requests are now leaking from the containers if IPv6 is disabled in Home Assistant, so the router sees the request coming directly from the container at 172.30.32.x instead of from HA. Thus, the response is sent to 172.30.32.x (an address that doesn’t exist on the local network), so Home Assistant never gets the response.
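One hypothetical way to check this from the HA host (again, the interface name is an assumption):

```
# If queries leave with a 172.30.32.x source address, they're escaping un-NATed
# and the replies have nowhere on the LAN to go back to.
tcpdump -ni eth0 'udp port 53'
```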
EDIT: NAT is network address translation. So, the router sees the request and translates it before sending it to the DNS provider, so that the response will come back to the router instead of to where the packet originated. The router then receives the response and sends it back to where it came from, and HA can then properly route it to the correct container.
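As a rough illustration of what that translation does to a single container DNS query (all addresses are made up; 192.168.1.10 stands in for whichever box does the translating):

```
container query:    src 172.30.32.3   -> dst 1.1.1.1:53
after NAT:          src 192.168.1.10  -> dst 1.1.1.1:53
upstream reply:     src 1.1.1.1:53    -> dst 192.168.1.10
after reverse NAT:  src 1.1.1.1:53    -> dst 172.30.32.3   (back to the container)
```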