Issue somewhere within reverse proxy, trusted_proxies, and DNS

I have my network set up to route all DNS queries (anything on port 53 from any local IP address on any subnet) to my AdGuard instance. When I’m working on that server, I disable the rule temporarily so that DNS requests go through the normal DHCP-assigned DNS server.

However, I noticed that when I disable that rule, I can no longer access my HA instance from within my network using the external URL; I get a connection refused error (ERR_CONNECTION_REFUSED). Instead, I have to use the local IP address to access HA.

I am using NGINX (actually the SWAG image) as a reverse proxy, and I have many services I am reverse-proxying. The HA instance is the only one which can’t be accessed locally via the external URL when I disable my DNS rule. My suspicion is that it has something to do with the trusted_proxies setting, but I have it set up where I don’t think there should be any issue:

  use_x_forwarded_for: true
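For reference, the relevant block in Home Assistant's configuration.yaml normally looks something like the following — the addresses here are placeholders, and would be the actual IP or subnet of the NGINX/SWAG container in a real setup:

```yaml
# configuration.yaml — placeholder addresses; substitute your proxy's real IP/subnet
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 192.168.1.10      # the reverse proxy (SWAG) container
    - 172.30.33.0/24    # or the Docker network it lives on
```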

My network setup is: [network diagram omitted]

I couldn’t see anything in my HA logs that said it was refusing any connection (but maybe I don’t know where to look?). There’s also nothing in my NGINX logs showing any issues.

I can access all the proxied services, including HA, from outside my network with no issues, whether or not the DNS forwarding rule is enabled or disabled (as would be expected).

Making progress but haven’t quite figured out the solution just yet. Apparently this is a hairpin NAT issue, and has nothing to do with Home Assistant.

I have confirmed that, when I am outside my network and connecting into it via VPN, my DNS requests are seen and resolved by my AdGuard DNS server, and those logs show my device’s internal IP address, as expected and desired. However, my device connects to the external IP address of my server, and then the source address gets rewritten so that my servers see the connection coming from my device’s external IP address instead of its internal one. Probably better explained by the Reddit post:

For the sake of explanation let’s say (using placeholder addresses) the NGINX server is at 192.168.1.10, the VPN client is at 10.8.0.2, and the public IP is 203.0.113.5. So the packets come in from 10.8.0.2 (VPN) addressed to 203.0.113.5 (external IP). The destination has to be changed to 192.168.1.10 so they actually go to the NGINX server. But the problem is that when the NGINX server sends reply packets, it can’t just send the reply directly to 10.8.0.2, because the client on the VPN is expecting a packet from 203.0.113.5 (the address it connected to), not 192.168.1.10. This is where hairpin NAT comes in - it detects that it has redirected a packet that originated “inside” addressed to the outside address. To make sure that the return packet can be properly translated, it also rewrites the source address to the outside address - that way it guarantees that it will see the reply packets so it can change the address and send them onwards.
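On a Linux-based router, the hairpin behavior described above boils down to a DNAT rule plus a source rewrite. A rough sketch, using placeholder addresses (203.0.113.5 for the public IP, 192.168.1.10 for NGINX, 192.168.1.0/24 for the LAN):

```sh
# Port-forward: traffic hitting the public IP on 443 is redirected to NGINX
iptables -t nat -A PREROUTING -d 203.0.113.5 -p tcp --dport 443 \
  -j DNAT --to-destination 192.168.1.10

# Hairpin/SNAT: for internal clients hitting the forwarded port, also rewrite
# the source address so NGINX replies back through the router rather than
# directly to the client
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.10 \
  -p tcp --dport 443 -j MASQUERADE
```

Most consumer routers that support hairpin NAT (sometimes called "NAT loopback") do the equivalent of this internally when you enable it.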

The two solutions offered are:

  1. Make my DNS server resolve those names to internal addresses instead. I think in AdGuard that would be done with a DNS rewrite. The downside that I see is that any time I create a new proxy host in NGINX proxy manager, I’d also have to create a matching DNS rewrite in AdGuard. That seems less than ideal.
  2. Update the access control lists to allow my external IP. This would be in NGINX, where I have some sites only allowing local access, and then in my HA config I’d have to add my external IP to the trusted_proxies list. And then change it in both places every time my external IP changes.
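For option 2, the NGINX side would be an allow/deny list on the sites restricted to local access, along these lines (the addresses are placeholders, and the external IP entry is exactly the part that would need updating whenever it changes):

```nginx
# Inside the relevant server/location block of the proxy config
allow 192.168.1.0/24;   # LAN clients
allow 203.0.113.5;      # current external IP (must be kept up to date)
deny  all;
```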

I’m suspecting others have either solved this some other way, or don’t have this problem due to some difference in their setup. I’d be interested to know if others have this problem or not.


I got this figured out, if anyone else runs into this.

The short story is that if you want to use the same website (FQDN) to access stuff on your network, whether you are inside or outside your network, then you have two good options:

  1. Run your own DNS server (e.g. AdGuard Home or Pi-hole) and do a DNS rewrite so that your domain resolves to the internal IP of your reverse proxy. This only works if your reverse proxy is on ports 80 and 443, since DNS is oblivious to ports.
  2. Configure your router to do hairpin NAT, back to your reverse proxy. You should be able to translate ports with this option if you want.
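For option 1, the rewrite is easiest to set up in the AdGuard Home UI (Filters → DNS rewrites), but it ends up in AdGuardHome.yaml as something like the following — note the exact location in the file varies by version, and the domain and IP below are placeholders:

```yaml
# AdGuardHome.yaml fragment (location varies by version; the UI under
# Filters → DNS rewrites edits the same list)
filtering:
  rewrites:
    - domain: "*.example.com"        # wildcard covering all proxied subdomains
      answer: 192.168.1.10           # internal IP of the reverse proxy
```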

I opted for option 1, and had to set up my reverse proxy docker container on a macvlan so that I could expose ports 80 and 443, since they were already occupied on the docker host. After doing that, creating the wildcard (*) DNS rewrite within AdGuard took no more than 15 seconds. No extra work is required whenever I add additional subdomains.
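Putting the proxy container on a macvlan so it owns ports 80/443 on its own LAN address can be sketched in docker-compose roughly like this — the interface name, subnet, and IP are placeholders for my particular setup:

```yaml
# docker-compose.yml sketch — adjust the parent interface, subnet, and IP
services:
  swag:
    image: lscr.io/linuxserver/swag
    networks:
      lan:
        ipv4_address: 192.168.1.10   # proxy gets its own LAN IP, so 80/443 are free

networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0                   # host NIC attached to the LAN
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
```

One macvlan caveat worth knowing: by default the Docker host itself can’t reach a macvlan container directly, which matters if anything on the host needs to talk to the proxy.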

Now when I use the external URL from inside my network, or when connected to my network via VPN, my request resolves AND never leaves my network.