Router to do NAT hairpinning

OK, but my DNS is my Pi-hole machine.
In the image I’ve put all my internal machines plus myha.synology.me.

And the client is not at all happy with the presented HTTPS certificate.

Going one step further, I wrote an nginx config to rewrite what I thought was needed. I can access the login page, but the 2FA is killing my effort. I’m mad! I get the code window, I enter the code, and then this URL
https://myha.synology.me/lovelace?auth_callback=1&code=… gives me this result:
[screenshot of the error page]

nginx config

server {
    listen 443 ssl;
    server_name myha.synology.me;

    ssl_certificate /etc/letsencrypt/live/myha.synology.me/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myha.synology.me/privkey.pem;

    location / {
        # Home Assistant itself serves SSL on 192.168.1.2:8123
        proxy_pass https://192.168.1.2:8123;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Disable buffering
        proxy_buffering off;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

Forgot the websockets.

    location /api/websocket {
        # Docker-internal resolver and upstream variables (not actually used
        # by the literal proxy_pass below; adjust to your own HA address)
        resolver 127.0.0.11 valid=30s;
        set $upstream_app hatest;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass http://10.0.0.52:8123;

        proxy_set_header Host $host;

        # WebSocket upgrade headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
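
For anyone landing here later, here is a sketch of the two snippets merged into a single server block. It assumes, as above, that Home Assistant itself serves SSL on 192.168.1.2:8123 (nginx does not verify the upstream certificate by default, so proxying to the IP works). The map block, the X-Forwarded-For/X-Forwarded-Proto headers (which go with use_x_forwarded_for and trusted_proxies in HA’s http: section) and the WebSocket headers on location / are additions to the snippets above, so treat it as a starting point rather than a verbatim copy of the working config:

# In the http {} context, so the connection is only upgraded when the
# client actually requests a WebSocket:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name myha.synology.me;

    ssl_certificate /etc/letsencrypt/live/myha.synology.me/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myha.synology.me/privkey.pem;

    # A single location is enough; /api/websocket is covered by the
    # Upgrade/Connection headers below.
    location / {
        proxy_pass https://192.168.1.2:8123;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (frontend, login flow)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        proxy_buffering off;
    }
}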

You just saved my life!
I’ll mark your post as the solution, but not without a huge thank you to @123, @jerrm and @aceindy for a software solution, and to @complex1 because I’ll probably go for a MikroTik router as my hardware solution, with a lot of reading to do thanks to @123 again.

Thank you everyone for your support in my NAT journey.

Trying to keep things vanilla and reproducible, I just tested the DuckDNS add-on Let’s Encrypt certs on three different HA installs at three different locations/LANs, across at least four browsers, on Linux and Windows.

A local DNS fixup worked with all of them: no browser complaints, just as expected.

I don’t know why the DNS option doesn’t work for you. Something else is going on.


I have no idea where you’ve gone wrong, but I assure you that it does, in fact, work. I’ve seen me do it.


Does your DNS point to the HA server or to NGINX (the proxy)?

Since the cert comes from the proxy, your DNS must point to the proxy for the browser to be happy and receive the cert.

HA and nginx are on the same machine. The nginx config was failing without telling me; in the end, it was not correctly rewriting the traffic.

Now it does, with the help of ChatGPT (who knew?) for the port 443 part and @francisp for the websockets.

But I don’t like this DNS/proxy solution; I’d prefer NAT loopback. That’s how it will be once I have a proper router instead of this locked ISP box.

Why? The result and the route are the same, I expect.

Local DNS is better as well, since local DNS may allow you to add records for external domains.

Any decent router/firewall exposing an internal device to the internet does proper NAT and reverse NAT, so I’ll have no use for an additional DNS + proxy. The fewer devices and less software, the better, in my opinion.

I mean, if I can just define the port forwarding and nothing else, it is much easier than maintaining a local DNS and a reverse proxy.

I have no idea why you think you need a proxy to use local DNS. You do not.


Without nginx, 192.168.1.2 replying with a certificate for myha.synology.me triggers an alert.

Then you did something wrong.

For whoever is interested (and I’ll be happy to help anyone who wants to do the same), here is the nginx solution:
[diagram: internal browser → local DNS → nginx → HA; external browser → router port forwarding → HA]

Compared to a NAT loopback (or NAT hairpinning)
[diagram: internal and external browsers → router port forwarding → HA, with the internal browser using NAT loopback]

Attitude aside, there were several people who tried to tell you. But instead of providing more detail about your configuration so that we could actually troubleshoot, you became argumentative, convinced that your way was the right way and that we were all wrong.

Both of those assertions are false.

If you figured out how to hack together a way to make it work, that’s fine, I’m glad you’re happy.

But don’t think that anyone should take their education on the topic from you…

Don’t make me laugh; I’ve described my configuration multiple times (I count five, in addition to the initial post where I gave all the details).
I’ve also explained multiple times why the certificate was not working with a local address.

You keep telling me again and again that I’m doing everything wrong, without giving me the reasons or a way to solve my certificate issue.
That did not help at all.

At least some people explained what was wrong with my initial thinking, with links to resources to educate me, and told me that it was not enough to point the DNS at HA, but that something was needed in between to make SSL, HTTPS and websockets work.

If you’re sure that my configuration is useless, or at least unnecessary, after looking at my drawing and all my explanations, then feel free to write a complete description of the minimum requirements and setup to achieve what I’m asking for in my initial post: using one URL everywhere, inside and outside, over HTTPS. I’ll be more than happy to give it a try and recognize that you were right.

Sorry to say, but your drawing is very confusing…


Sorry to hear that.
It is the result of this long thread, and I understand that by itself it is not that clear.

The first drawing shows that an internal browser queries the local DNS, which points it at nginx, which in turn proxies the requests to HA (doing the necessary rewriting); a browser on the internet, on the other hand, reaches HA directly through the router and its port forwarding.

The second drawing shows that with NAT loopback there is no need for the local DNS/nginx configuration: everyone, internal and external, uses the router’s port forwarding, and the internal browser relies on the so-called NAT loopback (which is not always possible, depending on the router).

This line of thinking is why you have not received the help you thought you would.

You THINK you have done those things, when in point of fact, you have not.

No, nothing is needed aside from properly configuring a certificate, and then properly configuring DNS, I assure you. As I said, I’ve seen me do it - literally thousands of times. Quite possibly, tens of thousands of times.

But that’s fine - you know better, and would rather argue than step back and admit that you were on the wrong track.

Best of luck to you, it’s certainly not going to get any easier with that attitude.

Before I used hairpin/loopback NAT, I used a DNS solution on my EdgeRouter X.

The only difference between the DNS solution and hairpin loopback is that on my local network I needed to add :8123 to my local address…

How?
On my router:

  • DNS configured to use my domain name synology.me as the FQDN locally
  • DHCP reservation for my HA at 192.168.1.2

So when querying myha.synology.me locally, it resolves to 192.168.1.2, but since HA is listening on 8123 and there is nothing between my local client and HA, I need to use https://myha.synology.me:8123.

Now, on the outside, I forward port 443 to 192.168.1.2:8123, meaning I can just use https://myha.synology.me/ without the :8123.

And just for the fun of it, I decided to get rid of the :8123, so I implemented hairpin/loopback NAT…
But I’m still not sure if I’m going to keep it, as it created some issues when using a VPN and I also feel it is slightly slower :thinking: (and if I decide to go back, I’ll probably stop forwarding port 443 to 8123 and just stick with https://myha.synology.me:8123 for both internal and external :face_with_hand_over_mouth:)

But I think I see where yours goes wrong, as you stated:

meaning your DNS resolves your external DNS name, and not your local FQDN :wink:

On my local network, all DHCP clients automatically get the DNS suffix synology.me.
And that domain happens to be the same as the one registered externally 😁
So basically, my local DNS and external DNS use the same name, but return totally different IP addresses :thinking:
No nginx, no Pi-hole, no hairpin loopback, and it works fine…