You don’t need 2 NGINX instances. You only need ONE.
Sorry, so the additional changes are to the built-in NGINX on the Synology?
The changes need to be made wherever you want to use the proxy.
Think about this.
You have a reverse proxy that is proxying all requests back to a backend endpoint. All the settings need to be correct on your entry point, which is your proxy, for it to pass the data through.
If you stick another reverse proxy behind a reverse proxy, you are still not passing the data through to the client correctly.
Yes, any changes you make need to be made on the instance that is actually performing the reverse proxy.
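To illustrate the idea (a minimal sketch; the hostname and backend address are placeholders, not anyone's actual config):

```nginx
# One proxy at the entry point carries all the settings the backend
# needs; a second proxy behind it adds nothing.
server {
    listen 80;
    server_name ha.example.com;                  # placeholder hostname

    location / {
        proxy_pass http://192.168.1.10:8123;     # placeholder backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket upgrade headers, which the HA frontend needs
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```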
I followed the vi editing in the Synology guide (link above).
So you are saying that I need to make additional changes in another file in NGINX on the server?
Can you point me in the right direction?
The Synology UI for the proxy is rather limited.
If you want to add in this ability to your NGINX instance, yes.
I know nothing about Synology. I have always built my own NAS boxes.
As of the latest Synology DSM update, 6.2 (6.2.1), it’s no longer necessary to modify the Portal.mustache file.
There is a new option in the reverse proxy rules.
I tried to find this menu on my DS212j (DSM 6.2-2xxx), but I can’t find it. Could you please point me in the right direction?
Control Panel --> Application Portal --> Reverse Proxy, then edit the rule and choose Custom Header.
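For reference, my understanding is that the WebSocket entry in that Custom Header dialog amounts to the usual NGINX upgrade headers, something like this (a sketch; the exact config DSM generates may differ):

```nginx
# Roughly what the "WebSocket" custom headers translate to in plain
# NGINX terms (an assumption, not taken from Synology docs).
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```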
If you are on 6.2-23739-2, it’s too old. You must have 6.2.1-23824, so a 6.2 build 2xxx is older than v6.2.1.
You can download the latest here:
https://www.synology.com/en-global/support/download/DS212j#utilities
I don’t think 6.2.1 is available for download in all regions yet via the control panel, so download the latest version and install it manually.
In my setup I had to manually add the “proxy_read_timeout 86400;” line to Portal.mustache to get Home Assistant up and running correctly.
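If it helps anyone else still on the Portal.mustache route, the line goes inside the location block of the template, roughly like this (a sketch: the mustache placeholders are left out, the file path is my understanding of DSM 6.x, and the backend address is made up):

```nginx
# Excerpt only; the real template (/usr/syno/share/nginx/Portal.mustache
# as far as I know) is full of {{...}} template placeholders.
location / {
    proxy_pass http://192.168.1.10:8123;   # placeholder backend
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400;              # the manually added line
}
```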
Thanks. Running 6.2.1-23824 now with the custom header rules! I haven’t added the timeout to get HA running, but we will see what happens.
After the new update 1, I had connection problems with HTTPS connections (even within my LAN). Removing the manually added proxy_read_timeout line (see two posts above) solved this problem. HA seems to be running perfectly now.
You can’t remove the proxy_read_timeout on the Synology, but can you adjust the timeout from 60 to 120?
I’ve edited my original post. The proxy_read_timeout I was referring to was the one in the Portal.mustache file, which I added before the 6.2.1 release.
I have been using my Synology as a reverse proxy for a long time. I would like to run Hassio on a NUC. What about the certificates used in Hassio? They are installed on my Synology instead of on the NUC, so I can use them with my reverse proxy.
Do I have to export the certificates from my Synology and place them in my /ssl/ directory, so I can use them in configuration.yaml (http section) and in the add-ons (fullchain.pem and privkey.pem)?
Could someone guide me here?
I just point it to my Raspberry Pi running Hassio and that works fine.
Hassio doesn’t have to use certificates at all. Let your reverse proxy handle the certs.
Use your reverse proxy.
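In other words, terminate SSL on the proxy and talk plain HTTP to Hass.io behind it. A minimal sketch, assuming a placeholder domain, certificate paths, and backend address:

```nginx
# The certificates live on the proxy; Hass.io behind it runs plain HTTP,
# so no fullchain.pem/privkey.pem are needed in /ssl/ on the HA box.
server {
    listen 443 ssl;
    server_name hassio.example.duckdns.org;      # placeholder domain

    ssl_certificate     /path/to/fullchain.pem;  # wherever DSM stores them
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        proxy_pass http://192.168.1.20:8123;     # HA over plain HTTP
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

With that in place, the http section in configuration.yaml can simply omit ssl_certificate and ssl_key.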
Okay, thanks.
Boom! Thanks, this was my issue (working now); I had been trying for days to get this to work! Anyone else with a Synology NAS, this might be the issue you’re having.
@casperse
Can you explain how you do it? I created a new second certificate (Let’s Encrypt) for my subdomain
hassio.xxxxxx.duckdns.org on the DiskStation and created a reverse proxy rule + websocket entry (see image). But if I go to https://hassio.xxxxxx.duckdns.org I get a “NET::ERR_CERT_COMMON_NAME_INVALID” error from Chrome, and after ignoring it I see the login page of the DiskStation …
EDIT:
Now the connection works. My mistake was an old, wrong port-forwarding rule in my FRITZ!Box for the DS (443 -> 5001), and I always saw the 443 on the overview page. So I changed it to 443 -> 443 and added a new reverse proxy rule in DS: external 443 to 5001 (DS).
BUT I still get the cert error message:
- Certificate: xxxxx.duckdns.org -> for my DiskStation
- Certificate: hassio.xxxxx.duckdns.org -> for my HA on the rPi
EDIT2:
OK, I got it. See picture.