Hass.io + Configurator + NGINX + Let's Encrypt = Problem

I got it working with Node-red too. You have to tell Node-red what the subfolder is in its config file, otherwise you get a “Cannot GET /nodered/” error.
Confusingly, Node-red is on the same virtual IP address as Configurator, so now I’m not quite sure how the containers are configured in hass.io.
Remember that 172.30.32.1 is a VIRTUAL IP address for the virtual network running inside hass.io. You should use this address regardless of your local IP address range (192.168.x.x) - it is separate.
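
For reference, here is a minimal sketch of the matching Nginx block. The 172.30.32.1 address and port 1880 are assumptions - use whatever host and port your Node-red instance actually answers on. Because Node-red is told about the subfolder in its own settings.js (httpNodeRoot), no rewrite is needed and the path can be passed straight through:

location /nodered/ {
  # Sketch only: the upstream address and port are assumptions for the Node-red addon.
  # Node-red already knows it lives under /nodered/, so don't strip the prefix.
  proxy_pass http://172.30.32.1:1880;
  proxy_http_version 1.1;
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";
}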

I have installed Hass.io on a Raspberry Pi 3B+ (Hass.io on Docker).
I would like to use Grafana from Home Assistant without opening a port on the router for Grafana, which is 3000.
Is it possible to reach Grafana in Home Assistant from the internet without forwarding a port on the router?
Please guide me.

Hi Kirpat.
Yes, very probably, but how you do it depends on what Grafana supports.
Some webapps can run from a subfolder (mydomain.duckdns.org/grafana/) and some can’t. If not, you would need to create a new domain so grafana can run from the root (mygrafanadomain.duckdns.org/).
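
A rough sketch of that second option, in case you end up managing your own Nginx config for it - the server name, certificate paths, and upstream address below are placeholders, not anything from a real setup:

server {
  listen 443 ssl;
  server_name mygrafanadomain.duckdns.org;
  # placeholder cert paths - point these at your own Let's Encrypt files
  ssl_certificate /ssl/fullchain.pem;
  ssl_certificate_key /ssl/privkey.pem;

  location / {
    # the webapp runs from the root of this domain, so no subfolder tricks are needed
    proxy_pass http://GRAFANA_HOST:3000;
    proxy_set_header Host $host;
  }
}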

Configurator runs happily from a subfolder or from the root. Node-red needs a configuration setting in its settings.js file (httpNodeRoot) to tell it that it is running from a subfolder.
Some webapps seem to need weird protocol translation and/or forwarding by Nginx and I don’t understand that stuff.

I don’t know what Grafana supports but, either way, you need to know the IP address of your Grafana host and whether the host is serving up HTTP or HTTPS (if it’s HTTP don’t worry - Nginx will encrypt it for you on the Internet).
Is Grafana running on a separate box or in a Hass.io docker container? What URL do you use to access Grafana on your internal network?

Nick.

Grafana is working in Hassio as an add-on, and the Docker IP for Grafana is 172.30.33.4.
I have embedded Grafana in Home Assistant with my local IP.
I can directly access Grafana with https://192.168.1.2:3000, which is my local IP.
But I can’t access it from the internet, i.e. with my domain.
If you guide me I am ready to do some trial and error.
One more thing: I have forwarded port 8123 on my router for Home Assistant.
Port 443 is reserved for my Orange Pi.
Thanks for your reply.

TL;DR - trying to use subfolders with NGINX (my.domain/server) and NOT subdomains (server.my.domain). I always get a 502 Bad Gateway no matter what server/service I’m trying to proxy to, whether it is a separate machine like my router or another add-on on the same machine. Config file below.

I’m having issues with NGINX as well. I will start with the fact that I DO NOT want to use subdomains. I have a limited number of available domain names through my provider, which will not be enough for what I need, and I do not want to switch. I have had a longstanding DDNS through No-IP that I have been using over the years for access to my camera system and a couple of other items on my network using port forwarding rules. When I set up hassio a year or so ago I forwarded 8123 to my RasPi and called it a day.

I recently switched to a Linux VM with Docker and installed hassio in Docker using the generic Linux install method, so that I have a little more horsepower behind my hassio to process video feeds, etc. from my cameras. I kept the port forwarding until the Google Assistant integration broke and started requiring SSL to work properly. I implemented the ssl options under the

http:

header in my config.yaml, used the Let’s Encrypt addon to obtain my certs, and all was good in my home again.
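
For anyone following along, the ssl options in question are just the certificate paths - roughly this (a sketch, assuming the default /ssl/ paths the Let’s Encrypt addon writes to):

http:
  # paths assume the Let's Encrypt addon's default /ssl/ output location
  ssl_certificate: /ssl/fullchain.pem
  ssl_key: /ssl/privkey.pem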

Now that the backstory is out of the way, I can move on to my current issue. I keep seeing people mention using NGINX as a reverse proxy for increased security for their hassio installs, but also to reduce the port forwarding required in one’s router for other services. I decided I liked the idea and started researching NGINX. I installed the NGINX SSL Proxy addon and input my domain in the config. I forwarded 443 to 443 on my hassio VM and it started working. Good for me; on to expanding my setup to include subfolders. So I changed my customize > “active” option to true, then set up a nginx_proxy_default.conf file in my /share folder. In that .conf file I decided to start with an easy forward that shouldn’t require too much: my router. I entered:

location /router/ {
  rewrite /router/(.*) /$1 break;
  proxy_pass http://192.168.1.1;
  proxy_http_version 1.1;
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";
}

I expected that when I enter

https://my.domain/router/

it would take me to my router login page, but no such luck. It instead takes me to a “502 Bad Gateway” page with “nginx/1.14.0” below the Bad Gateway text. The NGINX log says something about an invalid header, and I know Netgear routers currently have an issue with invalid null characters in the header and no longer work in Chrome; so, assuming that is the issue, I also tried similar settings for the Configurator addon and the IDE addon, but using http://127.0.0.1 for the proxy_pass since they are hosted on the same machine. Both of those take me to the same “502 Bad Gateway” page, but with a different error (the same error for all the addons, but different from the router).

[error] 21#21: *64 connect() failed (111: Connection refused) while connecting to upstream, client: ::ffff:xx.yy.zz.vv, server: my.domain, request: "GET /ide/ HTTP/2.0", upstream: "http://127.0.0.1:8321/", host: "my.domain"

^ Where xx.yy.zz.vv is the public internet IP of the computer I’m trying to access from, my.domain is the domain name of my home, and /ide/ is the subfolder name (location /ide/ in my .conf file) used in NGINX.

Also, and this is the last straw, I can no longer access anything on my home network that requires forwarding, such as my security cameras or my OctoPi setup. The NGINX server sucks up all incoming HTTPS traffic and prevents it from passing through.

Does your provider actually limit subdomains??? That would be incredibly unusual.

127.0.0.1 is usually the IP for localhost. If these are in Docker containers then they may not share the same localhost. Try using the IP of the actual host machine they’re running on. Unless I’m reading that nginx log wrong - I use Caddy, so I’m not entirely sure.

Thanks for the replies.

My provider limits me to 3 total subdomains, but that is probably because I am on their free tier.

I have tried using 127.0.0.1, localhost, the IP of my host (192.168.1.xxx), and the Docker IP of the container for the IDE addon. Using the Docker IP gets me a 404 Not Found page and no entry in the NGINX log; using 127.0.0.1, localhost, and my host IP all return the 502 Bad Gateway.

Same happened to me, and I stopped struggling.
If you get this working please let us know so that we can make it work on our system too.

I’m pretty much at the point of scrapping NGINX and just going back to port forwarding. I know it’s not as secure, but right now I have lost access to view my cameras remotely and remotely manage some other devices on my network. The only benefit NGINX is providing currently is to let me connect to my Home Assistant locally using 192.168.1.xx, as opposed to always having to use the domain, like I have to when setting up the SSL in the http: section of my config.

So I have made a very minor amount of progress. I installed PiHole (Home Assistant Community Add-on: Pi-hole) and set it up in my nginx_proxy_default.conf file as such:

location /pihole/ {
  rewrite /pihole/(.*) /$1 break;
  proxy_pass https://localhost:4865/;
  proxy_http_version 1.1;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Scheme $scheme;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";
}

and my pihole addon config as:

{
  "log_level": "info",
  "password": "password",
  "update_lists_on_start": false,
  "admin_port": 4865,
  "dns_port": 53,
  "ssl": true,
  "certfile": "fullchain.pem",
  "keyfile": "privkey.pem",
  "interface": "",
  "ipv6": true,
  "ipv4_address": "",
  "ipv6_address": "",
  "virtual_host": "my.domain.net",
  "hosts": []
}

When I go to my.domain.net/pihole/ it now redirects me to my pihole admin page. The biggest thing that stands out to me is the second-to-last line in the pihole config

"virtual_host": "my.domain.net"

where I enter my domain between the quotations. I have yet to get past the 502 Bad Gateway on any of my other servers/services.

Did you get any clue?

I messed around with the other settings but have yet to make it work. My router will never work because of the null headers, and Netgear has brushed it off as an end-of-life device and won’t publish an update to fix this small thing that is a major flaw.
I believe the other services need something like the virtual_host option. The couple of add-ons that reference it seem to state that it’s necessary when running in a Docker container. I don’t know if it’s a hidden option in other configs, or if their developers need to add it in.

I had the same issue getting the Configurator to proxy correctly; below is how I solved it in the nginx config file. If you are still getting “No auth header received” errors after reloading nginx, that is due to the browser not asking for new auth after the change. You can force it by browsing to http://user@domain/configurator/ and it should pop up a new basic auth window.

location /configurator {
  return 301 https://example.tld/configurator/;
}

location ~ /configurator/(?<path>.*) {
  proxy_pass http://hassio.local:3218/$path$is_args$args;
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-Host $host;
  proxy_set_header X-Forwarded-Server $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_http_version 1.1;
  proxy_pass_request_headers on;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";
}

Dear, above you have mentioned a path for location as well as a URL.
Please explain why both are needed, and what the path should be when I am running Hass.io on an RPi with Raspbian?

Hi all,

It took me a very long time to figure out how to make Grafana work in hassio as a subfolder of the main domain.

The solution was to pass environment variables to Grafana so that it knows it is in a subfolder:

  "env_vars": [
    {
      "name": "GF_SERVER_DOMAIN",
      "value": "your domain"
    },
    {
      "name": "GF_SERVER_ROOT_URL",
      "value": "%(protocol)s://%(domain)s:/grafana"
    }
  ],

All details can be found in these links:
Grafana configuration with sub path

An example for the hassio addon


I am getting this when I open xxxx.duckdns.org/grafana:

If you’re seeing this Grafana has failed to load its application files

  1. This could be caused by your reverse proxy settings.

  2. If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath

  3. If you have a local dev build make sure you build frontend using: npm run dev, npm run watch, or npm run build

  4. Sometimes restarting grafana-server can help

Hi Kirpat

In case you are still looking for a solution: there is a new option in the latest Grafana that works better.

Add these options:

  "env_vars": [
    {
      "name": "GF_SERVER_DOMAIN",
      "value": "xxxx.duckdns.org"
    },
    {
      "name": "GF_SERVER_ROOT_URL",
      "value": "%(protocol)s://%(domain)s/grafana/"
    },
    {
      "name": "GF_SERVER_SERVE_FROM_SUB_PATH",
      "value": "true"
    }
  ],

The configuration for nginx is then very simple; you can find it in the documentation here.
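
For reference, a minimal sketch of that location block (the upstream host and port 3000 are assumptions - point it at wherever your Grafana addon is reachable):

location /grafana/ {
  # Grafana handles the /grafana/ prefix itself once GF_SERVER_SERVE_FROM_SUB_PATH
  # is enabled, so the request can be passed straight through.
  proxy_pass http://GRAFANA_HOST:3000;
  proxy_set_header Host $host;
}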


Thanks, that really worked!

Hi,

Here Grafana does not work via reverse proxy from outside my LAN. I only get:

If you’re seeing this Grafana has failed to load its application files

  1. This could be caused by your reverse proxy settings.

  2. If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath

  3. If you have a local dev build make sure you build frontend using: yarn start, yarn start:hot, or yarn build

  4. Sometimes restarting grafana-server can help

… and nothing helps. Any hint for me?

Steffen