Cannot access front-end for Docker container installation via internet IP through port 8123

Hello,

I am running HA as a Docker container on Debian. My router has port forwarding enabled, and I have added the iptables firewall entry recommended in the official troubleshooting guide (Troubleshooting installation problems - Home Assistant). Has anybody had similar issues? Am I configuring something wrong? Details are below:

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:8123

Chain FORWARD (policy DROP)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (2 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             172.18.0.2           tcp dpt:9001
ACCEPT     tcp  --  anywhere             172.18.0.3           tcp dpt:9000
ACCEPT     tcp  --  anywhere             172.18.0.4           tcp dpt:8765
ACCEPT     tcp  --  anywhere             172.18.0.2           tcp dpt:1883
ACCEPT     tcp  --  anywhere             172.18.0.6           tcp dpt:1880
ACCEPT     tcp  --  anywhere             172.18.0.4           tcp dpt:tproxy

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere       

Can you access the frontend OK on your LAN? Did you set up the Home Assistant container to use host networking mode?

For security reasons I generally would not recommend port forwarding 8123 for Home Assistant, and absolutely not for Portainer, since Portainer has access to your Docker socket. Portainer is like the “crown jewels” of your system, and anyone who gets access to it has full control of your system.

I run Home Assistant behind a reverse proxy following this guide.

Portainer is accessible on my system only through a WireGuard VPN. I wrote a guide to setting up WireGuard as a Docker container here.


@mwav3 Thanks for the security tip; I have stopped forwarding these ports. I had a feeling this might be a risk, though all my docker commands require sudo. The front end works fine on the internal LAN. I shall give the guides you linked a go. Is it better to run NGINX natively or via Docker?

I like running nginx through the Swag container in Docker. The Swag container combines nginx with DuckDNS, fail2ban, and Let’s Encrypt for SSL certificates.

The setup guide I linked has more info. The toughest part is getting the nginx config files correct.
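
In case it helps later, the compose service for Swag typically looks something like this (a sketch based on the linuxserver/swag README; the domain, token, timezone, and host path are placeholders to replace with your own):

services:
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN                       # used by the bundled fail2ban
    environment:
      - PUID=1000                       # uid/gid of your docker user, from `id`
      - PGID=1000
      - TZ=Etc/UTC
      - URL=yourdomain.duckdns.org
      - SUBDOMAINS=wildcard             # covers homeassistant.*, red.*, etc.
      - VALIDATION=duckdns
      - DUCKDNSTOKEN=your-duckdns-token
    volumes:
      - /path/to/swag/config:/config    # the nginx config files live under here
    ports:
      - 443:443
      - 80:80                           # note: 9000 is deliberately NOT mapped
    restart: unless-stopped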

@mwav3 I am getting somewhere with the swag container, in the sense that when I type in my duckdns URL it shows me the router page via HTTPS. This is a welcome change, of course, but I cannot access Home Assistant or the other containers. I think there is some configuring still to do, but I cannot say the linked instructions for swag are 100% clear.

  1. Under http: in configuration.yaml he states api_password: !secret http_password
    What is this password for? Do I need to specify one in the secrets.yaml file?

  2. The command “id dockeruser” gives me “id: ‘dockeruser’: no such user”

  3. The author says to set up a subdomain, but then updated the post in 2019 saying subdomains are not necessary. Does that mean no subdomains are needed in the nginx config files?

  4. He also says to replace the default file nginx/default.conf. In the current swag version there is no default.conf file in this folder; there is one in nginx/site-confs/default.conf. Is this what I am looking for? The author mentions this file path in the same post, and it is quite confusing. Are these different files or not?

  5. I am asked to edit “fastcgi_pass hostip:9000;” and “proxy_pass http://hostip:XXXX;”, but there are no instructions about what exactly should be done. My port 9000 is Portainer. Does this need to be changed to 8123?

  6. The creator of the swag container has a homeassistant config example for the proxy in nginx/proxy-confs. Should this be used anywhere?

I’ll scrutinise the container readme more closely. I wish this setup were as easy as in Home Assistant OS. It looks like a network sysadmin degree is needed to set all this stuff up.

Thanks for suggesting the swag container, btw. I did not know about it.

Actually, HTTPS is not working; it is just that I added a security exception in the browser. Even more work to do.

He’s telling you to remove any HTTPS/SSL config from Home Assistant, since you will be using swag instead, and SSL configured in Home Assistant will create a conflict. Then he’s providing an example of what to remove. So, if you had an api_password, SSL cert, etc. in your Home Assistant configuration.yaml, you need to remove those.
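
For illustration, after stripping SSL the http: section of configuration.yaml ends up looking roughly like this (a sketch; the commented lines are examples of what to remove, and the trusted_proxies value must match the docker network swag actually sits on in your setup):

# configuration.yaml
http:
  # ssl_certificate: /ssl/fullchain.pem   # remove - swag terminates SSL now
  # ssl_key: /ssl/privkey.pem             # remove
  # api_password: !secret http_password   # remove - long-deprecated auth
  use_x_forwarded_for: true               # required when HA sits behind a proxy
  trusted_proxies:
    - 172.18.0.0/16                       # the docker network swag runs on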

You need the PGID and PUID of the user that runs your docker commands. So, if my Ubuntu user is mwav3, I would run id mwav3 and it will give me the UID and GID. It’s usually 1000 unless you created the user later or have multiple users. You need this for the Swag compose so the config files can be edited by your user; otherwise they belong to root or another user.
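
To make the mapping concrete (illustrative output; your numbers may differ):

# `id mwav3` on the host prints something like:
#   uid=1000(mwav3) gid=1000(mwav3) groups=1000(mwav3),999(docker)
# which feeds into the swag service in docker-compose.yaml as:
environment:
  - PUID=1000
  - PGID=1000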

I use subdomains. I have other things besides Home Assistant behind the proxy, like Node-RED, that I want to use. Using subdomains allows me to set up multiple proxies with the same SSL cert.

Yes

This is for accessing websites built with PHP, which you’re probably not going to use anyway. NGINX doesn’t support PHP natively; PHP comes with the swag container but requires additional configuration to work. The swag image is almost like a mini server running within Docker, and swag uses port 9000 only within the Swag container to process PHP, so you keep it as port 9000. Since you are not mapping port 9000 out of the container (you only map HTTP port 80 and HTTPS port 443), there is no conflict with Portainer or any other service running on port 9000 on the host OS.

I believe his config for PHP is wrong here anyway, as you don’t change the IP to the address of your machine; you keep it 127.0.0.1 (a loopback address to the swag container itself), so the line should be

fastcgi_pass 127.0.0.1:9000;

I never had luck getting that to work. The site configs are meant primarily to reference other commonly used docker images running on the same bridged docker network. This works great if you have multiple containers running in bridge mode (Bridge network driver | Docker Docs) on the same docker network. It fails, however, when you have a container running in host networking mode (Host network driver | Docker Docs), like Home Assistant. The reason is that on a docker bridge network you can reference other containers in the docker DNS by their container name, but you cannot reference a container by name if it is running in host networking mode, because containers in host networking mode, like Home Assistant, are separated from your docker network. Home Assistant also needs host networking mode for auto discovery of integrations to work properly, so you can’t install it on the bridge.

There is a post here from someone who put together a guide that uses that subdomain file, but the last poster I was helping there has still not been able to get it working properly. You can read more about that config here if it is a route you want to try: Remote access with Docker. I get confused by that setup because the nginx config ends up spread out across different files that reference each other. I find it easier to just put the whole nginx config for everything in the default.conf file, the way Juan’s guide mentioned.
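
A stripped-down compose sketch of the two situations (hypothetical service names; only the networking lines matter here):

services:
  swag:
    image: lscr.io/linuxserver/swag
    networks:
      - proxynet            # on the bridge, swag can reach "nodered" by name
  nodered:
    image: nodered/node-red
    networks:
      - proxynet            # same bridge, so proxy_pass http://nodered:1880 works
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host      # no docker DNS entry; proxy_pass must use the LAN IP

networks:
  proxynet:
    driver: bridge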

When I was starting out, this reverse proxy was one of the hardest things for me to set up. The container install is definitely more complicated than the HAOS install with addons, but it provides the most flexibility to install and use the machine for other, non-Home Assistant software. Once you get more comfortable with Docker and get this going, it will be worth it in the end. Below is my nginx config from the default.conf file, which I’m sure isn’t perfect, but everything has worked well for me with it so far. It is configured to use zwavejsui, zigbee2mqtt, nodered, and home assistant behind the proxy. You can delete or comment out the sections you don’t need. Be sure to replace “yourdomain” with your duckdns domain name, and “192.putyouriphere” with the LAN IP of the machine running docker/homeassistant. This config should put your homeassistant instance up at https://homeassistant.yourdomain.duckdns.org. If you are still running into trouble, try posting your docker compose and any log references that come up for swag when it starts up.

## Version 2020/05/23 - Changelog: https://github.com/linuxserver/docker-swag/commits/master/root/defaults/default

# redirect all traffic to https
server {
	listen 80 default_server;
	listen [::]:80 default_server;
	server_name yourdomain.duckdns.org;
	return 301 https://$host$request_uri;
}

# main server block
server {
	listen 443 ssl http2 default_server;
	listen [::]:443 ssl http2 default_server;

	root /config/www;
	index index.html index.htm index.php;

	server_name yourdomain.duckdns.org;
	
	# enable subfolder method reverse proxy confs
	include /config/nginx/proxy-confs/*.subfolder.conf;

	# all ssl related config moved to ssl.conf
	include /config/nginx/ssl.conf;

	# enable for ldap auth
	#include /config/nginx/ldap.conf;

	# enable for Authelia
	#include /config/nginx/authelia-server.conf;

	# enable for geo blocking
	# See /config/nginx/geoip2.conf for more information.
	#if ($allowed_country = no) {
	#return 444;
	#}

	client_max_body_size 0;

	location / {
		try_files $uri $uri/ /index.html /index.php?$args =404;
	}

	location ~ \.php$ {
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		fastcgi_pass 127.0.0.1:9000;
		fastcgi_index index.php;
		include /etc/nginx/fastcgi_params;
	}

# sample reverse proxy config for password protected couchpotato running at IP 192.168.1.50 port 5050 with base url "cp"
# notice this is within the same server block as the base
# don't forget to generate the .htpasswd file as described on docker hub
#	location ^~ /cp {
#		auth_basic "Restricted";
#		auth_basic_user_file /config/nginx/.htpasswd;
#		include /config/nginx/proxy.conf;
#		proxy_pass http://192.168.1.50:5050/cp;
#	}

}

# sample reverse proxy config without url base, but as a subdomain "cp", ip and port same as above
# notice this is a new server block, you need a new server block for each subdomain
#server {
#	listen 443 ssl http2;
#	listen [::]:443 ssl http2;
#
#	root /config/www;
#	index index.html index.htm index.php;
#
#	server_name cp.*;
#
#	include /config/nginx/ssl.conf;
#
#	client_max_body_size 0;
#
#	location / {
#		auth_basic "Restricted";
#		auth_basic_user_file /config/nginx/.htpasswd;
#		include /config/nginx/proxy.conf;
#		proxy_pass http://192.168.1.50:5050;
#	}
#}

# sample reverse proxy config for "heimdall" via subdomain, with ldap authentication
# ldap-auth container has to be running and the /config/nginx/ldap.conf file should be filled with ldap info
# notice this is a new server block, you need a new server block for each subdomain
#server {
#	listen 443 ssl http2;
#	listen [::]:443 ssl http2;
#
#	root /config/www;
#	index index.html index.htm index.php;
#
#	server_name heimdall.*;
#
#	include /config/nginx/ssl.conf;
#
#	include /config/nginx/ldap.conf;
#
#	client_max_body_size 0;
#
#	location / {
#		# the next two lines will enable ldap auth along with the included ldap.conf in the server block
#		auth_request /auth;
#		error_page 401 =200 /ldaplogin;
#
#		include /config/nginx/proxy.conf;
#		resolver 127.0.0.11 valid=30s;
#		set $upstream_app heimdall;
#		set $upstream_port 443;
#		set $upstream_proto https;
#		proxy_pass $upstream_proto://$upstream_app:$upstream_port;
#	}
#}

# sample reverse proxy config for "heimdall" via subdomain, with Authelia
# Authelia container has to be running in the same user defined bridge network, with container name "authelia", and with 'path: "authelia"' set in its configuration.yml
# notice this is a new server block, you need a new server block for each subdomain
#server {
#	listen 443 ssl http2;
#	listen [::]:443 ssl http2;
#
#	root /config/www;
#	index index.html index.htm index.php;
#
#	server_name heimdall.*;
#
#	include /config/nginx/ssl.conf;
#
#	include /config/nginx/authelia-server.conf;
#
#	client_max_body_size 0;
#
#	location / {
#		# the next line will enable Authelia along with the included authelia-server.conf in the server block
#		include /config/nginx/authelia-location.conf;
#
#		include /config/nginx/proxy.conf;
#		resolver 127.0.0.11 valid=30s;
#		set $upstream_app heimdall;
#		set $upstream_port 443;
#		set $upstream_proto https;
#		proxy_pass $upstream_proto://$upstream_app:$upstream_port;
#	}
#}

################################################################################
### SUBDOMAIN 1a Node Red Admin#################################################
server {
	listen 443 ssl;

	root /config/www;
	index index.html index.htm index.php;

	server_name red.yourdomain.duckdns.org;
	
	include /config/nginx/ssl.conf;

	client_max_body_size 0;

	location / {
#		auth_basic "Restricted";
#		auth_basic_user_file /config/nginx/.htpasswd;
		include /config/nginx/proxy.conf;
		proxy_pass http://192.putyouriphere:1880;
	}
}

################################################################################
### SUBDOMAIN 1b Node Red Endpoints##############################################
server {
	listen 443 ssl;

	root /config/www;
	index index.html index.htm index.php;

	server_name redend.yourdomain.duckdns.org;
	
	include /config/nginx/ssl.conf;

	client_max_body_size 0;

	location / {
#		auth_basic "Restricted";
#		auth_basic_user_file /config/nginx/.htpasswd;
		include /config/nginx/proxy.conf;
		proxy_pass http://192.putyouriphere:1880/endpoint/;
	}
}



################################################################################
### SUBDOMAIN 2 Zwave JS########################################################
server {
	listen 443 ssl;

	root /config/www;
	index index.html index.htm index.php;

	server_name zwave.yourdomain.duckdns.org;
	
	include /config/nginx/ssl.conf;

	client_max_body_size 0;

	location / {
		auth_basic "Restricted";
		auth_basic_user_file /config/nginx/.htpasswd;
		include /config/nginx/proxy.conf;
		proxy_pass https://192.putyouriphere:8091;
	}
}

################################################################################
### SUBDOMAIN 3 ZigbeeMQTT########################################################
server {
	listen 443 ssl;

	root /config/www;
	index index.html index.htm index.php;

	server_name zigbee.yourdomain.duckdns.org;
	
	include /config/nginx/ssl.conf;

	client_max_body_size 0;

	location / {
		auth_basic "Restricted";
		auth_basic_user_file /config/nginx/.htpasswd;
		include /config/nginx/proxy.conf;
		proxy_pass http://192.putyouriphere:8086;
	}
}

### HOMEASSISTANT ##############################################################
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name homeassistant.*;
    
    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app homeassistant;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass http://192.putyouriphere:8123;

    }

    location /api/websocket {
        resolver 127.0.0.11 valid=30s;
        set $upstream_app homeassistant;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass http://192.putyouriphere:8123;

        proxy_set_header Host $host;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
# enable subdomain method reverse proxy confs
include /config/nginx/proxy-confs/*.subdomain.conf;
# enable proxy cache for auth
proxy_cache_path cache/ keys_zone=auth_cache:10m;

Thank you for your help @mwav3. I still did not get it to work, but found that WireGuard was an alternative means of access. After taking a long break and working out the setup nuances of WireGuard, it is working perfectly for my needs. Anyone struggling with reverse proxies like I did should check this option out.

I had the same problem: I could not connect remotely to the HA server over a VPN connection when away from home. There is no problem when I am home. HA was set up with docker compose. WireGuard, Twingate, OpenVPN: none of them worked. After many tests over a long time, I finally got HA working over OpenVPN. “network_mode: host” was initially in docker-compose.yaml. I replaced it with “ports: - 8123:8123” and the problem was solved! “network_mode: host” may involve extra settings that prevent the VPN from working.

Edit:
Using “ports” instead of “network_mode” broke the MQTT integration. Not a good solution.

Edit2:
I was able to find the root cause: port 8123 was restricted to a subnet. Removing the restriction resolved the issue. Remote access to HA through the VPN is working now.
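
For anyone comparing the two approaches, the difference in docker-compose.yaml is just this (a sketch; as found above, the port-mapping variant got the VPN working but broke the MQTT integration, so host mode plus removing the subnet restriction was the right fix):

services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host        # recommended: HA shares the host's network stack
    # The alternative tried above, which exposes only the web UI:
    # ports:
    #   - 8123:8123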

Hey @mwav3, does this work? I tried your default.conf and changed it to my domain etc., but it didn’t work for me.

Yes, I’m still using it. The one commented above is for zigbee2mqtt external access; I have NGINX configured for “extra” auth in addition to the auth already required for zigbee2mqtt. To get that to work, you need to set up and use a password file.

If you don’t want the extra auth, you can comment out the auth_basic lines with # or remove them.

Are you trying to get this to work for zigbee2mqtt or for Home Assistant itself?

TL;DR: Note that you must allow the websocket to pass in order to run HA remotely.

Yes, I agree, and my config for Home Assistant posted above does enable websockets, but I don’t have websockets enabled for zigbee2mqtt. My config posted above uses subdomains to access a bunch of different programs. For Home Assistant specifically, it is this section:

### HOMEASSISTANT ##############################################################
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name homeassistant.*;
    
    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app homeassistant;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass http://192.putyouriphere:8123;

    }

    location /api/websocket {
        resolver 127.0.0.11 valid=30s;
        set $upstream_app homeassistant;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass http://192.putyouriphere:8123;

        proxy_set_header Host $host;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}