Cannot connect with local address

I give up; I undoubtedly can’t see the forest for the trees. I turned off the port forwarding and left the configuration.yaml settings as:

server_port: 443
ssl_certificate: /ssl/fullchain.pem
ssl_key: /ssl/privkey.pem
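
For context, these options belong under the `http:` block of configuration.yaml. A minimal sketch (assuming the DuckDNS/Let’s Encrypt add-on writes its certificates to /ssl):

```yaml
# Minimal http block for SSL - a sketch, assuming certs from the DuckDNS add-on
http:
  ssl_certificate: /ssl/fullchain.pem
  ssl_key: /ssl/privkey.pem
  # server_port defaults to 8123; uncomment only if you really want HA itself on 443
  # server_port: 443
```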

Chrome/Safari give an ERR_SSL_PROTOCOL_ERROR

If I remove server_port: 443 from the YAML file, the result is the same. I’ve tried two port-forwarding rules to make sure I’m understanding the verbiage of the host port vs. the forwarded port.

Neither has worked. I went so far as to turn on both…

I can get to HASS, but not without the :8123 port suffix. I’ve turned off AdGuard, as I don’t think I understand the “rewrite” specifics. I’ve turned on NGINX, with no joy. I know this should be simple, and I’m struggling.

Hold the phone!!! I restarted the DuckDNS add-on, removed the 8123 port range pointed at 443, and left the 443 port range pointed at 8123 turned on…

And now I can access HASS remotely via LTE on my iPhone, so at least that is working, but my local network gives me ERR_CONNECTION_REFUSED.

So some progress. It’s something local that I have to figure out, but at least I can hit it remotely.

Crap! Disregard; the browser on my iPhone was auto-filling the :8123 at the end of the DNS name.

Your router settings are incomplete. You have forwarded port 443 to 8123, but no IP is defined. In another option of your router you should find the IP to forward to and the application (the port configuration you have already done). It should be something like this:

FROM: 443
TO: 8123 192.168.1.x

Hope it helps.

Thanks for the patience, Domaray… Actually, it’s been harder than that. With your help and after watching lots of videos on port forwarding for the AT&T 5268ac, I’ve got it working, but I’m not sure I understand the why and the implications. This should have been a lot simpler.

Here’s what I have implemented for any other AT&T customers struggling…

  1. On the AT&T 5268ac modem, I deleted the 443 range mapped to 8123, and instead, I just open port 443 and leave the “map to port” blank.
  2. I have my base_url in my configuration.yaml back to…
  3. I have the add-on NGINX running with my duckdns domain name.

And it works inside & remotely. Now, why it has to be that damn complicated, I don’t know. I also don’t know what security implications this presents for me. I will go back and change my DNS name later, but any thoughts on this? Part of my learning Home Assistant is to actually LEARN it, and I’m not sure what I learned here. I always planned to use Nabu Casa to a) help provide some $$ to the development team if I find the app useful, and b) take advantage of the Alexa functionality.

This “simple” step has me wondering if this entire process is more trouble than it’s worth. I’m working and will march on, but damn depressed at spending 4 days to get here.

So far, this works

Just an opinion here: there should be an option to bypass the host and peer SSL verifications when the internal link is used. For most everyday Home Assistant users, there’s no real need to perform server verification when you are on your internal network. The fact that you are already connected to your internal network (i.e., you know your Wi-Fi password) rules out any impersonation possibility, unless of course someone sneaked a second server into your home! Since the mobile apps already have a provision for distinguishing the internal case from the external one, it should be possible to override certificate verification for the internal case.
The proposed solution (using the external URL for both cases) requires a DNS lookup. This adds an external dependency that somewhat defies Home Assistant’s rationale of keeping everything local. E.g., what happens if you reset your router and your internet is down? Your mobile app will not be able to connect to your HASS web server even when you are at home, correct?
If the SSL checks are disabled for the internal link, you can put the private IP of your server there instead of the domain name. This will be routable even when the internet is down.
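
For what it’s worth, Home Assistant does let you configure distinct internal and external URLs (in the UI or in YAML). A sketch, with placeholder addresses, that at least keeps the LAN path free of the DNS dependency described above:

```yaml
# configuration.yaml - separate URLs for LAN vs remote access (addresses are placeholders)
homeassistant:
  internal_url: "http://192.168.1.50:8123"     # plain HTTP on the LAN; works with no internet
  external_url: "https://example.duckdns.org"  # HTTPS via DuckDNS when away from home
```

Note this does not fully address the complaint: if the internal URL uses HTTPS with a certificate that doesn’t match, the app will still refuse it.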

AdGuard add-on? NGINX add-on? I run HA with Docker on my Synology and I can’t see add-ons.

I used AdGuard add-on.

Unfortunately, I removed it because it got blocked twice and I couldn’t even access HA from the local network; the GUI was effectively disabled and I had to restore a snapshot.

So now I use the Android app for remote access and the web browser for local access.

Hello, thanks for the thread. I’m having the same issue.

I’m now just using the app for remote access via the DuckDNS URL, but I can’t seem to access locally in my browser. May I ask what address you found to work? Thanks.

Hello from me too.
I have a similar problem: I cannot access my HA from the HA companion app locally, but I can access it when on mobile data only.
The weird thing is that I can access it locally from the browser on my phone and from the Ariela app.
What can I do to get it right?

Oh, drats. I added this to my YAML, and now I have three ports forwarded (8123, 443, and 80) and can’t get in at all.

Hi, hope some one can help please,

On my Android Fire 8 with the official app, I could view a live camera stream using a Webpage card with a URL of (it’s from MotionEye).

I removed the Home Assistant app to go through onboarding again as I wanted to change the tablet name.

I set the URL up as and logged in fine, but now I just have a blank white box with no camera stream. Checking App Configuration, the URL has changed to my Nabu Casa HTTPS address rather than my local HTTP address. The “Internal Connection URL” is grayed out, so I cannot set it.

On the same tablet using Silk, my internal address shows the Webpage card camera fine, but the Nabu Casa URL gives the same white box. (I’m guessing some HTTP/HTTPS issue?)

But I only need to connect locally on this tablet, any suggestions please?

Please ignore this; I finally got it to work by editing the address. After about four goes it took my changes. Cache issue, maybe???

Most home routers don’t support NAT loopback (hairpin NAT), so I think the app should work with the local address by default.

It does work with a local address, just not with an invalid cert.

OK, so I have to pick one? Either I can make it work locally or remotely, but not both? That isn’t really a solution for me. Is there a workaround to make the cert valid for both addresses?

Here are some tips if you haven’t already checked them out:

Honestly, IMO the best solution is to get a router that supports NAT loopback and call it a day. Then you only need the external URL, and the router handles the rest. A good router can fix many other issues as well.

Some users also use a reverse proxy to handle the certificate.


There’s definitely something not happy here.
Here’s my scenario; I have tried nearly everything and I’m running out of ideas.

HA is running on Debian/Docker, with the IP statically assigned from my pfSense DHCP server.
Before I set up DuckDNS, it was fine to connect with the app both externally (well, kinda: via VPN but over 4G) and internally, either via IP or hostname.
I set up DuckDNS.
pfSense has a NAT rule to forward 443 to 8123.
That works fine (and is like that because Amazon’s developer console doesn’t allow HTTPS account linking on anything other than 443).
The moment that was up and running, internally (via Wi-Fi) I could no longer reach the IP or hostname with the app.
Externally it now works fine.

So in my DNS resolver I set up a host override (which, as I understand it, is pretty much split DNS).
So now, from a laptop, when I ping “” it resolves to the internal IP of my HA instance,
and from a web page, if I go to I get to my HA.
In the app, my external address is (which my NAT forwards from 443 to 8123).
My internal URL is set to and it goes nowhere.
I’ve tried the IP / local hostname / ; none work.

In my configuration.yaml I set up:


ssl_certificate: /ssl/fullchain.pem
ssl_key: /ssl/privkey.pem

so i am running out of ideas…

EDIT: fixed… very much a blonde moment. I have selective VPN routing set up on my pfSense, and my phone was one of the devices VPN’ing out, and as such has a kill switch to prevent DNS leaks (which in this instance would have helped me figure it out). I removed the device from the VPN and I’m working again…

Hi all,

I gave up on using AdGuard because all devices connected to my network depended on it, and if Home Assistant went down, I lost access on all of them.

I have another solution that worked for me:

1) Install DuckDNS + SSL as usual.

I added these lines to my configuration.yaml:

  ssl_certificate: /ssl/fullchain.pem
  ssl_key: /ssl/privkey.pem

Router port forwarding:

  • IP: IP of Home Assistant
  • External port: 8123
  • Internal port: 8123

2) Check the access:

Internal: https://myip:8123

Note that both are httpS.

Once everything works, go to step 3:

3) Comment out the lines added before:

#  ssl_certificate: /ssl/fullchain.pem
#  ssl_key: /ssl/privkey.pem

4) Restart Home Assistant and access via http://yourip:8123 (no httpS).

5) Install NGINX addon:

On the addon configuration, set as follows:


Notice that I didn’t add “https://”.
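
The add-on configuration screenshot didn’t survive in this thread; as a sketch, with the NGINX Home Assistant SSL proxy add-on it would look along these lines (the domain is a placeholder, and option names may vary by add-on version, so check the add-on’s documentation tab):

```yaml
# NGINX SSL proxy add-on options - a sketch; domain is a placeholder
domain: example.duckdns.org  # plain domain, no "https://" prefix
certfile: fullchain.pem      # relative to /ssl
keyfile: privkey.pem
```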

6) Start the NGINX addon.

7) Router configuration:

  • IP: IP of Home Assistant
  • External port: 443
  • Internal port: 443

8) Check the access:

Internal: http://yourip:8123

9) Set the URLs above in your Android app.

If local access does not work, activate the GPS/Location option. The app needs Location permission to read the Wi-Fi network name and decide whether you are at home (it doesn’t seem to make sense, but Android requires it).

10) General configuration

I recommend setting the internal and external URLs in the Home Assistant configuration (Configuration -> General).
I had some problems when playing media, and they were solved this way.

Following these steps, I got access with the Android app both locally and externally. I hope this solves your problems as well.


It works for me !! Thank you very much.

The issues here revolve around the certificate needing Subject Alternative Names (SANs) that match both the external address and the internal IP address and DNS name. With that in place, port forwarding should be all that is needed.

While “most” browsers will allow the insecure connections with warnings, mobile devices are much less forgiving.

This business of having to install this and install that, which seems to be the Home Assistant way, is way beyond the expertise or patience of most users.