So, you freely admit you don’t know, but continue to make declarative, absolute statements.
You’re right, my 30 years of doing networks pales in comparison to your expertise.
Another one for the list, I guess.
Yes, please, I’ve done the same!
Does not work here. I think I have a Docker installation (a VM inside a Debian host).
I use the SSH add-on, where I am logged in as root and my home directory is /root/homeassistan. I put both certs under that directory and set the paths in the config:
http:
  ssl_certificate: /config/fullchain.pem
  ssl_key: /config/privkey.pem
When I try to use HTTPS I get the response “This site can’t provide a secure connection”, and this is in the HA log:
[core-ssh homeassistant]$ tail -f home-assistant.log
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/aiohttp/web_protocol.py", line 350, in data_received
    messages, upgraded, tail = self._request_parser.feed_data(data)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "aiohttp/_http_parser.pyx", line 557, in aiohttp._http_parser.HttpParser.feed_data
aiohttp.http_exceptions.BadStatusLine: 400, message:
  Invalid method encountered:
    b'\x16\x03\x01\x06\xd6\x01'
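For what it’s worth, b'\x16\x03\x01…' is the start of a TLS ClientHello, so the browser is speaking HTTPS to a port on which HA is still answering plain HTTP (which usually means the ssl_certificate/ssl_key settings were not actually picked up). A quick way to check what the port really serves (the IP is a placeholder for your HA host):

# if this returns a normal HTTP response, the port is still plain HTTP
curl -v http://192.168.1.50:8123/

# if TLS were active, this would print a certificate chain instead of failing
openssl s_client -connect 192.168.1.50:8123 </dev/null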
@kgolding Kevin thanks so much for doing this.
Quick question
When generating the required certificates, do you enter 192.x.x.x:8123 or just 192.x.x.x?
Thanks!
Neither. Certificates are issued to FQDNs, not IP addresses.
That is not true.
You can do a CSR with an FQDN and/or IPs (IPv4 and IPv6).
Ref: Using an IP Address in an SSL Certificate - GeoCerts
But yeah, indeed, Let’s Encrypt won’t let you add 192.168.1.1…
Addendum: ahh, it’s you again. The one with the impressive knowledge. I’m out.
By the way, my self-signed certs are still working like a charm: Android, iOS, HA Assist, …
It’s a kind of Magic <3
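For what it’s worth, a self-signed cert with both a hostname and a LAN IP in the SAN can be generated with OpenSSL 1.1.1 or newer roughly like this (the name and IP are placeholders):

openssl req -x509 -newkey rsa:4096 -sha256 -days 825 -nodes \
  -keyout ha.key -out ha.crt \
  -subj "/CN=ha.myfancydomain.local" \
  -addext "subjectAltName=DNS:ha.myfancydomain.local,IP:192.168.178.10"

The only price of going self-signed is that each client (phone, PC, …) has to be told to trust that cert.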
EDIT: To add something productive to this video link: under normal circumstances, depending on your SOHO router, this won’t work, as it is up to the router itself whether it allows you to reach your NATted IP (external → internal) via the external IP.
If you stumble over this, the magic keywords are “Reflection and Hairpin NAT”.
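A quick way to see whether you are hitting this (using a hypothetical homeassistant.example.com as the FQDN):

# what a public resolver returns for the name
dig +short homeassistant.example.com @9.9.9.9

# what your own router / local DNS returns
dig +short homeassistant.example.com

# if both return the public IP and the router does not do hairpin NAT,
# a connection from inside the LAN will typically fail or time out
curl -v --connect-timeout 5 https://homeassistant.example.com:8123/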
No, it is not.
The OP asked:
Yes, you can.
I’m using AdGuard and an nginx proxy for that. The bottom line is this:
Buy a domain (it costs around 10 - 20 USD per year), use nginx to add a cert to your domain and subdomains, and use AdGuard to forward traffic to it.
And that is all there is: local HTTPS on everything that can be served over HTTPS.
Edit:
Why buy a domain? Well, I’m using HA in Docker. Every add-on, e.g. zigbee2mqtt, has a different port on the same IP. With free domains you can get up to 4 subdomains. If you buy a domain then you get, I don’t know, way more than you can use.
Do you even know what Reflection NAT is?
And when to use it?
Let me give you an example.
Imagine you have the FQDN “homeassistant.u-r-not-the-sharpest-tool.in-the.box” (which definitely fits here); the underlying IP address it points to is 123.123.123.123, but in fact the NAT IP is 192.168.178.254.
When you now try to connect to this from the internal subnet 192.168.178.0/24, some routers forbid it, because it would give you the internal IP back and not the external one! And imagine what happens then with the fancy domain you pay for and the certificate?
The technique to circumvent this would be reflection and hairpin NAT.
Examples:
So, yes, it is!
Sometimes it is better not to answer and blame others for what is obviously your own lack of knowledge, though.
And again. A public domain and certificates from a public CA are not needed to make this work.
Blame others? OK. I don’t have a public IP, as it is not updated, nor do I access HA on the local network over a public IP. I just said that it is possible to do it. I don’t think this is NAT hairpinning, but OK, I might be wrong.
If you do not have a public IP address, how are you communicating with us? Must be smoke signals, then.
Furthermore, the OP asked how to connect “locally” using HTTPS. Obviously, with a domain registered to you and a public certificate, you have to connect to your external IP address, because if you connect to the NAT IP address (aka the LAN IP), then you will get an error from the browser.
Anyway…
Well, if AdGuard can do NAT hairpinning, then NAT hairpinning it is. I somehow doubt that, but I might be wrong.
I don’t know where you saw that owning a domain requires a public IP address resolving to that domain. I bought a domain, I do not have a public IP, and the domain is not resolvable, as it doesn’t have a valid public IP. Furthermore, it doesn’t have any valid IP at all, as I don’t update the provider with my IP.
I know what he asked. I asked that question myself a few years ago. And yes, you can access your local HA instance using HTTPS locally.
Are those kind of remarks really necessary?
They don’t add anything to the conversation.
Indeed, as the trustworthiness of the statements is not just questionable; they are proven wrong.
Examples:
And again, to share something productive:
This is a working example with nginx as a reverse proxy.
Configured with self-signed certs; working with Android, iOS, … and of course with Assist as well.
cat /etc/nginx/sites-enabled/ha
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    server_name ha.myfancydomain.local;

    listen [::]:80 default_server ipv6only=off;
    return 301 https://$host$request_uri;
}

server {
    server_name ha.myfancydomain.local;

    ssl_certificate     /etc/nginx/ssl/2024/ha.crt;
    ssl_certificate_key /etc/nginx/ssl/2024/ha.key;
    ssl_dhparam         /etc/nginx/ssl/dhparams.pem;

    listen [::]:443 ssl default_server ipv6only=off;

    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";

    ssl_protocols TLSv1.3 TLSv1.2;
    ssl_ciphers 'TLS-CHACHA20-POLY1305-SHA256:TLS-AES-256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    proxy_buffering off;

    location / {
        proxy_pass http://127.0.0.1:8123;
        proxy_set_header Host $host;
        proxy_redirect http:// https://;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
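A quick sanity check after editing a config like this (assuming nginx runs directly on the host under systemd):

sudo nginx -t                              # syntax and certificate-path check
sudo systemctl reload nginx                # apply without dropping connections
curl -vk https://ha.myfancydomain.local/   # -k because the cert is self-signed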
I will post my config when I come home, as my nginx is not accessible over the net.
OK, let’s do this really quick, as I have work to do.
nginx config file
# ------------------------------------------------------------
# myserver.com
# ------------------------------------------------------------
server {
    set $forward_scheme http;
    set $server         "ha_ip";
    set $port           8123;

    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name my_domain.com;

    # Let's Encrypt SSL
    include conf.d/include/letsencrypt-acme-challenge.conf;
    include conf.d/include/ssl-ciphers.conf;
    ssl_certificate /etc/letsencrypt/live/npm-5/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/npm-5/privkey.pem;

    # Block Exploits
    include conf.d/include/block-exploits.conf;

    # Force SSL
    include conf.d/include/force-ssl.conf;

    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_http_version 1.1;

    access_log /data/logs/proxy-host-1_access.log proxy;
    error_log /data/logs/proxy-host-1_error.log warn;

    location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_http_version 1.1;

        # Proxy!
        include conf.d/include/proxy.conf;
    }

    # Custom
    include /data/nginx/custom/server_proxy[.]conf;
}
It’s an nginx Docker container with an idiot-proof GUI for configuration. I use Dynu DNS to get my certs. I only need to do it manually the first time I get the cert; after that it is renewed automatically. I never update my IP address with it.
adguard config
rewrites:
  - domain: '*.mysubdomains.com'
    answer: ha_ip_everything_is_in_docker
  - domain: mydomain.com
    answer: ha_ip_everything_is_in_docker
and last but not least
configuration.yaml
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.18.0.0/16
And there you go: got my cert via Let’s Encrypt, told AdGuard to use my IP as the resolver for that domain, and that is it. It works for subdomains also. And there is your SSL on the local network.
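If you want to verify it from a client on the LAN, something like this works (using the placeholders from the configs above, plus your AdGuard address):

# ask AdGuard directly for the rewritten record
nslookup my_domain.com <adguard_ip>

# check which certificate nginx presents for that name
openssl s_client -connect <ha_ip>:443 -servername my_domain.com </dev/null | openssl x509 -noout -subject -dates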
It’s simple, it works and everyone can do it by themselves.
Have to go.
Please stop the personal attacks and concentrate on the technical challenges and questions or this topic will be locked.
As Daniel said above, there are many ways to achieve this… And a lot depends on your installation method and network setup…
Myself I am running Home Assistant Container, so no add-ons, and I have my own registered domain name, let’s say EXAMPLE.com. I also do not currently expose HA or anything else to the Internet.
I wanted a wildcard certificate (*.EXAMPLE.com) so that I could use it for more than just HA. With Let’s Encrypt that means using the DNS challenge method to generate the certificate. Details here; Home-Automation/Lets Encrypt at main · Fraddles/Home-Automation · GitHub
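For reference, with certbot the manual DNS-01 variant looks roughly like this (certbot asks you to create a TXT record; the DNS-plugin variants can automate that step):

certbot certonly --manual --preferred-challenges dns \
  -d 'EXAMPLE.com' -d '*.EXAMPLE.com'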
Once I had my certificate I updated my HA config to use it. I applied HTTPS to the HA webserver directly, without using any reverse proxy. Some info and a working docker-compose.yaml here; Home-Automation/Home-Assistant at main · Fraddles/Home-Automation · GitHub
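Pointing HA directly at the cert looks roughly like this in configuration.yaml (the paths are placeholders for wherever the files end up inside the container):

http:
  ssl_certificate: /config/ssl/fullchain.pem
  ssl_key: /config/ssl/privkey.pem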
The certificate will only be accepted by your browser/device if the name you are using is one of the names (or IPs) embedded in the certificate… In my case I have a wildcard cert that will match any* URL ending in EXAMPLE.com. For example home-assistant.EXAMPLE.com, jellyfin.EXAMPLE.com, etc…
I can access HA using any of the following;
https://IP-ADDRESS:8123
https://home-assistant.local:8123
https://home-assistant.INTERNALDOMAIN.home:8123
https://home-assistant.EXAMPLE.com:8123
The first three of these will allow me to access Home Assistant, but will give me a certificate error, as none of those names are in the certificate. I need to use the last one for the certificate error to go away.
My local DNS provider (my router) does not, by default, know anything about EXAMPLE.com, so queries for that domain are forwarded to my upstream (Internet) DNS… The upstream DNS is also not helpful here as I have not configured records with my internal IPs (or any at all actually).
How to resolve this depends on your network setup… This is where Adguard, PiHole, etc comes in. Myself I have a Ubiquiti USG and I can SSH into it and create a static DNS entry for any name I want with the following command;
set system static-host-mapping host-name home-assistant.EXAMPLE.com inet 192.168.0.xx
This results in any local DNS request matching that host-name being resolved to the specified internal IP. I could also add a record to my external DNS (with my external IP), open port 8123 on my router and access it remotely using the same URL.
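From any LAN client you can then confirm the mapping took effect, e.g.:

nslookup home-assistant.EXAMPLE.com
dig +short home-assistant.EXAMPLE.com   # should print 192.168.0.xx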
Hopefully some of the above is useful to you…
Cheers.
My setup is very similar to Chris’s. I can also access my domain remotely; I tried that using Cloudflare and it worked.
This is what I wrote in the deleted post, without insulting anyone.
You need two add-ons: AdGuard and nginx.
First you create a free domain somewhere; I’m using Dynu DNS.
With nginx you can create a proxy host and get a cert for that domain. How to get the SSL cert is explained on their site.
In nginx just enter the domain name you created and use the scheme http.
For the IP address use the Home Assistant IP, and for the port the default 8123.
I can get SSL certs for the domain in nginx because this container has built-in support for Let’s Encrypt, but I don’t know whether this will work in other types of installations, as it depends on the containers they are using.
When you have your SSL cert there is only one thing left to do.
Go into AdGuard and use a DNS rewrite. For the domain name use your domain, and for the IP address the IP of Home Assistant (the same one that you put in nginx).
And that is it. All certs that you created will be renewed automatically, as nginx takes care of that. Once you set this up you can forget about it.
This is basically the same thing as Chris wrote, just with two not-so-different approaches. Mine is router-independent, as the router doesn’t have a clue what is going on.
And Tom, I appreciate you and your work. I don’t have any complaints about you.
But some guys on this forum are obviously, at least from my perspective, doing everything to get a reaction from me, to say it as politely as possible. I know I could react better.
Mine does…
That solves the naming part.
And in order to get rid of :8123 I set up Apache as a reverse proxy (as I run Apache anyway).
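For anyone wanting to do the same, a minimal sketch of such an Apache vhost might look like this (hostname, cert paths and backend address are placeholders; it assumes mod_ssl, mod_proxy, mod_proxy_http, mod_proxy_wstunnel and mod_rewrite are enabled):

<VirtualHost *:443>
    ServerName ha.example.lan

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/ha.crt
    SSLCertificateKeyFile /etc/ssl/private/ha.key

    ProxyPreserveHost On
    ProxyRequests Off

    # hand websocket upgrades (e.g. /api/websocket) over as ws://
    RewriteEngine On
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule ^/(.*) ws://127.0.0.1:8123/$1 [P,L]

    ProxyPass        / http://127.0.0.1:8123/
    ProxyPassReverse / http://127.0.0.1:8123/
</VirtualHost>

As with the nginx examples above, HA then needs use_x_forwarded_for and the proxy’s address in trusted_proxies.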