Docker and SSL Configuration

For some reason I have the same issue. I have a script that renews the cert and copies the files into the HA folder.

Would you mind posting a copy of the script that you use?

LetsEncrypt recommends that the files aren’t moved but I guess copying them into the HA folder could be OK.

Thanks for your help.

Sorry not in front of my pc now. Will try later
It’s basically two Python scripts. One checks how many days are left on the cert; it runs from a crontab once a day and calls the second one when there are fewer than 30 days left. The second one renews the certs, copies the files, and restarts the Docker container.
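For reference, the daily cron entry for the first script might look something like this (the path and time here are my guesses, not the poster’s actual crontab):

```shell
# run the SSL status check once a day at 03:00
# (script path is hypothetical; adjust to wherever the check script lives)
0 3 * * * /usr/bin/python /home/cctv/Scripts/SSLStatus.py
```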

SSL Status python script:

#!/usr/bin/python
# -*- coding: utf-8 -*-

import datetime
import subprocess

MQTT_Host = "192.168.0.24"
MQTT_Port = "1883"
MQTT_User = "username"
MQTT_Password = "password"
CERT = "/etc/letsencrypt/live/your_domain_here/cert.pem"

try:
    # ssl-cert-check -b prints a brief report; the last field is the days remaining
    days_left = int(subprocess.check_output(["ssl-cert-check", "-b", "-c", CERT]).split()[-1])
    if days_left < 29:
        # renew, then re-check so the published value reflects the new cert
        subprocess.check_output(["/home/cctv/Scripts/RenewSSLCert.sh"])
        days_left = int(subprocess.check_output(["ssl-cert-check", "-b", "-c", CERT]).split()[-1])
    # publish the remaining days as a retained MQTT message
    subprocess.Popen(["mosquitto_pub", "-h", MQTT_Host, "-p", MQTT_Port,
                      "-u", MQTT_User, "-P", MQTT_Password,
                      "-t", "Sensors/SSLDays", "-r", "-m", str(days_left)])
except Exception:
    print("Unable to get SSL status (%s)" % datetime.datetime.now().strftime("%d/%m/%y %H:%M"))

renew SSL bash script:

#!/bin/bash
sudo ~/certbot/certbot-auto renew --quiet --no-self-upgrade --standalone --preferred-challenges tls-sni-01 --tls-sni-01-port 8123 --pre-hook "docker stop hass" --post-hook "cp /etc/letsencrypt/live/your_domain_here/fullchain.pem ~/docker_ha/shell_scripts/fullchain.pem && cp /etc/letsencrypt/live/your_domain_here/privkey.pem ~/docker_ha/shell_scripts/privkey.pem && docker restart hass"
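Note for later readers: Let’s Encrypt deprecated the tls-sni-01 challenge in 2019, so the same pre-hook/post-hook pattern would now use http-01 instead. A rough sketch, assuming port 80 is reachable from the Internet while the container (named hass, as above) is stopped:

```shell
# standalone renewal over http-01; certbot binds port 80 itself while HA is down
sudo certbot renew --quiet --standalone --preferred-challenges http \
  --pre-hook "docker stop hass" \
  --post-hook "cp /etc/letsencrypt/live/your_domain_here/fullchain.pem ~/docker_ha/shell_scripts/ && \
               cp /etc/letsencrypt/live/your_domain_here/privkey.pem ~/docker_ha/shell_scripts/ && \
               docker restart hass"
```

The hooks only run when certbot actually attempts a renewal, so this is safe to call from cron.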

Thanks! I’ll give them a try.

hi @Texangeek - did you get a resolution on this? I have the same problem. I’m running Raspbian (Debian) in my case, on a Pi 3. For now, I need to comment out this part of my config file in order to get the HA webserver to even start up again on port 8123 (in http:// mode only):

[snip]
http:
  api_password: XXXXXXX
  ssl_certificate: '/KSSL/fullchain.pem'
  ssl_key: '/KSSL/privkey.pem'

I’ve used the -v switch when spinning up my container to create the /KSSL/ mount point, and it’s pointed to my full /etc/letsencrypt/live/XXXXX/ directory.

But I’m getting the same error as you:

2019-03-10 19:06:15 ERROR (MainThread) [homeassistant.config] Invalid config for [http]: not a file for dictionary value @ data['http']['ssl_certificate']. Got '/KSSL/fullchain.pem'
not a file for dictionary value @ data['http']['ssl_key']. Got '/KSSL/privkey.pem'. (See /config/configuration.yaml, line 46). Please check the docs at HTTP - Home Assistant
2019-03-10 19:06:15 ERROR (MainThread) [homeassistant.setup] Setup failed for http: Invalid config.

Shout back if you got a resolution, as I’m kinda stuck!

Not sure if this is the answer you’re after, but ever since I moved my SSL stuff to the Let’s Encrypt docker and used the built-in nginx with it, I no longer have SSL-related errors in HA.

thanks @lolouk44 - hmmm, interesting. What’s the architecture of what you propose? Do you have a link to a howto or something?

this is what I followed:


I had the same issue. I tried the Hass.io NGINX SSL proxy and it just worked.

The only addition is that I added the domain name to my Pi-hole’s /etc/hosts so I knew an nslookup would return the right IP.
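For anyone doing the same, the Pi-hole override is just an ordinary hosts entry; something like this (the domain and IP here are made up):

```shell
# /etc/hosts on the Pi-hole box: resolve the HA hostname to its LAN address
192.168.0.50    ha.example.com
```

After that, an nslookup against the Pi-hole should return the LAN address rather than whatever public DNS says.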

I’m going to have to work out a means of updating the certificates automatically.

That’s the beauty of the letsencrypt docker container: it takes care of cert renewal automatically, by itself.

BUT (AFAIK) it does not do the DNS-01 challenge for a wildcard certificate, so it’s useless in my circumstances. The beauty of a wildcard certificate is:

  1. I do not need an externally facing website for an internal domain name. The internal sub domain names are completely invisible externally.
  2. I get just one certificate for all the internal machines.

I won’t pretend to know everything there is to know about certificates, I’m still relatively new to this so bear it in mind when I ask:
If it’s all internal and not exposed to the Internet why do you need https encryption?

Yes, it is a bit over the top, you’d think. Firstly, more and more browsers make non-HTTPS browsing difficult. Secondly, Troy Hunt thinks it is a good idea, and he knows more about this than I do :smile:.

I have discovered that I can apt-get install python3-certbot-dns-cloudflare, which is easy to do. I think it will work on individual machines, and using a DNS-01 challenge does not require the FQDN to be previously defined. You can point the certificates to the right folder so Hass.io will just pick them up (needs testing, but it’s a fantastic solution).
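For anyone following along, the DNS-01 flow with the Cloudflare plugin looks roughly like this; the credentials file path, domain, and token are placeholders, not tested config:

```shell
# ~/cloudflare.ini (chmod 600) holds the Cloudflare credentials, e.g.:
#   dns_cloudflare_api_token = <scoped token with DNS edit rights>

# request a wildcard cert via DNS-01; no inbound port needs to be open
sudo certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials ~/cloudflare.ini \
  -d "example.com" -d "*.example.com"
```

Because the challenge is answered by creating a TXT record at Cloudflare, the machine requesting the cert never has to be reachable from the Internet.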

went down the rabbit hole on this stuff this weekend!

Wanted to learn Kubernetes, so I’ve spun up a cluster with the end goal being a Home Assistant container (amongst others), orchestrated by Kubernetes. After that I want to spin up Envoy as the ‘sidecar proxy’ fronting Home Assistant, with Istio orchestrating the proxy (why? see https://kubedex.com/istio-vs-linkerd-vs-linkerd2-vs-consul/ … these are going to be big in my industry, so I wanted to learn them at home). Envoy can act as an SSL reverse proxy, so I’ll try that for the HA setup.

Good in theory…!
(I’m currently stuck on how to present the /config HA directory to Kube and have asked for help here - Kubernetes Helm Chart )…

I’ll update here when I get it all working.

For anyone interested, I think I hit a roadblock here that’s probably a showstopper for me running HA under Kube. The problem is that access to the /dev/ttyXXX USB Z-Wave stick doesn’t seem to be possible without a ‘device plugin’, and I’d be surprised if anyone could be bothered writing one for this. I suspect, at least for now, I’m part of a very small list of people wanting to run HA under Kube.

The files in the “live” directory are symbolic links. You should mount the directory where the actual files are into your docker container; otherwise your container cannot access those files, because the link targets are not mounted.
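To make the symlink point concrete, here is a tiny reproduction using a throwaway /tmp tree rather than a real /etc/letsencrypt: the file in live/ is only a link, and it resolves to a path outside live/, which is exactly what a container that mounts only live/ cannot see.

```shell
# mimic Let's Encrypt's layout: live/ holds symlinks into archive/
mkdir -p /tmp/le-demo/archive /tmp/le-demo/live
echo "dummy cert" > /tmp/le-demo/archive/fullchain1.pem
ln -sf ../archive/fullchain1.pem /tmp/le-demo/live/fullchain.pem

# the link resolves outside live/, so that whole tree must be mounted
readlink -f /tmp/le-demo/live/fullchain.pem
# -> /tmp/le-demo/archive/fullchain1.pem
```

In practice that means mounting the whole tree (e.g. -v /etc/letsencrypt:/etc/letsencrypt), or copying the resolved files, rather than mounting live/your_domain alone.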

I just got my Docker container working in Unraid with SSL.
The thing to figure out is where the SSL directory is.

In my case I put my SSL directory in the root of my Home-Assistant folder:

/media/appdata/home-assistant/ssl

In my configuration.yaml this was then:

ssl_certificate: /config/ssl/fullchain.pem
ssl_key: /config/ssl/privkey.pem

This got me completely stumped with my “supervised” installation so I thought I’d add a comment to help the next person.

It would seem that the ssl_certificate and ssl_key paths, despite the use of a leading ‘/’, are always relative.

I had to put the keys into /usr/share/hassio/ssl on the file system, but they are defined in /usr/share/hassio/homeassistant/configuration.yaml like this:

http:
  ssl_certificate: /ssl/fullchain.pem
  ssl_key: /ssl/privkey.pem

I guess it’s something to do with the docker configuration.

Symbolic links are fine.

The reason is that /usr/share/hassio is the ‘root’ in the container, and you have /config, /backup, /share, /ssl, /addons, and /media under that, hence /ssl/fullchain.pem etc. for the SSL keys. It’s not the Debian root; it’s the container root you are referencing.