Thanks, I share your views, with the added caveat that HA controls my house.
In that setup, the critical component is my wife, who doesn’t share my enthusiasm for fixing things, especially during downtime, and who wants it to work all the time.
I therefore have to put some structure around what I do to keep HA downtime to a minimum so that I don’t p!ss her off.
Docker will give me the ability to build test containers, allowing me to reduce that downtime. In addition, I don’t need to worry about environment restrictions or dependencies, as those are taken care of by the container.
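For example, something like this (just a sketch — the paths and the test port are placeholders for my setup) would let me trial an upgrade against a copy of my config without touching the live instance:

```bash
# Copy the live config so experiments can't corrupt it (paths are examples)
cp -r /home/user/homeassistant /home/user/ha-test

# Run a disposable test instance, mapped to port 8124 instead of 8123
docker run -d --name ha-test \
  -v /home/user/ha-test:/config \
  -p 8124:8123 \
  homeassistant/home-assistant

# When done, throw it away
docker rm -f ha-test
```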
Don’t get me wrong though, I’ll still be tinkering with everything else I’m running on that machine that is not as “critical”, so I won’t stop learning…
Oh, and about the open source bit: that might actually push me to finally build the component I need to read the data off my energy monitor, and make it available to others, instead of running it off a personal Python script…
My comments were not directed at you or anyone in particular. I truly don’t care why you or anyone chooses to use Docker or not. Just think about why and don’t just blindly follow the crowd.
So I got my NUC yesterday and was able to get it up and running pretty quickly.
I made it a little more complicated than necessary. I first installed Debian, and then converted that to the Proxmox virtualization environment using these instructions. So the NUC is running Proxmox as the host OS.
It works great. Addons installed from within HassIO show up as additional Docker containers, so you can run those, or you can run a standard container. I just copied my HA and Node-Red config files over and was up and running within a few minutes. Using Portainer to manage everything, I can see the HassIO addons alongside my other containers.
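If anyone wants the Portainer bit, it’s just a container itself — something like this (a sketch; the data volume name is arbitrary):

```bash
# Portainer needs the Docker socket mounted so it can manage the other containers
docker run -d --name portainer \
  --restart unless-stopped \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer
```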
I am hoping that keeping Hass/Let’s Encrypt/Node-Red/MQTT in the Hass ecosystem will simplify setting up SSL and all that. We shall see because that is this evening’s project.
I started my move to Docker with HASS, and found it so straightforward that I’ve worked my way through pretty much everything I’ve got running (UniFi, SABnzbd, Grafana, etc.).
OK, each container needs a bit of thought about where to store configs and which ports to open, but once it’s working it’s fantastic.
Also nothing is stopping you from creating your own images using other maintained ones as their base.
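For example (just a sketch — the extra package here is a stand-in for whatever your own setup needs baked in):

```bash
# Build a custom image on top of the official one
cat > Dockerfile <<'EOF'
FROM homeassistant/home-assistant:latest
RUN pip install paho-mqtt
EOF

docker build -t my-home-assistant .
```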
I might look at Docker Swarm next. Does that mean I can run more than one HASS container, so that if one craps out I wouldn’t notice? My HASS install just hangs at random (it did before I moved to Docker too) and I just don’t know why, so I’ve got a script to restart the container if I can’t connect, cronned to run every five minutes. Would be cool if Swarm made things more reliable.
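The script is roughly this shape (a sketch — the container name and URL are placeholders for whatever yours are):

```bash
#!/bin/bash
# Cron this every 5 minutes:  */5 * * * * /usr/local/bin/ha-watchdog.sh
# Restart the container if the frontend stops answering.
if ! curl -sf -m 10 http://localhost:8123/ > /dev/null; then
    docker restart home-assistant
fi
```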
So, if I have set up SSL using Let’s Encrypt at the Ubuntu system level for non-Docker HA, can I move HA to Docker with the --network="host" option without any special config changes and still have it work?
Thanks for the info.
You’ll need to bind the directories storing your certificates on the host into the container so hass can see them.
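Something like this (a sketch, assuming certs in the default Let’s Encrypt location; the config path and domain are placeholders):

```bash
docker run -d --name home-assistant \
  --network=host \
  -v /home/user/homeassistant:/config \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  homeassistant/home-assistant

# Then point configuration.yaml at the paths as seen *inside* the container, e.g.:
#   http:
#     ssl_certificate: /etc/letsencrypt/live/example.duckdns.org/fullchain.pem
#     ssl_key: /etc/letsencrypt/live/example.duckdns.org/privkey.pem
```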
I did it differently. I put nginx in another container and run that as a reverse proxy in front of hass. That way hass doesn’t care about ssl and you can access it on your local network via IP without ssl. Then my ssl is only used for access from outside the house.
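In rough outline it looks like this (a sketch only — the domain, LAN IP and paths are made up, and I’m just using the stock nginx image):

```bash
# Minimal proxy config for hass, written out for the container to pick up
cat > /opt/nginx/hass.conf <<'EOF'
server {
    listen 443 ssl;
    server_name hass.example.com;

    ssl_certificate     /etc/letsencrypt/live/hass.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hass.example.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.10:8123;   # hass without SSL on the LAN
        # WebSocket support, needed for the HA frontend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
EOF

docker run -d --name nginx-proxy \
  -p 443:443 \
  -v /opt/nginx/hass.conf:/etc/nginx/conf.d/hass.conf:ro \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  nginx
```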
Any more details on how to do this? Which Docker container to use? What else? I don’t like accessing HA over SSL inside my network, since that makes a DNS call, resolves to my public IP, and routes back in. Pi-hole shows a lot of requests my HA is making. Furthermore, I have a few other components that use the HA REST API to update its sensors, and they all have to access it via SSL as of now.
Appreciate it.
Thanks
Can you share an example of how this could be done? I’m using docker compose, and I have a command that I want to run for LIRC, and I’m having major issues getting keypairs working.
I took a simple approach to this: I created a directory on the Docker server containing shell scripts pulled from source control, then I bind mount that directory into the HA container on startup.
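Roughly like this (paths are examples; pulling from source control happens separately on the host):

```bash
# Mount the host scripts directory into the container's config tree, read-only
docker run -d --name home-assistant \
  --network=host \
  -v /home/user/homeassistant:/config \
  -v /opt/ha-scripts:/config/scripts:ro \
  homeassistant/home-assistant

# configuration.yaml can then call them via shell_command, e.g.:
#   shell_command:
#     my_script: /config/scripts/my_script.sh
```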
I ended up running all my scripts outside of Docker and communicating with HA via MQTT.
One of the main reasons was that several Python libraries I needed are not in the HA container, and I did not want to mess about with it.
In addition, for some reason some of my Python scripts run on 2.7 and don’t work on Python 3, and back then I did not have the time to update the code to make it Python 3 compatible…
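To illustrate the pattern (broker address, topic and payload here are placeholders), an external script only needs to publish to a topic that an MQTT sensor in HA subscribes to:

```bash
# Publish a reading from outside the container; HA picks it up over MQTT
mosquitto_pub -h 192.168.1.10 -t home/energy/power -m '{"power_w": 1432}'

# Matching HA side (configuration.yaml):
#   sensor:
#     - platform: mqtt
#       state_topic: "home/energy/power"
#       unit_of_measurement: "W"
#       value_template: "{{ value_json.power_w }}"
```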