I am coming back to Home Assistant after a number of years and have just installed Home Assistant Core in a Docker container. I previously ran HA on a Raspberry Pi 2, and it is still running today, but I would like to upgrade and move to Docker on an Ubuntu PC.
I installed the Core image for Linux, but I can find no way to install SSH access or Node-RED, both of which are essential for me. I can see references to the Supervised installation, but I also see that this is no longer supported.
What are my options?
If so, then you have full access to the OS, so you can install SSH, Node-RED, or any other external software you want, just like you would on any other Linux distro.
Thanks for the reply. I used an HA Container, but I am not very familiar with containers. So I should be able to access the OS on the container and install Node-RED? I assume I can then integrate it with HA as I did previously.
I am curious as to why HA has moved in this direction. It seems very restrictive out of the box now.
Thank you. I had seen those links for the Supervised version, but the install script fails to complete and it looks like I will have to debug it. I also saw some commentary saying that the Supervised install will no longer be supported.
Is this the only option for those that want more flexibility for integrations?
The container only contains the code necessary to run HA. You will never modify the contents of the container, and even if you did, the changes would get wiped out on the next update of the HA container image.
You will simply go to the base OS (Linux in your case) and install SSH and Node-RED there.
But…
I would learn more about containers if that is the way you are wanting to go. I use Docker for almost everything on my system. It’s not too hard to learn.
And then I would install Node-RED in another container instead of directly on the OS (you would still install SSH directly on the OS, though).
Or if you really don't want to learn Docker, then either install HA Supervised (but you will still likely need to figure out Docker at some point even then) or use HA OS.
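For context, running Node-RED in its own container can be as simple as a few lines of docker-compose. This is just a hypothetical sketch, not a recommended setup: the timezone and host path are placeholders you would adjust for your own system.

```yaml
# Hypothetical snippet: Node-RED as its own container next to HA Container.
version: '3'
services:
  node-red:
    image: nodered/node-red          # official Node-RED image
    container_name: node-red
    environment:
      - TZ=America/New_York          # replace with your timezone
    ports:
      - "1880:1880"                  # Node-RED editor UI
    volumes:
      - /path/to/nodered/data:/data  # persists your flows across updates
    restart: unless-stopped
```

To connect it to HA, you would typically install the node-red-contrib-home-assistant-websocket palette inside Node-RED and point it at HA's URL with a long-lived access token.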
Either of those two options hides most (HA Supervised) if not all (HA OS) of the backend operating system from the user.
HA OS really limits your access to the OS, though. So if you plan on running other things outside of the HA ecosystem, go with Supervised. Or better yet, learn Docker and stay with HA Container like you have now.
I ended up installing HA OS in a VM on VirtualBox on Linux, which gave me the supervisor option. To be honest, without that option HA is like a chocolate fireguard.
I have MQTT and Node-RED integrated now, but I will learn about containers and Docker because I hear great things about them. I appreciate all of your replies, thank you!
I had to chuckle at "chocolate fireguard", and although I can understand the sentiment behind it, like @finity I run the container version and disagree as well. I started out with Home Assistant Container, and it's essential for linking everything I have running in other containers (i.e. zwavejs2mqtt, Zigbee2mqtt, and Node-RED) together. Instead of add-ons managed by a supervisor, you manage them yourself. Although more difficult to start out with, this gives you a lot more control over the system and lets you install whatever else you want on the same machine.

The supervised Home Assistant manages everything for you, but I've seen a lot of posts lately where people have run into trouble installing things or mapping a USB device (i.e. No UDEV Rules?) or running a simple Linux command (Cloud component and Smartthings - #33 by mwav3). One wouldn't have these issues running the container version. The supervised install is locked down, and when things go wrong it becomes a nightmare of trying to find files, figuring out workarounds to install commands that aren't included, or working around the supervisor to get things running.

Also, Docker is so lightweight that it is much more efficient than a VM sucking up resources all the time, and I've seen a lot of posts complaining about high CPU usage, random restarts, and overall stability issues: Home Assistant just stops working! Here's an article explaining the benefits of containers vs VMs.
With everything I have running, CPU usage barely hits 12%. Just to see if I was "missing" anything, I tried out a supervised install through VirtualBox on a Windows machine. It takes forever to start up, and CPU usage was so high the fan was on full blast and could barely keep up. After all that, I don't see any add-ons I would use that I can't install myself in their own container. I'd much rather control the OS and other containers myself while Home Assistant runs happily on its own in Docker.
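To illustrate the "manage them yourself" approach described above: the glue between containers like zwavejs2mqtt, Zigbee2mqtt, and Node-RED is usually an MQTT broker running as just one more service. A minimal, hypothetical compose entry (the host paths are placeholders) might look like:

```yaml
# Hypothetical snippet: a Mosquitto MQTT broker for the other containers.
services:
  mosquitto:
    image: eclipse-mosquitto    # official Mosquitto image
    container_name: mosquitto
    ports:
      - "1883:1883"             # standard MQTT port
    volumes:
      - /path/to/mosquitto/config:/mosquitto/config
      - /path/to/mosquitto/data:/mosquitto/data
    restart: unless-stopped
```

HA's MQTT integration and the other containers would then all point at the host's IP on port 1883.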
Thank you and @finity for the reply. I guess I need to more fully understand how the container would work. When I originally installed Home Assistant a few years ago, I simply installed it onto a Lubuntu or Raspbian OS and then separately installed Node-RED, Mosquitto, and every other service I needed. I would then just SSH into the Pi and do all of the integrating I needed. I guess the chocolate fireguard sentiment is more a reflection of not being able to integrate with other services like Node-RED. I assumed the containerised version of the core prevented that sort of flexibility, but this is probably more to do with my lack of appreciation of containers. So, I will take up @finity's suggestion and research their application, and then look at how I could integrate the core like you and @finity have done.
They changed how you can run Home Assistant for a "supported" install, and you really can't do it this way anymore. You have to install on Debian, but I would not recommend this route; way too much can go wrong. The documentation warns you against it as well: Linux - Home Assistant
I’m not necessarily suggesting that you go to a HA Container install. If the current supervised install works for you then that’s good for you.
I originally ran a venv install “back in the day” and finally decided to make the jump to Docker a couple of years ago. I’ve not looked back since.
Docker makes things much easier as far as maintenance goes. And it gives me the flexibility to run
other things on the OS if necessary (I run a Kodi media server right now on the same machine).
And I don't need to deal with the finickiness of the Supervisor either (unhealthy, unsupported, failed updates with no warning that an update was even going to happen, etc.).
All kidding aside though, it’s nice there are multiple install options for Home Assistant which makes it a very customizable program. At the end of the day, you have to set it up whatever way works best for you though.
Thanks again, guys. Reading that guide fills in a few blanks on how to integrate if I did go the containerized route. I appreciate all of your feedback, very helpful.
New to the community (first post), but not new to Docker or HA. Sorry to revive a dead topic, but I thought it might be useful to the OP or others viewing. I basically expanded on this excellent guide and added Zigbee support.
With the following docker-compose.yml file, I was able to get NR and HA to play nice with one another. The important parts here are:
Set the network mode to host for both. HA needs to use the native VM's (or server's) network in order to see other devices on the network, unless you want to do some really painful networking.
Reference persistent config volumes for both. In my case, they were both directories on my NAS that the Docker host VM is able to access.
# Works as of 2022-08-20
version: '3'
services:
  homeassistant:
    container_name: homeassistant
    devices:
      # This is a reference to the Conbee II Zigbee hub that I mapped to
      # this VM from Proxmox. Omit it if you don't have one; you don't
      # need DeConZ anymore. Your device ID may be different.
      - /dev/ttyACM0
    environment:
      - TZ=America/New_York # Replace with your timezone in the same format.
    # This is the GitHub package that HA's Getting Started page recommends
    # using, but the one on Docker Hub seems to be updated in parallel.
    image: "ghcr.io/home-assistant/home-assistant:stable"
    network_mode: host # CRITICAL!
    privileged: true
    restart: unless-stopped
    volumes:
      # Replace the first part of this with the actual path
      # to your persistent config.
      - /path/to/your/persistent/ha/config:/config
      - /etc/localtime:/etc/localtime:ro
  node-red:
    container_name: "node-red"
    # Only include this 'build' line if you're building your own NR image
    # (e.g. with add-on flows) as per Jordan Rounds' guide. Mine is in a
    # subdirectory called "node_red", but you can place the Dockerfile in
    # the same dir if you want; just replace './node_red' with '.'.
    build: ./node_red
    environment:
      - TZ=America/New_York # Replace with your timezone in the same format.
    network_mode: host # CRITICAL!
    image: "nodered/node-red" # Optional if you're building.
    restart: unless-stopped
    # Note: with network_mode: host, published ports are ignored, so no
    # 'ports' section is needed; Node-RED listens on 1880 on the host.
    volumes:
      # Replace the first part of this with the actual path
      # to your persistent Node-RED data.
      - /path/to/your/persistent/nodered/data:/data
Hope this helps someone.
JD
EDIT: Forgot to mention updating.
Updating both is a cinch, and as long as your config is stored outside of Docker, it shouldn't break anything (regular backups are still recommended).
This is the script I use to update my images:
cd /path/to/your/compose-directory # the directory containing your docker-compose.yml; change as appropriate
sudo docker-compose pull # updates the images / builds as necessary
sudo docker-compose up -d --remove-orphans # deploys the updates in the background, removes orphaned containers
sudo docker image prune -f # deletes old images to save space