Planning migration of a venv Core install into Docker (Ubuntu 18.04) - Questions

For the past three years I’ve been running HA core in a venv on Ubuntu Server on an Intel NUC without any issues. It’s run as a service under a non-root service account. I have versioned flat-file backups kept in two separate locations.

With the introduction of ZWaveJS, which I understand I’ll need to run in Docker or natively, I’m planning to move my existing install across to Docker containers once I’ve migrated my relatively small zwave setup across.

The plan currently is to:

  1. Install docker
  2. Install portainer as a docker container (for oversight)
  3. Install ZWaveJSMQTT as a docker container with MQTT disabled and migrate zwave
  4. Install HA Core as a docker container and migrate HA environment.
    My intention is to use docker compose files to specify the containers for easy backups.

My setup isn’t overly complex, but I have the following elements that I believe will require attention to complete the move:

  1. command line sensors
  2. Aeotec Zwave USB controller - will require pass-through from ZwaveJSMQTT container
  3. USB GSM modem (SMS integration) - will require pass-through from HA container

Questions

  1. Is there anyone out there who can give the benefit of their experience doing something similar?
  2. Can I keep using the service account to run the containers (it has appropriate dialup permissions for usb access etc.)
  3. Will I be able to run the command line sensors from within the docker containers?
  4. Is there anything I’m missing / overlooking with the above?

TIA.

You don’t need to run it in docker, see the advanced install instructions here.

I had read something about a native install and a quick skim of those instructions confirms that there are more advanced options that don’t require Docker - thank you.

However I’m not sure that I want to add an additional layer of complication to my home setup at this point. When I first read about zwavejs in Docker it seemed like an opportunity to force myself to simplify my update / maintenance process by migrating to Docker, using zwavejs as a toe in the water.
As I understand it, running with a docker setup should simplify any future hardware swaps (assuming reliable backups of course) - which I can see being a benefit in the future.

Docker is an additional layer of complexity. Personally I’m considering switching back to a virtual environment due to this additional docker layer. I’m very familiar with docker and I’ve been running the Home Assistant docker stack for a few years now, but it just feels like an unneeded layer.

I might need to rethink then - I had a bad time with docker on Windows for work (prototyping something that we abandoned thank goodness). That’s the limit of my docker experience to date.

Depends. It takes one command to install Docker and another command to run HA Container in Docker. Compare this to the steps required to create a cozy Python venv …

I’ve never heard good things about docker on Windows xD

It works well with Windows 10 and the WSL2 sub-system (as long as you don’t need bind-mounted volumes). It’s still not a good thing to use Windows for a 24/7 system …

But - we’re off-topic.

It’s maybe one or two commands more, and for someone familiar with python and venv like OP seems to be, it is probably something he can type in his dreams.

I was not talking about the complexity of installing HA and docker, I was talking about the complexity that docker adds. The best examples are the additional network layer, access to the host, folder permissions etc.

It’s sounding rather like I should stop dithering, pull a spare Pi3b out of storage and install a test setup of HA and ZWaveJS in docker - I think I have a spare USB zwave controller somewhere.

I tend to learn best by doing so this way I can at least get a sense of what’s involved with managing docker etc. without the pain of fiddling with my live system.


I did that a couple of years ago so I’ll be glad to help.

And I would never go back to the venv install unless I just had no other choice. Docker makes maintenance, updates and testing SO much easier.

Your current plan to switch looks reasonable. The only thing I recommend is to not use docker compose. There is no need to use it and it could complicate updates later. Just use a docker run command for each container instead. That way you can update each individual container as you have time, without needing to update every container all at once.

You can still specify volumes to containers either way.

That will take a bit of finagling to get those to work. See here for the procedure I used to make them work.

Doesn’t require a pass-through to the HA container since zwavejs2mqtt is handling the zwave network all by itself.

That is likely true but it’s not hard.

I assume so. I kept my original user as the controlling account for docker stuff. I need to run things using sudo but that’s OK with me and I never bothered to change it.

Here is my HA Docker run command for reference. It passes my Nortek zigbee stick and Aeotec zwave USB stick (which I set to permanent paths using a udev rule) into the container (because I haven’t updated to zwavejs yet), so that gives you an example of how to do it with your GSM:

sudo docker run -d --name="home-assistant-prod" \
    --restart=unless-stopped \
    -v /home/finity/docker/hass-config:/config \
    -v /etc/localtime:/etc/localtime:ro \
    -v /home/finity/docker/sshkey/.ssh:/root/.ssh \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --device /dev/zigbee:/dev/zigbee \
    --device /dev/ttyUSB-ZStick-5G:/dev/ttyUSB-ZStick-5G \
    --net=host --cap-add SYS_PTRACE \
    homeassistant/home-assistant
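For anyone wanting to replicate the permanent-path trick mentioned above, a udev rule looks roughly like this. The vendor/product IDs and symlink name below are illustrative placeholders, not necessarily the right values for your stick — check yours first:

```shell
# find your stick's IDs (assuming it currently shows up as /dev/ttyACM0):
#   udevadm info -a -n /dev/ttyACM0 | grep -E 'idVendor|idProduct'
#
# then create a rule file, e.g. /etc/udev/rules.d/99-usb-serial.rules, containing
# a line like (IDs are placeholders):
#   SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="ttyUSB-ZStick-5G"
#
# finally reload the rules and re-plug the stick:
sudo udevadm control --reload-rules
sudo udevadm trigger
```

After that the stick is always reachable at /dev/ttyUSB-ZStick-5G regardless of enumeration order, which is what makes the `--device` mapping stable across reboots.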

If you have any other questions feel free to ask.


I use docker-compose and can update individual containers just fine.

docker-compose pull container_xy
docker-compose up -d container_xy

I personally think it’s way easier to maintain a single docker-compose file than 20 individual docker run commands, but to each their own.
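For contrast, updating a single container without compose is a short pull/stop/rm/re-run cycle — something like this (container and image names taken from the HA example earlier in the thread):

```shell
# update one container managed by plain `docker run`
docker pull homeassistant/home-assistant:stable        # fetch the new image
docker stop home-assistant-prod                        # stop the running container
docker rm home-assistant-prod                          # remove it (config lives in the bind-mounted volume)
# ...then re-run your original `docker run` command to recreate it on the new image
```

Either way the persistent state survives, because it lives in the bind-mounted volumes rather than in the container itself.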


I’ll undoubtedly have some - this will probably take me some time to plan out and do in an elegant fashion, particularly after this morning’s HA update debacle where my Frankenstein’s monster z-wave doorbell kept chiming. My wife was unimpressed after she’d answered the door twice to find no-one there :smile:

I’ve got some time on Saturday morning so think I’ll spin up a clean OS on the sacrificial Pi and start trying to get my head around the docker side of things.
Once I’m happy I can manage docker properly then I can progress.

I may have misunderstood, but my understanding was that zwavejsmqtt was the zwavejs server to go for at present because of the configuration and control capabilities? My plan is to connect via websocket rather than using MQTT but I need to ensure:

  1. Device support for Aeotec TriSensors and Door / Window 7 sensors (which I believe is imminent / done but haven’t checked recently)
  2. Ability to set configuration parameters (Operation Mode on the D/W 7s as I’m using their binary contacts)

I’m planning the zwave change on the assumption that my migration will fail and I’ll need to set up from scratch - probably won’t be the case, but I’m a “glass totally empty and smashed on the floor” kinda guy.

well since I don’t use docker compose I never knew you could do that. :laughing:

But either way you still have to run a command in a terminal.

I just find it easier in my mind to keep things separated.

And if you use Portainer it pretty much becomes moot anyway since I use that almost exclusively to update containers.

No, you understood that correctly.

My point was that with the new zwave js (no matter how you run the server side) you won’t need to pass the Aeotec stick in to HA any longer.

You just need to pass it to the zwavejs2mqtt container. Which is done in exactly the same way as I showed above for the HA command.

But for reference here is my zwavejs2mqtt docker command:

sudo docker run -itd --name=zwavejs2mqtt \
    --restart unless-stopped \
    -p 8091:8091 -p 3000:3000 \
    --device=/dev/ttyUSB-ZStick-5G \
    -v /home/finity/docker/zwavejs2mqtt/store:/usr/src/app/store \
    zwavejs/zwavejs2mqtt

the pairing is stored on the stick itself so there shouldn’t be any need to re-pair everything from scratch.

I see people doing that all the time, but the vast majority of the time it isn’t needed and is a waste of time.

  1. Disable the firewall and use network=host. Docker will automatically open firewall ports, but not if you are using network=host.
  2. Enable the docker API for portainer access.
  3. As far as the accounts go, HA Core wants to run as root, so don’t add UID docker options. I think the other containers can be run under other UIDs as long as they don’t need to share resources.
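If you do keep the firewall up while using network=host, you’d open the relevant ports yourself. A sketch with ufw, assuming the default ports from the run commands earlier in the thread (adjust to your setup):

```shell
# with --network=host, Docker won't punch firewall holes for you
sudo ufw allow 8123/tcp   # Home Assistant frontend (default port)
sudo ufw allow 8091/tcp   # zwavejs2mqtt control panel
sudo ufw allow 3000/tcp   # zwavejs2mqtt websocket server
```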

So far I’ve managed to do the following:

  1. Setup a basic HA Core in venv on my test Pi, replicating key elements of my live setup:
    • running under a ‘homeassistant’ service account which holds the configuration under /home
    • with (deprecated) z-wave integration (controller only)
    • with shell_commands matching those in my live system
  2. Install docker and portainer, allowing docker to be run by the homeassistant account
  3. Get HA Container running / via docker (the below was run under the homeassistant account):
docker run --init -d --name homeassistant \
    --restart=unless-stopped \
    -v /etc/localtime:/etc/localtime:ro \
    -v /home/homeassistant/.homeassistant/:/config \
    --network=host \
    --device /dev/ttyACM0:/dev/ttyACM0 \
    homeassistant/home-assistant:stable
  4. Validate HA running correctly (having stopped the venv version service)
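For step 2, allowing the homeassistant account to run docker without sudo was just a matter of adding it to the docker group:

```shell
# add the service account to the docker group
# (the account needs to log out and back in for this to take effect)
sudo usermod -aG docker homeassistant
```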

NB: I’ve changed the plan as I still have a lot of reading to do on z-wavejs options

Command lines are:

shell_command:
  purge_snapshot_images: '/usr/bin/find /home/homeassistant/.homeassistant/www/snapshot/ -mindepth 1 -mtime +1 -type f -name "*.jpg" -delete'
  purge_snapshot_video: '/usr/bin/find /home/homeassistant/.homeassistant/www/snapshot/  -mindepth 1 -mtime +1 -type f -name "*.mp4" -delete'
  copy_file: 'cp {{ src_file }} {{ dst_file }}'

As expected, my command lines fail from within the docker container.

Error running command: `/usr/bin/find /home/homeassistant/.homeassistant/www/snapshot/ -mindepth 1 -mtime +1 -type f -name "*.jpg" -delete`, return code: 1

@finity I’ve tried to understand the method of resolving this that you kindly contributed to the discussion, however I can’t see how this will let me use the copy_file shell command.

My other two commands I can easily replicate as cron jobs as they are simply cleaning up old notification images (in live the -mtime is 13, but reduced for testing), however the parameterised file copy is a key component in any of my notification automations that send camera images.
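As a sketch, the two purge commands as host-side cron jobs would look something like this (the 03:00 schedule is arbitrary, and -mtime is back at my live value of 13):

```shell
# crontab for the homeassistant user (edit with `crontab -e`); runs daily at 03:00
0 3 * * * /usr/bin/find /home/homeassistant/.homeassistant/www/snapshot/ -mindepth 1 -mtime +13 -type f -name "*.jpg" -delete
0 3 * * * /usr/bin/find /home/homeassistant/.homeassistant/www/snapshot/ -mindepth 1 -mtime +13 -type f -name "*.mp4" -delete
```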

EDIT: I should probably give the reasoning behind the copy_file command in case there’s a better way of achieving its purpose.
My security camera automations take camera snapshots with a generic file name, e.g. garden_last_motion.jpg, used as source for a local_file camera.
This file is then copied to a uniquely named file, e.g. garden_last_motion_YYYYMMDDHHMMSS.jpg, used as the notification attachment.
This approach lets me have a dashboard tab displaying the last motion image for each camera whilst images attached to mobile notifications are kept unique in case of multiple triggers.
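In other words, the automation effectively does this (paths and filenames here are illustrative stand-ins):

```shell
# stand-in for the generic snapshot the camera automation writes
touch /tmp/garden_last_motion.jpg

# copy it to a uniquely named file for the notification attachment
STAMP=$(date +%Y%m%d%H%M%S)
cp /tmp/garden_last_motion.jpg "/tmp/garden_last_motion_${STAMP}.jpg"
```

The generic file keeps the local_file camera on the dashboard current, while the timestamped copies stay unique across multiple triggers.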

What is it you don’t understand?

normally docker containers don’t have access to run commands on the host.

If you set up an ssh key then that allows that limitation to be bypassed.

you will generate a key within the docker container and then copy that to the host system.

I’ve had issues in the past with quoting in parameterised shell commands like my copy_file command, and actually wound up getting rid of the command to get round them, as the issues seemed to change slightly from release to release.
I guess from my venv perspective it just seems like another hoop to jump through - I’ll have to mess around with it when I get a chance, or maybe see if I can remove the copy_file command from the equation…

Can you run that command (without the outer set of single quotes) from the host directly?