Planning migration of a venv Core install into Docker (Ubuntu 18.04) - Questions

I’m probably not being very clear - long day.

The copy_file command works perfectly as-is in my current venv setup. I’m concerned about building a more complicated version, including the SSH required from within docker, and making it work reliably, as I’ve had issues with multi-quoted command lines in the past (sending SMS before the gsm modem integration).

I may have a try at replacing the copy_file mechanism that feeds my local_file cameras with a folder_watcher, as an alternative to SSH, but will experiment as and when over the rest of the week.

No, I just wanted to try to help you get it going if you didn’t want to come up with another solution, so I was probing to figure out for sure where the failure point was.

If not that’s fine too.

Just let me know if you need any more help. I may not be able to but I’ll give it my best. :slightly_smiling_face:


Just a brief update on my progress for anyone watching and those who’ve offered assistance.

With the help of my test install, I established I could replace my copy_file shell command with a folder_watcher that updates my local_file cameras and associated picture_entity cards with the most recent image from each camera.

I’ve now moved my production system across to using the folder watcher method, whilst taking the opportunity to sort out the awful mess I’d made implementing the media_source integration, and to move my camera images into that folder structure (instead of www/), which has the added bonus of requiring authentication to view the images.

The remaining command line sensors can be run as cron jobs so I’m now shell_command free, taking that configuration quirk out of the running.
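For anyone doing the same, a minimal sketch of the cron side (the script path and output file below are hypothetical; any former command_line sensor script can be driven this way):

```
# Hypothetical crontab entry: run the old sensor script every 5 minutes
# and write its output somewhere HA can read it with a file-based sensor
*/5 * * * * /home/homeassistant/bin/poll_sensor.sh > /home/homeassistant/.homeassistant/sensor_output.txt 2>&1
```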

This weekend will be mostly spent playing around with the zwavejs / zwavejs2mqtt docker containers on my test system to get my head around that side of things.


So with my live HA prepped and fully operational without my troublesome command line sensors, I’ve turned my attention to checking out zwavejs2mqtt on my basic test pi (zwave stick attached but no devices).

I went with zwavejs2mqtt (for now at least) because I understand it is currently the only way to get a control panel. Install via docker was fairly straightforward, and I’ve linked it to my test HA over websocket (mqtt is disabled on the zwavejs2mqtt side).

It seems like I’m set to move across once I have a suitable tinkering window, so I offer my migration plan and proposed docker setup in case I’m missing something, or in case they are of use to others. Any final hints / tips welcome.

Current HA Core install is a venv run as a service under a dedicated ‘homeassistant’ user account, with configuration stored under /home/homeassistant/.homeassistant/.
Base OS is Ubuntu 18.04 on an Intel NUC. Configuration and .storage backups are run nightly.
Zwave devices are all Aeotec: 4 * ZW096 Smart Switch 6, 3 * ZWA005 TriSensor, 2 * ZWA008 Door/Window 7 (using the dry contacts).

Migration Steps

  1. Note current entity names by node from HA.
  2. Install Docker and Portainer. Grant docker access to the homeassistant and normal user accounts. Stop and disable the HA Core venv service.
sudo -i
apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
apt-key fingerprint 0EBFCD88
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt install docker-ce docker-ce-cli containerd.io
docker run hello-world
systemctl enable docker
sudo usermod -aG docker homeassistant
sudo usermod -aG docker <normaluser>
docker volume create portainer_data
docker run -d -p 9000:9000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
sudo systemctl stop home-assistant@homeassistant.service
sudo systemctl disable home-assistant@homeassistant.service
  3. Set up the zwavejs2mqtt docker container, running as the homeassistant account and mapping the zwave device.
sudo su -s /bin/bash homeassistant
docker run --name zwavejsmqtt -p 8091:8091 -p 3000:3000 --restart=unless-stopped -v /home/homeassistant/.zwavejs:/usr/src/app/store --device=/dev/ttyACM0:/dev/ttyACM0 zwavejs/zwavejs2mqtt:latest
  4. Configure zwavejs2mqtt and check the zwave nodes come up (waking battery devices as required).
  5. Set up HA Container in docker, running as the homeassistant account and mapping the device for SMS.
sudo su -s /bin/bash homeassistant
docker run --init -d --name homeassistant --restart=unless-stopped -v /etc/localtime:/etc/localtime:ro -v /home/homeassistant/.homeassistant/:/config --network=host --device /dev/sms:/dev/sms homeassistant/home-assistant:stable
  6. Remove the deprecated zwave integration in HA, restart, and add the zwavejs integration. Rename entities as required.
  7. Adjust nightly backups to include the .zwavejs storage folder.
  8. Profit.

That all looks good, except you might want to revisit how you send the device to zwavejs2mqtt.

You’ve been running your zwave system in HA for a while, so you would know whether it is an issue or not, but it’s usually best to expose those devices via a method that won’t ever change, because the device path can change across restarts or if the stick is pulled and re-inserted.

You can use the “by-id” path, or use a udev rule to make your device path persistent and then use that persistent path in your docker run command. I do the latter in my production HA but the former in some test instances.

Like I said, it doesn’t look like you’ve been bitten by this since you are still using the ttyACM0 path, but others might be, so I’m throwing it in for completeness in case others find this thread later.
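For anyone following along, a sketch of the udev approach. The vendor/product IDs below are assumptions for an Aeotec stick; check your own with `udevadm info -a -n /dev/ttyACM0 | grep -iE 'idVendor|idProduct'` before using:

```
# /etc/udev/rules.d/99-zwave.rules -- hypothetical IDs, verify against your stick.
# After `sudo udevadm control --reload-rules` and re-plugging the stick,
# /dev/zwave will always point at it regardless of enumeration order.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="zwave"
```

The docker run line then maps `--device=/dev/zwave:/dev/zwave` instead of ttyACM0. The by-id alternative needs no rule at all: `ls -l /dev/serial/by-id/` shows a stable symlink that can be passed to `--device` directly.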

I have been pondering that - I have my USB GSM modem set via a udev rule (something I learned about when setting up my original SMS via gammu command line approach pre-integration) so I can look into doing that at switchover.

Side question - Am I right in assuming that the z-wave stick isn’t expected to show as ‘secure’ in the zwavejs2mqtt console, and that the column is only relevant for added nodes?

I’m not 100% sure of that but mine is a secure stick and shows “no” under the secure column.


So in reading this I realized it is just like an issue I ran into yesterday: I can’t figure out the quoting when running a command on the host from the docker container. I’m new to these extra steps to get out of the docker container, but I’ve figured out a few more basic commands this way. It seems all of the nested quotes are messing it up.

The command directly on host:

echo "$(($(date +%s) - $(date --date="`systemctl show unifi.service -p ActiveEnterTimestamp | awk -F'=' '{print $2}'`" "+%s")))"

Returns time in seconds

When running from the HA Container to the host:

ssh -o StrictHostKeyChecking=no -i /config/ssh/id_rsa [email protected] 'echo "$(($(date +%s) - $(date --date="`systemctl show unifi.service -p ActiveEnterTimestamp | awk -F'=' '{print $2}'`" "+%s")))"'

I get:

awk: line 2: missing } near end of file
62197

That is a little more than double what the seconds should be, I believe. I tried a few different variations of quotes, but they all seem to muck it up even more. It sounded from your comment like you might have some familiarity with dealing with this stuff.
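For what it’s worth, the failure is the nested single quotes: the whole remote command is wrapped in single quotes for ssh, so the single quotes around awk’s program terminate that string early (hence `awk: line 2: missing }`). One way out is to avoid single quotes in the remote command entirely; `cut -d= -f2` splits on `=` just like `awk -F'=' '{print $2}'` but needs no quoting. A local sketch of the arithmetic, using a sample timestamp line in the shape of `systemctl show` output:

```shell
# Sample line in the same shape as `systemctl show -p ActiveEnterTimestamp` output
line="ActiveEnterTimestamp=Mon 2021-03-01 10:00:00 UTC"

# cut -d= -f2 pulls out the timestamp without needing any quote characters,
# unlike awk -F'=' '{print $2}', which needs single quotes of its own
started=$(printf '%s\n' "$line" | cut -d= -f2)

# Seconds elapsed since that timestamp (GNU date)
secs=$(( $(date +%s) - $(date --date="$started" +%s) ))
echo "$secs"
```

The full ssh version then only ever uses single quotes at the outer level (with the deprecated backticks swapped for `$(...)` as well); I haven’t run this against your unifi host, so treat it as a starting point:

ssh -o StrictHostKeyChecking=no -i /config/ssh/id_rsa [email protected] 'echo $(( $(date +%s) - $(date --date="$(systemctl show unifi.service -p ActiveEnterTimestamp | cut -d= -f2)" +%s) ))'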

Here is one of my shell commands that works:

shutdown_server_pc: ssh -l <user> 192.168.1.11 "net rpc shutdown -f -t 60 -C 'Home Assistant has called for shutdown in 60 seconds' -U server%<password> -I 192.168.1.15"

I was all set to move across tomorrow; however, the 2021.3.x release means that I’ll have to rework my template fans (in a way that I’ve yet to understand how to do).

I don’t want this to delay my move across.

With that in mind (and for my notes for future maintenance), is there an elegant way to use the previous version temporarily while I figure out what configuration changes are required for the fans?

Looking at Docker Hub, I can see that docker pull homeassistant/home-assistant:2021.2.3 will obtain the version I’m currently running in a venv.

Presumably, I should simply change my planned docker command:

docker run --init -d --name homeassistant --restart=unless-stopped \
-v /etc/localtime:/etc/localtime:ro -v /home/homeassistant/.homeassistant/:/config \
--network=host --device /dev/sms:/dev/sms homeassistant/home-assistant:stable

to

docker run --init -d --name homeassistant --restart=unless-stopped \
-v /etc/localtime:/etc/localtime:ro -v /home/homeassistant/.homeassistant/:/config \
--network=host --device /dev/sms:/dev/sms homeassistant/home-assistant:2021.2.3

Or is there a more elegant way to handle this (ideally with Portainer) that I’m not seeing due to lack of experience?

I’m now at the ‘Profit’ stage of my migration plan, running the latest release with some suitably adjusted template fans.
All my zwave devices moved across successfully and are tested and working, as are all my integrations. I have a couple of UI elements to adjust - I used to show z-wave device states (ready, initializing, etc) with the old integration - but everything works.

I wound up mapping my z-wave stick with a udev rule, and running the containers as the standard host user account rather than the homeassistant service account as originally planned. I also added a self-signed certificate to my Portainer instance.

Thanks to @finity and @Burningstone and all who contributed to the discussion with valuable points.

For anyone else in a similar boat, I thoroughly recommend setting up a basic test instance if you can - my test instance gave me valuable insight into several configuration items in my system that would have been nasty surprises if I’d attempted the move cold.


:+1:

And using Docker makes that dead-easy. :slightly_smiling_face: