Work in progress: configuration for running Home Assistant in containers with systemd and podman on Fedora IoT

Continuing the discussion from Migrate from HA core to HA operating system (or container image) running in a systemd-nspawn container?:

I’ve been working on my setup for a bit and am pretty happy with it. I intend to write up a whole thing when its ready, but figured I might as well share some work-in-progress which might be helpful for others in the meantime.

Disclaimer: my day job gives me a vested interest in running this on Fedora Linux, but this isn’t an advertisement. I actually started the project to make sure I have some hands-on experience. But I think it’s a great setup and I’m excited to share it with others as well.

I’m also happy for your suggestions for doing things better — I’m new to the Home Assistant scene!

My current setup is running on Fedora Workstation on my desktop while I hack at it. My eventual plan is to move to Fedora IoT on a Jetson Nano, but I’m still waiting on some bootloader bits to come together before that’s ready.

Computer setup

  • As mentioned, running this on a Fedora Workstation system.
    • NOTE: I set this up on Fedora Linux 35 with Podman 3.x. I’m told that there may be some issues with the default networking in Podman 4.x in Fedora Linux 36. Figuring that out is on my list. :slight_smile:
  • Right now, I’m running it all out of my own user account; I intend to move that to using a dedicated hass account. An argument could be made for running each of these services in their own account for further isolation.
  • I’m using systemd and podman to launch and manage the containers.
    • I’m using systemd instead of OpenShift or another Kubernetes system because k8s seems like overkill for the situation, and because I’d have to figure out something with the USB devices.
    • Podman is generally a drop-in replacement for the more-famous Docker container tooling, but has some nice architectural advantages. It doesn’t require a daemon, and pioneered the idea of running containers as non-root.
    • SELinux wraps the containers in further restriction, so a lot of theoretical container-break-out exploits are mitigated without even knowing what they might be.
  • While I’d love to have all of the software packaged and integrated natively in Fedora and run Fedora-based containers, most of the software has actively-maintained upstream container images, and I’m using those directly. (I would like to have this all neatly tied up as an “out-of-the-box” thing for Fedora IoT, and for that we’d probably want to go through that packaging and integration work. Let me know if you’re interested in working on that idea!)
  • You’ll also need to add the container user to the dialout group: sudo gpasswd -a hass dialout.
    • An alternative might be a udev rule which sets ownership of the device to the container user directly. That’s a TODO item to investigate, but a rough sketch follows this list.
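
A rough sketch of that udev-rule alternative (untested; the vendor and product IDs here are placeholders, so substitute the values lsusb shows for your stick):

sudo tee /etc/udev/rules.d/99-hass-usb-serial.rules >/dev/null <<'EOF'
# Give the hass user direct ownership of matching USB serial devices
SUBSYSTEM=="tty", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="yyyy", OWNER="hass", MODE="0600"
EOF
sudo udevadm control --reload-rules
sudo udevadm trigger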

Network

  • Everything is running on my home private network. Some stuff is on a dedicated IoT subnet, but a lot of devices are annoying about that (“cast” devices, for example, want to be on the same network as people’s phones and computers), so… I’ll have to figure that out.
  • I’m using my own domain name, of course, but for the purposes of this demo I’m using home.example.org.
  • My system’s firewall configuration blocks high-port connections except from localhost. Right now, several of the containers are exposed with --network=host, but this keeps them blocked off. (A firewalld sketch follows this list.)
    • Nginx is running as a reverse proxy, providing access at https://smart.home.example.org.
    • So that the Nginx container can run as non-root, it’s actually bound to 8443 and 8080. The firewall forwards 443 and 80 to those ports.
    • To access the zwavejs2mqtt and zigbee2mqtt interfaces right now, I’m temporarily dropping the firewall. I intend to eventually set them up with their own names like https://zigbee.home.example.org (because I’m finding trying to convince them to live at paths like https://smart.home.example.org/zigbee is fragile). TODO. :slight_smile:
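
Here’s roughly what that firewall setup looks like: a sketch assuming the default FedoraWorkstation zone, which allows ports 1025-65535 out of the box. (Loopback traffic isn’t filtered, so localhost access keeps working.)

sudo firewall-cmd --remove-port=1025-65535/tcp
sudo firewall-cmd --remove-port=1025-65535/udp
sudo firewall-cmd --runtime-to-permanent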

Controllers and protocol-specific software

  • I have a mix of Z-Wave and Zigbee devices in my house, so I’m using both. I started with the Nortek HUSZBZ-1 both-in-one stick, but migrated to separate devices. No real need — that was actually working fine — but that’s not made anymore and uses older generations of both technologies. So:
  • For Zigbee:
    • I’m using the TubesZB USB coordinator. I started with a Sonoff controller, but decided to make that into an additional router to help with some sensors I’m adding outside. (Future work!). And I love the idea of small-scale devices produced with love by fellow hobbyists.
    • I’m using zigbee2mqtt. I wanted to stick with ZHA for less complexity, but ZHA and Ikea Tradfri currently do not play nice so I bit the bullet. I have some other ideas for which MQTT will be helpful, so … probably worth it anyway.
  • For Z-Wave:
    • I have the Zooz ZST10-700 stick. I updated to the latest firmware before I started with it, and haven’t noticed any of the trouble people have reported with the 700-series controllers, FWIW.
    • And using zwavejs2mqtt.
  • I was also using Homebridge, but as of today, I’ve switched to the new HA integration, which is nicer in several ways already anyway. If you need Homebridge or any other extra service, it can basically be added the same way as the zwave and zigbee containers.

Systemd and containers

As mentioned, the containers are all managed by systemd and run in user sessions. The various unit files described in the sections below go in ~/.config/systemd/user in the relevant user’s home directory. (Podman doesn’t currently support running non-root containers from the system-wide systemd configuration.)

You manipulate these with systemctl --user. For example:

 systemctl --user start container-homeassistant

You can also see their combined output with journalctl --user. This is nice because it ends up putting all of their logs in one place. Note that if you make changes to the unit files, you’ll need to run systemctl --user daemon-reload before they take effect.
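
For example, to follow the logs of just the Home Assistant container:

journalctl --user -f -u container-homeassistant.service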

Important: to keep systemd from stopping all of these containers when you log out, you need to run

loginctl enable-linger hass

… where hass is the user under which the containers run.

Podman Autoupdate

All of the containers are given a label which tells podman auto-update that they should be brought up to the latest container versions. There’s a timer unit which makes this happen. Run:

 systemctl --user enable --now podman-auto-update.timer

to turn this timer on for your user session. (Note that enabling this system-wide will not affect “rootless” containers owned by a user, as this setup uses.)
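
You can also preview what would be updated, without actually pulling anything, with:

podman auto-update --dry-run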

Directory structure

I’m putting everything under /srv/hass. Basically all local config goes there, with a subdirectory for each service. I also have the systemd unit files linked there, in /srv/hass/systemd (so I can edit them without thinking about what particular home directory they’re in), as well as a /srv/hass/secret directory which holds API keys and such.
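
If you’re starting from scratch, something like this creates the whole tree (this assumes the dedicated hass account mentioned earlier; adjust the owner if you’re running it from your own account):

sudo mkdir -p /srv/hass/{backup,certbot,hass,mosquitto/config,mosquitto/data,nginx,secret,systemd,zigbee,zwave}
sudo chown -R hass:hass /srv/hass
sudo chmod 700 /srv/hass/secret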

Certbot and Let’s Encrypt

I’m running on my home network, and didn’t want to expose systems this directly to the Internet. But I also wanted to use SSL, and a) the Android app won’t work with a self-signed cert and b) it’s better practice to use Let’s Encrypt when possible anyway. Most advice involves setting up dynamic DNS in some way, and making a network ingress path. But, there’s another way: the Let’s Encrypt “DNS challenge”. For this, you need a DNS provider that supports updating records via an API. Fortunately, Digital Ocean provides this — and it’s actually one of their free services. (Digital Ocean is awesome!).

But… Digital Ocean isn’t a domain registrar, and I already have most of my systems set up using DNS from my registrar. (Pair Domains, by the way. They’re awesome but kind of old-school in their approach, so they don’t have these nifty “cloud” features.) I was thinking I’d have to move everything to Digital Ocean, which would have been a chore — and also a lot of yak shaving. But I realized that DNS, as a technology, actually has a nice solution already. It is designed for hierarchical delegation, after all — so all that’s needed is to delegate part of the domain to Digital Ocean.

To do this, at my main registrar, I set up NS records for home.example.org:

home » ns2.digitalocean.com.
home » ns3.digitalocean.com.
home » ns1.digitalocean.com.

Now, any requests for home.example.org (or names further under that, like smart.home.example.org) will be sent to Digital Ocean, leaving the rest of example.org where it is. That way, only the systems related to this will be managed by Digital Ocean, which saves me work and also means that if my Digital Ocean API key is compromised, they can only mess with these hostnames, not my whole domain.
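
You can verify the delegation with dig (using the example domain here, of course):

dig +short NS home.example.org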

I won’t go into the details of setting up the DNS challenge with Digital Ocean… but here’s the practical config. It’s entirely done in the systemd units — you don’t need anything else except your dns_digitalocean_token in /srv/hass/secret/certbot-creds. The /srv/hass/certbot directory needs to exist, but you don’t need to have any files there to start. (Running the service will create a bunch of 'em, and those are supposed to persist between runs.)
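
For reference, the credentials file is just one line (the token value here is obviously a placeholder), and should be kept private:

cat > /srv/hass/secret/certbot-creds <<'EOF'
dns_digitalocean_token = YOUR-DIGITALOCEAN-API-TOKEN
EOF
chmod 600 /srv/hass/secret/certbot-creds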

~/.config/systemd/user/container-certbot.service

[Unit]
Description=CertBot container to renew Let's Encrypt certificates
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-abnormal
RestartSec=3600
TimeoutStopSec=60
TimeoutStartSec=10800
ExecStartPre=-/usr/bin/podman secret create certbot-creds /srv/hass/secret/certbot-creds
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
                          --cidfile=%t/%n.ctr-id \
                          --cgroups=no-conmon \
                          --rm \
                          --sdnotify=conmon \
                          -a stdout -a stderr \
                          --replace \
                          --label "io.containers.autoupdate=registry" \
                          --name certbot \
                          --volume=/srv/hass/certbot:/etc/letsencrypt:z \
                          --secret=certbot-creds,mode=0400 \
                          certbot/dns-digitalocean -n renew --dns-digitalocean --dns-digitalocean-credentials /run/secrets/certbot-creds --dns-digitalocean-propagation-seconds 3660
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=-/usr/bin/podman secret rm smart.home.example.org.cert 
ExecStopPost=/usr/bin/podman secret create smart.home.example.org.cert /srv/hass/certbot/live/smart.home.example.org/fullchain.pem 
ExecStopPost=-/usr/bin/podman secret rm smart.home.example.org.key
ExecStopPost=/usr/bin/podman secret create smart.home.example.org.key /srv/hass/certbot/live/smart.home.example.org/privkey.pem 

Type=oneshot
NotifyAccess=all

[Install]
WantedBy=default.target

~/.config/systemd/user/container-certbot.timer

[Unit]
Description=Renews any certificates that need them renewed
Requires=container-certbot.service

[Timer]
Unit=container-certbot.service
OnCalendar=weekly

[Install]
WantedBy=timers.target

And run systemctl --user enable --now container-certbot.timer. You can also manually run certbot with systemctl --user start container-certbot.service, which you probably will want to do right away initially. (TODO: you may have to run the container once in non-daemon mode to do the initial setup. I forgot to write this down as I was doing it… will check back and fix this.)
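
If you do need that initial run, something like this should work; it’s a sketch that mirrors the unit above, so double-check it against the certbot docs before relying on it:

podman secret create certbot-creds /srv/hass/secret/certbot-creds    # skip if the secret already exists
podman run --rm -it \
    --volume=/srv/hass/certbot:/etc/letsencrypt:z \
    --secret=certbot-creds,mode=0400 \
    certbot/dns-digitalocean certonly \
    --dns-digitalocean \
    --dns-digitalocean-credentials /run/secrets/certbot-creds \
    -d smart.home.example.org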

Running this will create a bunch of files under /srv/hass/certbot. If everything is going smoothly, you should be able to basically ignore that forever, because nothing else will ever need to reference it. The systemd service loads the certificates as podman secrets, which you’ll see in the next section.

Run podman secret ls to check. If everything worked, the output should look something like:

ID                         NAME                          DRIVER      CREATED      UPDATED      
15277f7482f758592a59644ca  smart.home.example.org.cert   file        5 days ago   5 days ago   
a230f0479ca6302f9a87835ff  smart.home.example.org.key    file        5 days ago   5 days ago   
cf9857a57f7005e140c491d34  certbot-creds                 file        4 weeks ago  4 weeks ago  


Nginx

This is pretty straightforward, really. You just need a config file and the systemd unit. There’s not even a web root involved. I might add some stuff to make pretty error pages for when HA is down (or reloading), but that’s a minor todo.

The config:

/srv/hass/nginx/nginx.conf

worker_processes auto;
error_log /dev/stdout info;
pid /run/nginx.pid;

include /usr/share/nginx/modules/mod-stream.conf;

events {
    worker_connections 1024;
}


http {

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    ssl_certificate "/run/secrets/smart.home.example.org.cert";
    ssl_certificate_key "/run/secrets/smart.home.example.org.key";

    server {
        listen       8080;
        listen       [::]:8080;
        server_name  _;

	return 301 https://$host$request_uri;

    }

    server {
        listen       8443 ssl http2;
        listen       [::]:8443 ssl http2;
        server_name  smart.home.example.org;

        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_ciphers PROFILE=SYSTEM;
        ssl_prefer_server_ciphers on;

        proxy_buffering off;


        location / {
            proxy_pass http://10.0.2.2:8123;
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
 
       }


    }
}

And I’m using Red Hat’s UBI container with Nginx. These are the same binaries as in Red Hat Enterprise Linux, but free to use in containers with no strings attached. You could use whatever nginx container you like, really: it doesn’t matter much. But I like this one because it has relatively long-term security commitments, making it one less thing to worry about.

~/.config/systemd/user/container-nginx.service

[Unit]
Description=Nginx in a container to proxy to Home Assistant
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
Wants=container-certbot.service
After=container-certbot.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=US/Eastern
Restart=on-failure
RestartSec=30
TimeoutStopSec=10
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
                          --cidfile=%t/%n.ctr-id \
                          --cgroups=no-conmon \
                          --rm --sdnotify=conmon \
                          --replace \
                          --detach \
                          --label "io.containers.autoupdate=registry" \
                          --name nginx \
                          --net slirp4netns:allow_host_loopback=true \
                          -p 8080:8080 \
                          -p 8443:8443 \
                          --volume=/srv/hass/nginx/nginx.conf:/etc/nginx/nginx.conf:Z \
                          --secret smart.home.example.org.cert \
                          --secret smart.home.example.org.key \
                          registry.access.redhat.com/ubi8/nginx-120:latest nginx -g "daemon off;"
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

Firewall config:

As noted above, this runs on ports 8443 and 8080. Then, the standard ports are packet-forwarded locally by the firewall. To set this up, run:

sudo firewall-cmd --add-forward-port=port=80:proto=tcp:toport=8080
sudo firewall-cmd --add-forward-port=port=443:proto=tcp:toport=8443
sudo firewall-cmd --runtime-to-permanent
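
You can confirm the forwards with:

sudo firewall-cmd --list-forward-ports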

The Actual Home Assistant Container :star: :star: :star:

The main event, right? Actually not a lot to say here. Note that it “Wants” the other containers for Z-Wave and Zigbee rather than “Requires”. That means it will try to start them, but won’t fail (or be shut down) if they aren’t available.

Important: this is set up to use a watchdog provided by brianegge/home-assistant-sdnotify on GitHub (a systemd notify service for Home Assistant). This makes it so that if Home Assistant fails in some weird way but does not exit, the container will restart. Until you’ve got that set up (or if you decide not to use it), see the note in the comments.

You’ll need to make /srv/hass/hass before starting, but there doesn’t need to be anything there.

~/.config/systemd/user/container-homeassistant.service

[Unit]
Description=Home Assistant Container
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
Wants=container-zwave.service
Wants=container-zigbee.service
After=container-zwave.service
After=container-zigbee.service


[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=US/Eastern WATCHDOG_USEC=5000000
Restart=on-failure
RestartSec=30
TimeoutStopSec=70
# note: this is using https://github.com/brianegge/home-assistant-sdnotify.
# If not using that, remove the `WatchdogSec` line  and change
# `--sdnotify=container` to `--sdnotify=conmon`.
WatchdogSec=60
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
                          --cidfile=%t/%n.ctr-id \
                          --cgroups=no-conmon \
                          --rm \
                          --sdnotify=container \
                          --replace \
                          --detach \
                          --label "io.containers.autoupdate=registry" \
                          --name=homeassistant \
                          --volume=/srv/hass/hass:/config:Z \
                          --network=host \
                          ghcr.io/home-assistant/home-assistant:stable
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

A brief interlude for SELinux and permissions

The next two containers, for Z-Wave and Zigbee, need access to the USB tty devices to actually talk to their controllers. If you are using at least container-selinux-2.176.0, there is a boolean container_use_devices you can use to allow containers to access devices, but that’s broader than I’d really like. So, I set up a custom SELinux module which just allows containers to access USB serial devices. This is still broader than I’d like (I really want to allow each specific container to access only its specific device), but it will do for now. Here’s what to do.

Start with this policy source file:

container-usbtty.te

module container-usbtty 1.0;

require {
	type container_t;
	type usbtty_device_t;
	class chr_file { getattr ioctl lock open read write };
}

#============= container_t ==============
allow container_t usbtty_device_t:chr_file { getattr ioctl lock open read write };

and run:

checkmodule -M -m -o container-usbtty.mod container-usbtty.te
semodule_package -o container-usbtty.pp -m container-usbtty.mod
sudo semodule -i container-usbtty.pp

This will persist; you only need to do it once.
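
You can check that it’s loaded with:

sudo semodule -l | grep container-usbtty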

You’ll also need to add your container-user (hass in this example) to the dialout group, which will allow regular Unix-permission access to these devices:

sudo gpasswd -a hass dialout

will do it. (If you’re working as the hass user, you’ll need to log out and in again to get the new supplementary group list).

The Z-Wave container

Note that despite the name, this doesn’t require MQTT. You can configure it, instead, to talk directly to Home Assistant, and that’s what I’ve done since it seems to be the generally-recommended approach.

The directory /srv/hass/zwave needs to exist, but can be empty to start — when you run the service and connect to its web UI on port 8091 in your web browser, you can configure everything from there.

This config maps the device identifier to /dev/zwave. That way, the zwavejs2mqtt config doesn’t need a long complicated path. Something feels “itchy” to my sense of neatness about how much important configuration lives in podman command-line parameters. I may look at making a shared wrapper so things like device paths can live in a common config file.
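
To find the stable path for your own stick, look in /dev/serial/by-id:

ls -l /dev/serial/by-id/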

~/.config/systemd/user/container-zwave.service

[Unit]
Description=ZWave To MQTT Container
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=US/Eastern
Restart=on-failure
RestartSec=30
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
                              --cidfile=%t/%n.ctr-id \
                              --cgroups=no-conmon \
                              --rm \
                              --sdnotify=conmon \
                              --replace \
                              --detach \
                              --label "io.containers.autoupdate=registry" \
                              --name=zwave \
                              --group-add keep-groups \
                              --device=/dev/serial/by-id/usb-Silicon_Labs_Zooz_ZST10\u00A0700_Z-Wave_Stick_0001-if00-port0:/dev/zwave:rw \
                              --volume=/srv/hass/zwave:/usr/src/app/store:Z \
                              -p 8091:8091 \
                              -p 3000:3000 \
                              zwavejs/zwavejs2mqtt:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

MQTT Bridge (Mosquitto)

MQTT is a standard messaging protocol used by lots of things in IoT. I initially didn’t want to bother with it, but there’s currently a problem with Home Assistant’s built-in Zigbee support (“ZHA”), and several people suggested that zigbee2mqtt doesn’t have the same issue. So, having exhausted other ideas, I tried it and — that seems to be true. And, unlike the zwave program, zigbee2mqtt doesn’t have a direct way to communicate with Home Assistant, so I ended up biting this bullet.

My configuration here is really simple. There’s a lot more I could and probably should do. I definitely want to set up SSL and username/password.

For this one, you need the /srv/hass/mosquitto/config and /srv/hass/mosquitto/data directories pre-created. You could also have /srv/hass/mosquitto/log, but I’ve currently configured it to not log, on the theory that I’ve got plenty of logging on the other side.

In this case, you do need to create the config file before starting the service.

Note: I’m open to suggestions and help for how I should configure this better. Currently, I don’t have it listening on the websockets interface, which I might want to add so I can use a web console tool for debugging.

/srv/hass/mosquitto/config/mosquitto.conf

log_dest stdout
log_type none
allow_anonymous true
listener 1883
persistence true
persistence_location /mosquitto/data/
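
If I do add the websockets listener mentioned above, I believe the change is roughly this (9001 is just the conventional port choice, and container-mosquitto.service below would also need that port published):

cat >> /srv/hass/mosquitto/config/mosquitto.conf <<'EOF'
listener 9001
protocol websockets
EOF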

And the systemd service:

~/.config/systemd/user/container-mosquitto.service

[Unit]
Description=Mosquitto MQTT broker in a container
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=US/Eastern
Restart=on-failure
RestartSec=30
TimeoutStopSec=10
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
                          --cidfile=%t/%n.ctr-id \
                          --cgroups=no-conmon \
                          --rm \
                          --sdnotify=conmon \
                          --replace \
                          --detach \
                          --label "io.containers.autoupdate=registry" \
                          --name mosquitto \
                          -p 1883:1883 \
                          --volume=/srv/hass/mosquitto/config:/mosquitto/config:Z \
                          --volume=/srv/hass/mosquitto/data:/mosquitto/data:Z \
                          --volume=/srv/hass/mosquitto/data:/mosquitto/log:Z \
                          eclipse-mosquitto
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

The Zigbee container

This one needs its configuration file created before you start the service the first time. Important: I’m changing the web port to 8092 from the default 8080, because 8080 is used by my Nginx config. (It wouldn’t have to be, but I’ve set it up already, so I went this way.) I picked the port one higher than the one zwavejs2mqtt uses. (Reminder: these are blocked from outside-the-server access by the firewall. I intend to eventually also put them behind the nginx proxy, so all of these port numbers become obscure implementation details you don’t have to worry about later.)

Similar to the Z-Wave configuration, I’ve mapped the controller to /dev/zigbee inside the container, so you only need to worry about the actual device name from the outside.

I’m logging to both console and file here (the former then goes to journalctl), and I have the log level set to debug while I’m setting things up. I will probably reduce that to info once I feel confident about everything.

The mqtt server is reached at 10.0.2.2, the address podman’s slirp4netns networking uses for the host, which works because of --network=slirp4netns:allow_host_loopback=true in the podman config below. I may work on setting up that network in a more sophisticated way. Also, there should be a way to refer to it by name rather than a hard-coded IP, but that wasn’t working for me — another TODO!

/srv/hass/zigbee/configuration.yaml

homeassistant:
  legacy_entity_attributes: false
  legacy_triggers: false
frontend:
  port: 8092
  url: http://smart.home.example.org:8092
permit_join: false
mqtt:
  base_topic: zigbee2mqtt
  server: mqtt://10.0.2.2
serial:
  port: /dev/zigbee
advanced:
  homeassistant_legacy_entity_attributes: false
  legacy_api: false
  log_level: debug
  log_output:
    - console
    - file
device_options:
  legacy: false

Once you’re running, zigbee2mqtt itself will add detected devices to this file. But the above is what you need to start.

and…

~/.config/systemd/user/container-zigbee.service

[Unit]
Description=Zigbee To MQTT Container
Wants=network-online.target
After=network-online.target
Requires=container-mosquitto.service
After=container-mosquitto.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=US/Eastern
Restart=on-failure
RestartSec=30
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
                              --cidfile=%t/%n.ctr-id \
                              --cgroups=no-conmon \
                              --rm \
                              --sdnotify=conmon \
                              --replace \
                              --detach \
                              --label "io.containers.autoupdate=registry" \
                              --name=zigbee \
                              --group-add keep-groups \
                              --network=slirp4netns:allow_host_loopback=true \
                              --device=/dev/serial/by-id/usb-1a86_TubesZB_971207DO-if00-port0:/dev/zigbee:rw \
                              --volume=/srv/hass/zigbee:/app/data:Z \
                              -p 8092:8092 \
                              koenkk/zigbee2mqtt:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

Backups

Right now, they’re just covered by my desktop system backups. In the future, I’m going to set up something specific for this. You should just be able to back up all of /srv/hass.

There is however one notable exception: if you want to avoid the possibility of backing up home-assistant_v2.db when it’s in a not-settled state (possibly leading to corruption and losing your state history), you should periodically run something like

echo 'vacuum into "/srv/hass/backup/home-assistant_v2-bak.db"' | sqlite3 /srv/hass/hass/home-assistant_v2.db
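
Two small notes if you script that: the target directory needs to exist, and sqlite’s VACUUM INTO refuses to overwrite an existing file, so clear out the previous copy first:

mkdir -p /srv/hass/backup
rm -f /srv/hass/backup/home-assistant_v2-bak.db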

TODO

There are a number of things I want to do yet. Notably:

  1. SSL for the zigbee and zwave web frontends (there are still a number of things more easily done directly than with the HA interface)
  2. SSL for mosquitto
  3. Redo my whole network with unique security keys for zwave and zigbee. Some rainy day. :slight_smile:
  4. Collect all of this into a git repo and/or tar release which you can just clone or unpack to get started. And maybe even a setup script which takes it a step further, using your desired hostnames and proper device paths, setting up the systemd session properly, etc.

Other ideas:

  • Containerized tools for zigbee and zwave controller firmware updates
  • Create a wrapper for all the podman calls so the systemd files can be more boilerplate and the specifics of the setup in a general config file
  • Move to an in-memory sqlite database for home-assistant, copying to disk only periodically. Trade some durability (possibly losing an hour or so of history) for performance
  • figure out how to make HA do a clean shutdown when it gets a SIGTERM from podman (or use a different signal and configure for that?)
  • podman auto-update rollback that is informed by tests run on startup, so that if there’s a breaking change it automatically reverts to known-good

Questions, Comments?

And that’s basically it. Let me know what’s unclear — and especially any suggestions for improvement.


This is great, and I’m following along closely. I actually think I will probably convert my install to the same exact structure you’re using here. But I’d like to ask a few questions.

  1. What role is Nginx playing here?

  2. The idea you have here seems to be that there’s a zwave and a zigbee service that’s separate, and both of those publish to MQTT which is then where home assistant comes in and actually just listens to MQTT messages only. Is that correct?

  3. Are you running Mosquitto / MQTT broker from within the HomeAssistant container as part of home assistant, or is it its own container?

I have not used MQTT with HA before (my setup is pretty simple, controls some lights and sprinklers and has a few sensors for motion detection etc). So far HA is just running Zigbee and ZWave integrations. How does MQTT interface with HA here? Basically are all your automations etc triggered by MQTT message reception rather than say directly by zigbee or zwave events? Did you have to design the whole MQTT message structure, or is there something “built in” to the zwavejs2mqtt and zigbee2mqtt modules?

Thanks for clarifications etc.


How do you deal with breaking changes?

  1. I wanted SSL with real certificates. I know you can do this with Home Assistant directly, but that has several downsides. For one thing, every time you refresh your cert, you have to restart HA. So, the idea is that Nginx is the “public” face (still only on my own network, although you could also expose it to the world if you like). All connections go to that, and then are proxied to the appropriate place. Right now, I only have Home Assistant itself set up in this way, but I plan to add the other services as well for direct access.
  2. Yes. Sort of. For the case of Z-Wave, zwavejs2mqtt is the recommended approach (and the built-in Z-Wave is being dropped). However, despite the name, this program has the ability to talk directly (not via MQTT) to Home Assistant via the HA Z-Wave JS integration. And that is the recommended approach (See docs here.) Now that I have MQTT set up for Zigbee, I might look at whether there are any advantages of having that speak MQTT too… but it’s working so I haven’t dug into that yet.
  3. Mosquitto is in its own container. Config for that coming to the first post soon. :slight_smile: The zigbee2mqtt software doesn’t have a separate HA-specific bridge, so MQTT is the only option. You could use ZHA (built into Home Assistant), but the Ikea button problem didn’t give me the option.

So far, none of them have broken my setup. At this point, the answer is “pay attention and fix if needed”. However, podman-auto-update has a feature where it will roll back a container to the previous version, if a new one fails to start. That currently isn’t aware of multi-container setups, but I think that’s the direction I’ll look for addressing this.

My idea is to create some test automations which exercise both HA and the external containers, have those run on the Home Assistant startup trigger, and if they fail, exit HA. (Options I’m considering: 1) change the systemd config to consider any exit a failure, 2) enhance the core homeassistant.stop integration to be able to exit uncleanly, 3) enhance the sdnotify integration to either have a service telling it to fail, or to check the state of a helper toggle, 4) have something on the host watching MQTT and use that to signal the problems and react.)

I’m thinking rather than Auto-update I’d probably run it without auto update, and then when I want to update, make that modification to use auto-update and restart everything, so updates occur at times when it’s convenient to debug and I’m paying attention :wink:

Maybe even make a button on HA that will send an MQTT “autoupdate” message and then something responds to that such that everything gets rebooted into auto-update mode.

a big part of wanting to go this route is I just don’t want to spend much time doing updates and debugging, so I’m hoping containers give me an easier maintenance path.

Yeah, that’s not a bad idea. Note that you can leave the config as I have it with the auto-update label, but not enable the timer for the service. Then you can run podman auto-update by hand (or with your “something that responds”) when you want to. See also podman auto-update --dry-run.

And in case you didn’t notice: I’ve updated the post with the rest of my container configs, and a few more notes throughout. Hope this is helpful!

I think eventually I’ll gather all of this into a git repo and/or tar release that you can just clone or unpack to get started.

I’m trying to copy this structure into my own git repo. It seems like your zigbee files were copied from zwave and have some incorrect comments and file names at least; can you take a look at that and see if there’s anything else that needs adjustment?

Or maybe shove all this into a github repo? I’d be happy to clone it, and test it out, provide PRs to fix etc.

Oops sorry. Fixing now…

@dlakelan I think it was actually correct, just had “Zwave” written instead of zigbee. Check now? And okay, yeah, I guess I’ll get around to setting up the git repo :slight_smile:

Ok, so I’ve tried to set this up, and I’m getting:

homeassistant@gluster2:~ $ systemctl --user start container-homeassistant
Job for container-homeassistant.service failed because a timeout was exceeded.
See "systemctl --user status container-homeassistant.service" and "journalctl --user -xeu container-homeassistant.service" for details.

the output of journalctl --user is blank

Note that I have this device behind an HTTP / HTTPS proxy, and the various environment variables are set, but I’m not clear that podman is using the proxy to download the container.

I think part of the issue is my home directory on this machine is on a glusterfs and it can’t pull properly.

I’m going to put the homeassistant user home directory in /var/local/homeassistant/ and see if it works.


even though I’ve changed the home directory, when I try to manually podman pull the container it gives:

Error: writing blob: adding layer with blob "sha256:764d2e53e1a607f2d8261522185d5b9021ade3ec1a595664ee90308c00176899": Error processing tar file(exit status 1): Error setting up pivot dir: mkdir /home/homeassistant/.local/share/containers/storage/overlay/ac8f2b70f27b844bfee334d54d3701a739c37ffe619d8a7dbb54260e88b5fe1d/diff/.pivot_root536627039: permission denied

why is it still trying to use /home/homeassistant instead of /var/local/homeassistant?
is that hard-coded in podman or something?

Did it create anything in the /srv/hass/hass directory?

You could also try running podman by hand first. (Leave off the cidfile option because it doesn’t matter here, or replace with /run/user/$UID/container-homeassistant.service.ctr-id.)

Haven’t had a chance to look at it further been running around doing lots of errands.

Wdyt about why is it complaining about setting up pivot dir in /home/homeassistant rather than the proper home directory?

Hoping I can get back to looking at it in an hour or two after dogs and kids are all taken care of.

well, I went in and deleted /home/homeassistant/.local/share/containers and now in addition to the original error I get:

ERRO[0000] Failed to created default CNI network: open /home/homeassistant/.local/share/containers/storage/libpod/defaultCNINetExists: no such file or directory 

but the home directory for the homeassistant user is /var/local/homeassistant

homeassistant@gluster2:~ $ env | grep HOME
HOME=/var/local/homeassistant

So I really don’t know what the hell is trying to do anything in /home/homeassistant anyway, is this a bug where the container makes an assumption about where the home directory should be rather than using $HOME?

That seems unlikely … it’s documented as using $HOME. Maybe something weird is cached somewhere?

What if you add --root /var/local/homeassistant/.local/share/containers/storage between podman and run on the command line?

Just to check – this is on Raspbian? If you’re on a Fedora Linux system, you’ll need to set up SELinux right for a home directory in a non-standard location. (But that’s not the error you’d get.)

Yes it’s on RasPiOS / testing.

I’m going to give it a try with the --root

here’s what I got when I try to pull:

homeassistant@gluster2:~ $ podman --root /var/local/homeassistant/.local/share/containers/storage pull ghcr.io/home-assistant/home-assistant:stable
Error: database libpod root directory (staticdir) "/home/homeassistant/.local/share/containers/storage/libpod" does not match our libpod root directory (staticdir) "/var/local/homeassistant/.local/share/containers/storage/libpod": database configuration mismatch

@mattdm i’m really quite ignorant of podman and docker and such. so I have no idea what “database libpod root directory (staticdir) …” means. I’m guessing there is some way to override this directory and that libpod: database configuration mismatch · Issue #1853 · containers/podman · GitHub is a relevant issue… but I don’t know enough about it all to figure out what the resolution should be. Also that issue is on a much older version of podman.

Oh! What’s in ~/.config/containers?

I think you should be able to just zap everything there. I bet that’s what’s held over.