Work in progress: configuration for running a Home Assistant in containers with systemd and podman on Fedora IoT

I had to use

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 127.0.0.1

for this to work.

I have the nginx reverse proxy set up the same as you, and it uses 10.0.2.2 there. Did you confirm that using 10.0.2.2 in trusted_proxies works on your setup?

I rebooted my machine and realized that homeassistant doesn’t start automatically; I have to log in as the “homeassistant” user and start it with systemctl --user start container-homeassistant.

What’s the right way to make this happen automatically at boot time?


Oh, good point. Use enable instead of start. Note that user units only start at boot if lingering is enabled for that user (loginctl enable-linger homeassistant). I’ll add a section on this.

Yes, it should be the container network, not localhost; although if you are using --net=host, localhost will work too.
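For the record, a sketch of the corresponding configuration.yaml fragment for the container-network case (assuming the reverse proxy reaches Home Assistant via the 10.0.2.2 gateway address mentioned above; substitute whatever address your proxy actually connects from):

```yaml
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 10.0.2.2   # host-side gateway as seen from the rootless container
```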


Am I the only one getting a systemd timeout when starting the homeassistant container?

May 01 16:07:50 server systemd[930]: container-homeassistant.service: start operation timed out. Terminating.

If I set the systemd service to Type=simple and comment out the watchdog, then it works.
Using F36.

Do you have the watchdog timer from brianegge/home-assistant-sdnotify (a systemd notify service for Home Assistant, on GitHub) installed? I did this by starting without it, installing via HACS, and then changing the config. I should provide a version of the systemd unit file that is set up without it. (Or, possibly, figure out instructions for installing it beforehand?)
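In the meantime, a drop-in override is one way to apply the Type=simple / no-watchdog workaround without editing the generated unit file itself. This is a sketch, assuming the container-homeassistant.service user unit from this thread (the drop-in file name is my own invention):

```shell
# Create a drop-in that stops systemd from waiting on sd_notify/watchdog
# pings from inside the container.
dropin="$HOME/.config/systemd/user/container-homeassistant.service.d"
mkdir -p "$dropin"
cat > "$dropin/no-watchdog.conf" <<'EOF'
[Service]
# Don't wait for READY=1 from inside the container
Type=simple
# A value of 0 disables the watchdog timer
WatchdogSec=0
EOF
```

After creating it, run systemctl --user daemon-reload and restart the service.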


Ahh, no, I didn’t install it; so far I didn’t see the need for it. That’s probably the issue then, thanks!
I’ll maybe have a look when I can get RF devices to work properly.


I seem to have problems running the HA container under podman.

I recently switched from docker-ce to podman, and I have a problem with HA dying on a timeout when started from systemd.

[cont-finish.d] done.
[s6-finish] waiting for services.
[finish] process exit code 0
s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

This does not happen if I do podman run directly, so it’s related to the service setup at least.

Any suggestions?

Is this using my setup above, or something else? I think we’ll need more logs to figure out what’s going on.

Hi

Using your setup in podman, but the OS is Ubuntu.
I figured out I could bypass the problem by changing

--sdnotify=container \

To

--sdnotify=conmon \

Do you have the same issue as a few posts up? (See #77 by mattdm in this thread.)

Just wanted to mention that now everything works fine, thanks for the initial write-up.
I just added a node-red container (I still haven’t really understood the hass way to automate things), and the only thing I’d like to review is the whole networking between containers, the local host, and the network.


Yeah, and with Podman 4.0 there’s a whole new networking stack by default, so I have to figure that out.

I’m also looking at a project called quadlet, which should simplify the systemd unit files significantly.


If your networking equipment doesn’t offer such a service, you could look into this:

$ dnf info mdns-repeater
Last metadata expiration check: 0:00:05 ago on Wed 11 May 2022 11:55:55 AM CDT.
Available Packages
Name         : mdns-repeater
Version      : 1.11
Release      : 5.fc35
Architecture : x86_64
Size         : 24 k
Source       : mdns-repeater-1.11-5.fc35.src.rpm
Repository   : fedora
Summary      : Multicast DNS repeater
URL          : https://github.com/kennylevinsen/mdns-repeater
License      : GPLv2+
Description  : mdns-repeater is a Multicast DNS repeater for Linux. Multicast DNS
             : uses the 224.0.0.251 address, which is "administratively scoped" and
             : does not leave the subnet.
             : 
             : This program re-broadcasts mDNS packets from one interface to other
             : interfaces.

This will allow mDNS resolution (and therefore, finding Chromecasts) across VLANs.
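Usage is simple, something like the following (my assumption based on the project README, not from this thread; the interface names are placeholders for your two VLAN interfaces):

```shell
sudo dnf install mdns-repeater
# Re-broadcast mDNS packets between the two named interfaces;
# -f keeps it in the foreground for initial testing.
sudo mdns-repeater -f eth0 eth1
```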


@mattdm, the thing that led me to this thread is trying to get zwavejs2mqtt working in podman on CentOS 8 Stream.

And… I’m having no luck. zwavejs can’t open the serial port no matter what I’ve tried.

I’ve got container_use_devices turned on, I’ve even set SELinux to permissive, my normal user can access the serial port as verified by stty, I’m using “--group-add keep-groups”, I’ve even set /dev/ttyUSB0 to mode 666 and tried running the container as root, all with no luck. Inside the container, stty fails with EPERM as well.

My next step is to try to turn on auditing for accesses to /dev/ttyUSB0 to see if I can get more information about what’s not happy, but if that doesn’t shed any light I’m running out of ideas.

Any thoughts about where my hangup might be?

I’m just trying on the command line right now, so no units yet. Here’s my command:

podman run --rm -it -p 8091:8091 -p 3000:3000 \
--device=/dev/serial/by-id/usb-Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_88e1d0665594eb11943836703d98b6d1-if00-port0:/dev/zwave:rw \
--mount type=volume,src=zwavejs2mqtt,target=/usr/src/app/store \
--group-add keep-groups \
--name zwavejs \
zwavejs/zwavejs2mqtt:latest

Hmmm, I think you’ve covered the bases. I guess I’d double check all of them before I’d do much more. Maybe you have, but, you know. :slight_smile:

Do you see SELinux AVCs in permissive mode when you try the access from inside the container?

I’ve checked things a few times. --privileged finally gets access to the serial port, but the device mapping doesn’t come through as specified on the command line. For some reason /dev/ttyUSB0 comes through instead of /dev/zwave being created. I don’t get why, but this is my first foray into device passthrough.

I do see some AVCs, but I think unrelated. Looks like something in the container is trying to run iptables?

type=AVC msg=audit(05/12/2022 16:00:57.596:4453) : avc:  denied  { ioctl } for  pid=39100 comm=iptables path=/var/lib/containers/storage/overlay/f9ba98f13481e19eb879bba20eff4e4b8967d141224c40ccbc014b1d3db81853/merged dev="overlay" ino=50416736 scontext=unconfined_u:system_r:iptables_t:s0-s0:c0.c1023 tcontext=system_u:object_r:container_file_t:s0:c667,c847 tclass=dir permissive=1

I dare say I may have found a bug. This is podman 4.0.2 on Stream… so not a shocker I guess. I’ll take this over to the podman Github issue tracker.

Thanks for looking, and I hope mdns-repeater works for you if you try it out!

Edit: Welp… looks like I may have been right about the bug, but it also looks like I wasn’t the first to notice.

Bugfixes

  • Fixed a bug where devices added to containers by the --device option to podman run and podman create would not be accessible within the container.

You can’t use the /dev/serial/by-id path because that is a symlink, and podman won’t create the underlying device node for it.

Use readlink to resolve the symlink and map the real device into your pod:

podman run -d --name zwavejs \
  -v /home/pi/zwavejs:/usr/src/app/store \
  --device $(readlink -f /dev/serial/by-id/usb-Silicon_Labs_Zooz_ZST10_700_Z-Wave_Stick_0001-if00-port0):/dev/zwave \
  --security-opt label=disable \
  --annotation run.oci.keep_original_groups=1 \
  -p 3000:3000 -p 8091:8091 \
  --replace zwavejs/zwavejs2mqtt

Remember to replace the device path with your own.
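To illustrate the symlink point with throwaway paths (a temp directory stands in for /dev/serial/by-id; nothing here touches real devices):

```shell
# --device needs the real device node: podman creates the node inside the
# container and does not follow symlinks for you, so resolve first.
tmp=$(mktemp -d)
touch "$tmp/ttyUSB0"                               # stands in for the real node
ln -s "$tmp/ttyUSB0" "$tmp/usb-stick-if00-port0"   # stands in for the by-id symlink
readlink -f "$tmp/usb-stick-if00-port0"            # prints the resolved real path
```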

You don’t actually need --privileged as long as you are in the correct groups.

The problems I had were due to a bug in the (then) latest version of podman, not due to symlinks or group membership. Moving to an older version of podman fixed everything for me.