Work in progress: configuration for running Home Assistant in containers with systemd and podman on Fedora IoT

Does the app work for you? When I run it I get a 400 Bad Request error. Similar to:

This seems to be because I have a proxy on my network, and even though I list “lan” in the proxy exception domains, the app is somehow using the proxy anyway.

Adding this to my configuration.yaml fixed the issue:

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 10.x.x.x
    - 10.x.x.x
    - fdxx:xxxx:xxxx:1::x
    - fdxx:xxxx:xxxx:1::x

It should be, but I haven’t tried it yet. In theory, what you would do is add

                              --network=slirp4netns:allow_host_loopback=true \

to the config (so it can access the MQTT broker on the localhost), and remove

                              -p 3000:3000 \

and turn off the “Home Assistant: WS Server” option. I think you’d configure zwave2mqtt with tls://localhost for the MQTT server, but I haven’t tested that — see https://zwave-js.github.io/zwavejs2mqtt/#/usage/setup?id=mqtt.
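
For reference, here’s a rough sketch of what the zwavejs2mqtt container invocation might look like with that change. The container name, image tag, device path, and store location are assumptions, not the exact values from the setup above:

    # With allow_host_loopback=true, the host's 127.0.0.1 is reachable from inside
    # the container at 10.0.2.2, so an MQTT broker bound to the host loopback
    # should be reachable there. The -p 3000:3000 mapping for the WS server is gone.
    podman run -d --name zwavejs2mqtt \
        --network=slirp4netns:allow_host_loopback=true \
        --device=/dev/ttyUSB0 \
        -p 8091:8091 \
        -v /home/homeassistant/zwavejs2mqtt:/usr/src/app/store:Z \
        docker.io/zwavejs/zwavejs2mqtt:latest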

Oh! Yes, I did this too and forgot about it. Thanks for the note — I’ll add it to the instructions tomorrow.

I think

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 10.0.2.2


should be sufficient, but I'll need to test to be sure.

The device my proxy is on has multiple IP addresses, so I included all of them; that’s why I have four entries.

If you’re using the nginx config from above, what’s happening is:

  • For rootless containers, podman uses slirp4netns to set up a VPN-like network.
  • Within that network, 10.0.2.2 is the “virtual router”. That’s an oversimplification, but for most practical purposes you can treat it like the 127.0.0.1 local-loopback address on a regular machine.
  • Correspondingly, you can see that the nginx.conf forwards incoming requests (via proxy_pass) to http://10.0.2.2:8123, the port Home Assistant is running on. (There’s a sketch of that part of the config after this list.)
  • In the above setup, without a firewall, you can connect from an external system either to the nginx proxy or to Home Assistant directly: the former on https://hostname:8443 and the latter on http://hostname:8123. This is useful for debugging, but the idea is to a) forward 443 to 8443 with the firewall, and b) block all external connections to ports in the 8000 range. That way, nginx is the only way in from the outside.
  • The packets coming to Home Assistant from outside are given their “real” source addresses, but the ones that come from the nginx proxy appear to come from the container router interface: that 10.0.2.2 “loopback”.
  • That can cause confusion, which is why
  trusted_proxies:
    - 10.0.2.2

needs to be in the Home Assistant config, and why I don’t think the other addresses should matter.
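
To make that concrete, here’s a rough sketch of the relevant part of such an nginx.conf. The server name, ports, and certificate paths are placeholders rather than the exact values from the config above:

    server {
        listen 8443 ssl;
        server_name ha.home.example.org;

        ssl_certificate     /etc/letsencrypt/live/home.example.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/home.example.org/privkey.pem;

        location / {
            # Forward to Home Assistant. From HA's point of view, these requests
            # arrive from 10.0.2.2, the slirp4netns "virtual router" address.
            proxy_pass http://10.0.2.2:8123;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # Needed for Home Assistant's websocket connections:
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

The use_x_forwarded_for / trusted_proxies settings shown earlier are what let Home Assistant recover the real client address from the X-Forwarded-For header the proxy adds.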

Note that if you have 10.0.2.x as a possible address range outside of your container setup (like, if you have 10.0.0.0/8 as your local network!), you’ll want to change the podman network config to use something else. (And I’m not sure offhand how to do that.)
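
Looking at the podman docs, I believe the slirp4netns network mode takes a cidr= option, so something like this on the container’s podman run line is probably the place to start (the range here is just an example, and I haven’t tested it):

                              --network=slirp4netns:allow_host_loopback=true,cidr=10.192.0.0/24 \

If I understand slirp4netns correctly, the host/“virtual router” address is always the .2 of whatever range you pick, so the proxy_pass target and the trusted_proxies entry would need to change to match.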

Nah, I don’t have the nginx part at all. I just have a Squid proxy on my home network to enforce various policies for my kids. Home Assistant is not available externally; I use a WireGuard VPN to access the HA instance externally if needed (which is almost never).

Minor update: I edited the config for the certbot container so that the certbot-creds secret is mounted in with permission 0400 (readable by root only). Somewhere in April, the certbot container started checking for that and bailing out.
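
In other words, the credentials have to end up root-read-only inside the container. Roughly, depending on how you pass them in (the names and paths here are assumptions, not the exact ones from the setup above):

    # If the creds file is bind-mounted from the host, restrict it there:
    chmod 0400 /home/homeassistant/certbot/certbot-creds.ini

    # If it's passed as a podman secret instead, the mode option on --secret
    # should accomplish the same thing:
    #   --secret source=certbot-creds,type=mount,mode=0400 \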

Wondering what the deal is with Digital Ocean. Why do you require it?

It doesn’t need to be Digital Ocean, but it does need to be a DNS provider which offers an API. And, it turns out that this is a service Digital Ocean offers for free. I’ll edit the main text to clarify.

And the DNS approach has several advantages:

  1. You don’t need to punch an incoming hole through your home firewall for Let’s Encrypt to work.
  2. It simplifies the nginx config, since you don’t have to deal with putting the challenge responses there.
  3. You can get a wildcard cert, covering *.home.example.org.
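
Concretely, with the certbot-dns-digitalocean plugin, the request for a wildcard cert looks something like this. The domain and credentials path are placeholders, and in the setup above it runs from the certbot container rather than directly on the host:

    # DNS-01 challenge via the Digital Ocean API; no inbound port needs to be open.
    # certbot-creds.ini holds the API token (a dns_digitalocean_token = ... line).
    certbot certonly \
        --dns-digitalocean \
        --dns-digitalocean-credentials /etc/letsencrypt/certbot-creds.ini \
        -d home.example.org -d '*.home.example.org'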

Is there any actual cumulative description of this whole setup or just this discussion?

The first post is meant to be that! What would make that more clear? What’s missing?

I had to use

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 127.0.0.1

for this to work.

I have the nginx reverse proxy setup the same as you, and it uses 10.0.2.2 there. Did you confirm that using 10.0.2.2 in the trusted_proxies works on your setup?

I rebooted my machine and realized that homeassistant doesn’t start automatically; I have to log in as the “homeassistant” user and start it with systemctl --user start container-homeassistant.

What’s the right way to make this happen automatically at boot time?

Oh, good point. Use enable instead of start. I’ll add a section on this.
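
For the record, as a user service this is roughly what’s needed so the container comes up at boot without anyone logging in (assuming the unit name above, and that lingering isn’t already turned on):

    # As the homeassistant user: enable the unit instead of just starting it.
    systemctl --user enable --now container-homeassistant

    # Then, as root (or via sudo), let that user's services start at boot
    # without a login session:
    loginctl enable-linger homeassistant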

Yes, it should be the container network, not localhost, although if you are using --net=host then localhost will work too.

Am I the only one getting a systemd timeout when starting the homeassistant container?

May 01 16:07:50 server systemd[930]: container-homeassistant.service: start operation timed out. Terminating.

If I set the systemd service to Type=simple and comment out the watchdog, then it works.
I’m using F36.

Do you have the watchdog timer from brianegge/home-assistant-sdnotify (the systemd notify service for Home Assistant, on GitHub) installed? I did this by starting without it, installing it via HACS, and then changing the config. I should provide a version of the systemd unit file that isn’t set up for that. (Or, possibly, figure out instructions for installing it beforehand?)
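
Until then, here’s a rough sketch of the [Service] change for running without the sdnotify watchdog. It assumes the unit uses Type=notify with a WatchdogSec setting, as mine does; the exact directives in your unit may differ:

    [Service]
    # Without home-assistant-sdnotify inside the container, systemd never gets a
    # READY=1 notification or watchdog pings, so don't wait for them:
    Type=simple
    # Comment out (or remove) the notify/watchdog settings the unit had, e.g.:
    #Type=notify
    #WatchdogSec=...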

Ahh, no, I didn’t install it; for now I didn’t see the need for it. That’s probably the issue then, thanks!
I’ll maybe have a look when I can get RF devices to work properly.

I seem to have problems when running podman and the HA container.

I recently switched from docker-ce to podman, and I have a problem with HA dying on a timeout when started from systemd.

[cont-finish.d] done.
[s6-finish] waiting for services.
[finish] process exit code 0
s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

This does not happen if I do podman run directly, so it’s related to the systemd service at least.

Any suggestions?

Is this using my setup above, or something else? I think we’ll need more logs to figure out what’s going on.