2022-03-09 23:26:14 WARNING (Recorder) [homeassistant.components.recorder.util] The system could not validate that the sqlite3 database at //config/home-assistant_v2.db was shutdown cleanly
2022-03-09 23:26:14 WARNING (Recorder) [homeassistant.components.recorder.util] Ended unfinished session (id=75 from 2022-03-09 23:24:14.615788)
2022-03-09 23:26:25 ERROR (SyncWorker_0) [homeassistant.components.dhcp] Cannot watch for dhcp packets: [Errno 1] Operation not permitted
The Home Assistant container service file has:
[Unit]
Description=Home Assistant Container
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
Wants=container-zwave.service
Wants=container-zigbee.service
After=container-zwave.service
After=container-zigbee.service
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=America/Los_Angeles WATCHDOG_USEC=5000000
Restart=on-failure
RestartSec=30
TimeoutStopSec=70
# note: this is using https://github.com/brianegge/home-assistant-sdnotify.
# If not using that, remove this and change --sdnotify=container to
# --sdnotify=conmon
#WatchdogSec=60
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=container \
--replace \
--detach \
--label "io.containers.autoupdate=registry" \
--name=homeassistant \
--volume=/var/local/homeassistant/.homeassistant:/config:Z \
--network=host \
ghcr.io/home-assistant/home-assistant:stable
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
I’m not clear on what it should have regarding the --sdnotify settings, etc.
I’ve just changed it to --sdnotify=conmon.
If I understand correctly, that’s the right choice unless I’ve set up something like home-assistant-sdnotify inside the Home Assistant container.
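After editing, the unit needs a reload and restart to pick up the change; something like this (assuming it’s a user unit named container-homeassistant.service, which is a guess on my part):
systemctl --user daemon-reload
systemctl --user restart container-homeassistant.service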
Hmmm. None of those should be fatal.
The warnings are just because of the previous crash.
For the DHCP error: I ended up disabling the DHCP integration in the config (done by removing the default_config: line and listing everything you do want manually). According to DHCP Discovery - Operation not permitted · Issue #62188 · home-assistant/core · GitHub, adding --cap-add=CAP_NET_RAW should fix that. (But note that there have been security vulnerabilities in the past that used this capability to break out of containers. Theoretically that shouldn’t happen… but DHCP discovery doesn’t really add much in my opinion, so I just left it disabled.)
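For reference, replacing default_config: with an explicit list looks roughly like this (a sketch; which integrations you keep is up to you, the ones shown are just illustrative, and the point is that dhcp: is simply left out):
config:
frontend:
history:
logbook:
mobile_app:
ssdp:
zeroconf:
automation: !include automations.yaml
The alternative is to keep default_config: and add --cap-add=CAP_NET_RAW to the podman run line in the unit above.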
I wonder if it’s just taking longer to start on the RPi than systemd defaults to? Try putting something like TimeoutStartSec=600 in the systemd config (on the line above TimeoutStopSec).
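In the unit above, that would land in the [Service] section like so (a sketch; 600 seconds is just a generous guess):
Restart=on-failure
RestartSec=30
TimeoutStartSec=600
TimeoutStopSec=70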
For sdnotify/conmon: without the watchdog, those options should look just like those for the Zwave, Mosquitto, and Zigbee containers. Basically the whole podman command line down to --name should be identical.
Ok, by doing --sdnotify=conmon it seems to be starting and staying up!
I’ve figured out how to get to the zigbee UI, how do I get to the zwave UI so I can start discovering my devices etc?
Same IP address, port 8091.
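If the host is running firewalld (an assumption on my part; adjust for whatever firewall you use), opening that port would look something like:
firewall-cmd --add-port=8091/tcp
firewall-cmd --permanent --add-port=8091/tcp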
Ok, opened the port, am connected. Trying to add some devices… It shows me:
This probably doesn’t seem optimal. What next?
Never mind, I figured out from here: Z-wave js network key
Hmm… it doesn’t seem to be doing MQTT discovery in the same way for zwave… I’ve paired the devices… but don’t see them in HA.
Ooh, I was going to say “this is getting into general setup questions”, but actually there is something key here which is specific to my setup! It’s the approach recommended by others on this forum, but it is a little surprising: despite the name “zwavejs2mqtt”, the setup doesn’t actually use MQTT for Z-Wave.
Instead, it uses a dedicated websockets connection on port 3000. (See -p 3000:3000 in the config. Note that it doesn’t require the mosquitto container, and doesn’t enable connections to the container localhost network with --network=slirp4netns:allow_host_loopback=true the way the zigbee container config does.)
So, whatcha want to do here is: in the Settings GUI for zwave, find “Home Assistant”, and make sure WS Server is On, and set to port 3000. Optionally, you can disable MQTT Gateway — because we don’t have the network set up for it, that’s not working anyway, and despite what it says about “use only as control panel”, the Home Assistant websockets server takes care of that.
Once you have done that, go back to Home Assistant and enable the Z-Wave JS integration. That should make everything work.
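When the Z-Wave JS integration asks for a server URL, it should point at that websockets port. A sketch (ws://localhost:3000 assumes the Home Assistant container is on the host network, as in the unit above, and that the zwave container publishes port 3000 on the same host; adjust the hostname if your layout differs):
ws://localhost:3000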
Great will give it a try tomorrow. Yeah once everything is being seen in home assistant then I’m good with the rest of setup. I just don’t really have any understanding of how the containers fit together.
Is it possible to use MQTT on the zwave stuff? I kinda like the idea of having one interface for all the devices
Does the app work for you? When I run it I get a 400 Bad Request error. Similar to:
This seems to be because I have a proxy on my network, and even though I list “lan” in the proxy exception domains, somehow the app is using the proxy anyway.
Adding this to my configuration.yaml fixed the issue:
http:
use_x_forwarded_for: true
trusted_proxies:
- 10.x.x.x
- 10.x.x.x
- fdxx:xxxx:xxxx:1::x
- fdxx:xxxx:xxxx:1::x
It should be, but I haven’t tried it yet. In theory, what you would do is add --network=slirp4netns:allow_host_loopback=true to the config (so it can access the MQTT broker on the localhost), remove -p 3000:3000, and turn off the “Home Assistant: WS Server” option. I think you’d configure zwavejs2mqtt with tls://localhost for the MQTT server, but I haven’t tested that; see https://zwave-js.github.io/zwavejs2mqtt/#/usage/setup?id=mqtt.
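Putting that together, the change to the podman run flags in container-zwave.service would look roughly like this (an untested sketch, per the above):
# added, so the container can reach the broker on the host loopback:
--network=slirp4netns:allow_host_loopback=true \
# removed, since the websockets server would be turned off:
# -p 3000:3000 \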
Oh! Yes, I did this too and forgot about it. Thanks for the note — I’ll add it to the instructions tomorrow.
I think
http:
use_x_forwarded_for: true
trusted_proxies:
- 10.0.2.2
should be sufficient, but I'll need to test to be sure.
The device my proxy is on has multiple IP addresses so I included all of them that’s why I have 4 entries.
If you’re using the nginx config from above, what’s happening is:
- For rootless containers, podman uses slirp4netns to set up a VPN-like network.
- Within that network, 10.0.2.2 is the “virtual router”. It’s an oversimplification, but for most practical purposes you can treat this like the 127.0.0.1 local-loopback address on a regular machine.
- Correspondingly, you can see that the nginx.conf redirects incoming requests (via proxy_pass) to http://10.0.2.2:8123, which is the port that Home Assistant is running on.
- In the above setup without a firewall, you can connect from an external system either to the nginx proxy or to Home Assistant directly: the former on https://hostname:8443 and the latter on http://hostname:8123. This is useful for debugging, but the idea is to a) forward 443 to 8443 with the firewall, and b) block all external connections to ports in the 8000 range. That way, nginx is the only way in from the outside.
- The packets coming to Home Assistant from outside keep their “real” source addresses. But the ones that come from the nginx proxy appear to come from the container router interface, that 10.0.2.2 “loopback”.
- That can cause confusion, which is why
trusted_proxies:
  - 10.0.2.2
is needed, and why the other entries shouldn’t matter.
Note that if you have 10.0.2.x as a possible address range outside of your container setup (like, if you have 10.0.0.0/8 as your local network!), you’ll want to change the podman network config to use something else. (And I’m not sure offhand how to do that.)
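If I’m reading the podman docs right, slirp4netns takes a cidr= option that would do this; a sketch (untested here, and 10.0.3.0/24 is just an example range):
--network=slirp4netns:allow_host_loopback=true,cidr=10.0.3.0/24
Presumably the “virtual router” address discussed above would then move to the .2 of whatever range you pick.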
Nah, I don’t have the nginx part at all. I just have a squid proxy on my home network to enforce various policies for my kids. Home Assistant is not available externally; I use a WireGuard VPN to access the HA instance externally if needed (almost never needed).
Minor update: edited the config for the certbot container so the secret certbot-creds is mounted in with permission 0400. (Read by root only.) Somewhere in April, the certbot container started checking for that and bailing out.
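Assuming the credentials are passed in as a podman secret (which is how I read the post above), that permission change would look something like this on the certbot container’s podman run line (a sketch):
--secret certbot-creds,type=mount,mode=0400 \
# mounts the secret at /run/secrets/certbot-creds, readable by root only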
Wondering what is the deal with Digital Ocean? Why do you require it?
It doesn’t need to be Digital Ocean, but it does need to be a DNS provider which offers an API. And, it turns out that this is a service Digital Ocean offers for free. I’ll edit the main text to clarify.
And the DNS approach has several advantages:
- You don’t need to punch an incoming hole through your home firewall for Let’s Encrypt to work.
- It simplifies the nginx config, since you don’t have to deal with putting the challenge responses there.
- You can get a wildcard cert, covering *.home.example.org.
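For illustration, the DNS-challenge request looks roughly like this (a sketch using the certbot-dns-digitalocean plugin; the domain is a placeholder, and the credentials path assumes the certbot-creds secret mount mentioned above, not necessarily the exact command this setup uses):
certbot certonly \
  --dns-digitalocean \
  --dns-digitalocean-credentials /run/secrets/certbot-creds \
  -d home.example.org -d '*.home.example.org'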
Is there any actual cumulative description of this whole setup or just this discussion?
The first post is meant to be that! What would make that more clear? What’s missing?