Hi,
Long-time Home Assistant user. Originally via Docker on Unraid, then Hass.io on a couple of RPis (both of which suffered corrupted SD cards), and currently running in Docker on a dedicated laptop with Ubuntu 16.04.
I have a bit of everything in my current config - ZWave, Hue, Xiaomi, IP Cams, MQTT, ESP8266, and a bunch of integrations.
I want to get to a point of having things as clean as possible: get rid of as much cloud dependency as I can, simplify my integrations, and make it so that if something breaks, it’s easy to get back up and running.
I’ll have an i5 NUC becoming available soon, and have ample RPis and a rock-solid Unraid environment to choose from. I’m leaning towards the NUC for purposes of isolation, with the RPis handling some of the ancillary functions (Pi-hole, MotionEyeOS, etc.), but I have a feeling something virtualized on the Unraid box might be best for the long term.
Anyone have personal experience on this they’d be willing to share? In some ways, I feel paralyzed by all the potential options in front of me, each with its own pros & cons.
Personally, I would run on the NUC in Docker, so if something goes wrong, another container is easy to spin up and get running again. Additionally, isolating the hardware provides a layer of protection in itself. I would also consider simplifying integrations. I’m personally attempting to trim down to mostly WiFi, with a little RF to fill in the blanks (door sensors, motion sensors, etc.). My Hue seems to screw with my 2.4 GHz WiFi at times, so I am working on making it disappear. But what it really comes down to is what you are most comfortable with. You are the one who has to manage it and live with it. Do what is best for you.
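To illustrate the spin-it-back-up point: keep the config directory outside the container, and the container itself becomes disposable. A sketch along the lines of the official Docker image docs (the config path and timezone here are placeholders for your own):

```shell
# Run Home Assistant with the config stored on the host,
# so the container can be thrown away and recreated at will
docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  --network=host \
  -e TZ=Europe/London \
  -v /opt/homeassistant/config:/config \
  homeassistant/home-assistant:stable

# If the container ever breaks: `docker rm -f homeassistant`, then run the
# same command again -- your setup survives in /opt/homeassistant/config
```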
Rebuild speed, well, that’s largely Docker where possible. On a Pi, however, you have another option, and that’s rpi-clone (as long as you’re not using HassOS). It lets you snapshot your Pi to another SD card, and recovery is then just a matter of swapping the SD card over. I use it for all my Pi systems, and when my Plex server started to show signs of failure (two years or so in, with a lot of logging going on), it took less than 10 minutes to recover.
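For reference, rpi-clone clones the running system to a destination block device. A typical session might look like this (the device name sda is an assumption, so check lsblk first):

```shell
# Clone the running system to the SD card in a USB reader.
# /dev/sda is an assumption -- confirm the device name with lsblk before running,
# as the destination card gets overwritten.
sudo rpi-clone sda

# Subsequent runs only sync what has changed since the last clone,
# so they finish far faster than the initial one.
```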
Backups of your config need to be automated, or you won’t take them. You also want to ensure that your backups aren’t stored on the system you’re backing up, or they’re not really backups. I’m a fan of having at least one backup locally (on the same network, but not the same computer) and one remotely (e.g. in the cloud). I’d also highly recommend that those backups support versioning, so that you can recover more than just the most recent (possibly corrupt) copy.
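As a concrete sketch of the automated, versioned part (the paths, retention count, and rclone remote name are all assumptions to adapt; point the destination at a NAS mount for the off-machine copy):

```shell
#!/bin/sh
# backup_ha_config SRC DEST: tar SRC into DEST under a timestamped name,
# keeping only the 14 newest archives (cheap versioning), then print the
# path of the archive just written.
backup_ha_config() {
    src="$1"
    dest="$2"
    stamp=$(date +%Y%m%d-%H%M%S)
    archive="$dest/ha-config-$stamp.tar.gz"
    mkdir -p "$dest"
    tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")"
    # Prune: list newest first, delete everything past the 14th
    ls -1t "$dest"/ha-config-*.tar.gz | tail -n +15 | xargs -r rm -f
    echo "$archive"
}

# Cron it nightly, then push a copy off-site, e.g. (remote name is hypothetical):
#   30 3 * * * backup_ha_config /home/user/ha/config /mnt/backups/ha
#   rclone copy /mnt/backups/ha remote:ha-backups/
```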
Beyond that, it’s a trade-off. Reducing the number of integrations means you have fewer things likely to break, but also means that when something does break, the impact is larger. You’ll probably benefit, though, from simplifying things at the protocol level, for example moving to one of the direct Zigbee integrations (zha, deCONZ, or Zigbee2MQTT) rather than both Hue and Xiaomi. That’ll also cut down on the clutter in the 2.4 GHz spectrum.
I personally run a mix of WiFi/network, Z-Wave, Zigbee, MQTT, and even cloud based integrations. I’ve had no stability issues with any of that. Individual parts may glitch at times, but that’s the nature of technology, and Home Assistant stays stable. My primary HA instance runs in a VM (not Docker, venv install for me), and the other services run in other VMs or on Pis.