Parallel redundant installs?

Has anyone developed a process to run two parallel instances on separate hardware, with the ability to fail over from a primary instance to a secondary?

My thought is to have two devices (two HA Greens, two RPi 5s, or two mini PCs), each running HA. I would update the primary, and the changes would be backed up and copied to the second machine. If the primary fails, the secondary would start right up and become the primary.

Anyone ever try this?

There have been a few threads about high-availability / 2-node clusters, but AFAIK nobody has actually made this work. There's no functionality in Home Assistant for it, and you would run into issues with anything using a controller directly connected to the primary node (Zigbee or Z-wave in particular).

It would be easier, though still non-trivial, if your mesh networks use Ethernet/WiFi-based controllers that are not directly connected to the Home Assistant instance.

In truth, it would be a lot of hassle for very little gain. Home Assistant is in general very reliable, and if you're worried about hardware failures, well, running twice the hardware just doubles your chances of having one.

The better way forward is to take regular backups and, at most, keep spare hardware around, ready to be set up quickly and restored to. I've done that before, and it took less than a day to get things up and running again.

I know @Quindor over at IntermitTech did a series on something similar, but it's been a while… and I haven't kept up with any progress.

One approach to this is to set up two hypervisors, install HA as a virtual machine, and then set up replication between them.

Another approach is to do the same, but instead of replication you run the workload on shared storage and set up high availability on top of it.

In the first scenario you will get one VM and one replicated VM. In the second you will only have one VM.

If you use Zigbee, both setups would need an Ethernet-based coordinator.

Depending on the RPO, you will probably lose a few minutes of data.

Edit: There are probably some cool ways to do this containerized as well, but that's not my strength.
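Taking Proxmox as the hypervisor purely for illustration, the second (shared-storage) approach boils down to putting the VM's disks somewhere both nodes can reach and registering the VM with the HA stack. A minimal sketch, where the storage name, NFS server, export path, and VM ID 100 are all placeholders:

```
# Add shared NFS storage that both cluster nodes can mount
# ("nas", the server IP and the export path are placeholders).
pvesm add nfs nas --server 192.168.1.10 --export /export/vms --content images

# Register the VM with the Proxmox HA manager so a surviving
# node restarts it if its current host dies.
ha-manager add vm:100 --state started
```

With shared storage there is only the one VM, so the RPO concern mostly disappears; the trade-off is that the storage box itself becomes a single point of failure.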

I think that project is sitting under his Ultimate Desk.

As you're hearing, an RPi is not up to the task of high availability (which is what you're asking for).

@fleskefjes has the right idea, but I also agree with the point above that it would be a lot of hassle for very little gain.

With good DR procedures and occasional testing of your backups, you can have your restore back on iron or a VM in less than 30 minutes, without all the hassle of high availability. (Yes, I've actually done both many times; I keep a spare M.2 with a recent HA build image on it for just such an occasion…)
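For the spare-image part, pre-writing a recent HAOS image to the spare disk is a one-liner; the image filename and target device below are placeholders, so double-check the device node before running dd:

```
# Write a recent Home Assistant OS image to the spare M.2/USB disk.
# Replace the image name with whatever you downloaded, and /dev/sdX
# with the actual spare disk - dd will happily overwrite anything.
xz -dc haos_generic-x86-64-12.1.img.xz | sudo dd of=/dev/sdX bs=4M status=progress
```

Boot from that, restore the latest backup, and you're back in business.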

Do you REALLY need five-nines uptime? (Hint: most corporations don't either, and it's very expensive.)

It is really not the Pis that are the issue, but rather that much of the hardware and many of the protocols are one-to-one connections.
A USB device is always one-to-one, so it will always be a single point of failure.
The Ethernet versions of Zigbee/Z-wave will also often be one-to-one, and therefore again a single point of failure.

The Ethernet version of Zigbee/Z-wave removes the direct attachment to the HA machine, but running two HAs will often still not keep the two instances in sync.
Adding a broker in between, like MQTT, can make it one-to-many, but that adds another layer that can itself be a single point of failure.

Matter is trying to tackle one of these single points of failure, but Matter is still a thing of the near future.

Ok,

How about this:

I have HA running on a machine on my IoT network. I run an isolated subnet that houses only a second machine, with just enough ports open to push backups from the first HA machine onto the backup machine. That way the backup system does not try connecting to devices, but it has a local copy of my latest backup. If I have a hardware failure on the primary machine, I restore the latest backup to the backup machine, swap peripherals and LAN cables, and start running. I would assign IP addresses by LAN port, so minimal settings adjustments.
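A minimal sketch of that push, assuming SSH is the one port open between the subnets (the host, user, and paths are placeholders; on HAOS the backups live in /backup, and you would run something like this from the SSH add-on, or pull from the backup side instead):

```
# On the primary: copy any new backup archives to the isolated
# backup machine over SSH. Host, user, and destination path are
# placeholders for whatever your isolated subnet uses.
rsync -av --ignore-existing /backup/ backup@10.0.50.2:/srv/ha-backups/
```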

If you want replication, set it up at the hypervisor level, not at the HA Green level.

Also important to think about: replication does not equal backup. Isolating a backup is totally fine (and recommended); a replica, not so much.

I was thinking of backing up to my backup machine, so all I have to do is restore using one of the backups I know to be good.

As I've discussed in a previous thread, the use case is totally valid, but you don't want to do that on appliances like the Green. Get something that can run a hypervisor, and replicate and high-availability your heart out. If you want to be hardware-independent, you need to virtualize.

I use a Proxmox cluster and regularly replicate across 2 nodes. My Z-wave and Zigbee controllers are both TCP-connected. I've never had an automated HA failover, but I've migrated the VM between nodes in the cluster for host maintenance without a hitch, and I would expect an automated migration to go just as well. In my case the host nodes are different architectures (AMD & Intel), so no "live" migration is supported; Proxmox simply stops the VM and starts it on the other node. For Home Assistant (or the other VMs I have set up this way, like Mosquitto and AdGuard) that has never been a problem.

If you have host nodes of the same architecture and sufficient replication bandwidth (or shared NAS storage), live migration is supported by Proxmox without missing a beat.
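For reference, the manual migration described above is a one-liner on the Proxmox host; VM ID 100 and the node name are hypothetical:

```
# Move the Home Assistant VM to the other cluster node.
# --online live-migrates a running VM; --with-local-disks is
# needed when the disks sit on local (replicated) storage
# rather than shared storage.
qm migrate 100 pve2 --online --with-local-disks
```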

Hahaha, not even! And like @digrediddew mentioned, I am indeed running what I called HA-HA, and I'm even revamping the cluster soon. In this case HA-HA stands for Highly Available - Home Assistant or, if you want, Home Assistant - Highly Available.

My method is fairly simple, though, and doesn't require shared storage or massively expensive enterprise technologies. Create a Proxmox cluster with 3 nodes (it can be 2 more powerful ones plus one less powerful machine just as a quorum node) and set up ZFS replication within it. Every 15 minutes a snapshot is taken of your HA VM and made available on both nodes; if one node fails for whatever reason, the other node automatically boots the HA VM again, and worst case it has 15-minute-older data, which is generally not an issue for something that regulates your house.
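A rough sketch of the replication half on the CLI, assuming the HA VM is ID 100 and the second node is called pve2 (both hypothetical):

```
# Replicate VM 100 to node pve2 every 15 minutes. Requires
# ZFS-backed storage on both nodes; only the deltas between
# snapshots are sent after the first run.
pvesr create-local-job 100-0 pve2 --schedule "*/15"
```

Combined with an HA resource entry for the VM (like the ha-manager line sketched earlier in the thread), the surviving node boots the VM from the last replicated snapshot, which is where the worst-case 15 minutes of lost data comes from.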

Now, this doesn't cover all situations. I don't have Zigbee or Z-wave devices, for instance; everything here is WiFi-connected, so I don't have to deal with Zigbee sticks, mesh network controllers and such. So it might not be a great solution for others.

I run this setup with Zigbee and Z-wave controllers connected via TCP to a Raspberry Pi 3B. Works like a champ. I have 2 RPis with this configuration (production and development) and both have been very reliable. One RPi also acts as the quorum node for Proxmox.
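For reference, the usual way to get a quorum vote out of a Raspberry Pi without making it a full cluster node is Proxmox's QDevice mechanism; a minimal sketch, with the Pi's address as a placeholder:

```
# On the Raspberry Pi (plain Raspberry Pi OS / Debian):
apt install corosync-qnetd

# On the Proxmox nodes: install the client side, then register
# the Pi as the external quorum device (run setup on one node).
apt install corosync-qdevice
pvecm qdevice setup 192.168.1.20
```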

What controllers are you using? That may work for me.

A HomeSeer Z-Net for Z-wave and a ConBee II for Zigbee. The Z-Net is an RPi 3B with a Z-wave HAT daughter card. The ConBee II is just plugged into the RPi's USB port. Any USB Z-wave or Zigbee stick can be plugged into an RPi 3B or 4 and configured with ser2net to connect via TCP.
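As a rough sketch of the ser2net part, with ser2net 4.x on the Pi you would expose the stick over TCP like this (device path, baud rate, and port are assumptions; adjust for your adapter):

```
# Append a connection for the stick to ser2net's config and
# restart it. /dev/ttyUSB0, 115200 baud, and port 3333 are all
# placeholders for your particular adapter.
cat <<'EOF' >> /etc/ser2net.yaml
connection: &zigbee
  accepter: tcp,3333
  connector: serialdev,/dev/ttyUSB0,115200n81,local
EOF
systemctl restart ser2net
```

On the Home Assistant side, the serial port then becomes something like socket://<pi-ip>:3333 instead of a local device path.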