I want to have two Home Assistant instances that link together, one at my home and one at university. I haven't set it up yet, but I can build and test them working together, as I have a virtual machine on my PC running Hass.io.
I want all the items on my university HA to connect to the home HA. I have read something about a master/slave configuration, but have no idea how to set that up. I have also seen bits about MQTT brokers, but I have no clue where to even start with that.
I have tried using a custom component called “Remote Homeassistant”, but it didn’t do anything.
I have given up on this and have put effort into getting MQTT working instead (see below).
How did you link them?
Did you configure only one MQTT broker for both instances?
Can you show the MQTT part of configuration.yaml for both the primary and secondary HA, please?
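For anyone wondering the same thing: a common pattern (a sketch with example values, not necessarily what was used in this thread) is to point both instances at a single broker, typically the Mosquitto add-on on the main instance, and have the remote instance publish its states with `mqtt_statestream`. Broker address and credentials below are placeholders:

```yaml
# Both instances — configuration.yaml
mqtt:
  broker: 192.168.1.10        # example: the one shared broker (Mosquitto add-on on the main HA)
  username: mqtt_user          # hypothetical credentials
  password: !secret mqtt_password

# Remote (university) instance only — mirror all state changes to the broker
mqtt_statestream:
  base_topic: university
```

The main instance can then subscribe to topics under `university/...` (for example with MQTT sensors) to see the remote entities.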
Very simple to set up, and it works very well so far; this was a big missing feature in Home Assistant.
I hope it will become an official component!
My use case:
My house is in two parts, separated by the garden. I am using a Z-Wave network only, and the range is too short to get a reliably working network in both parts of the house.
Part 1 of my house (Main HA):
Home Assistant running in a Debian VM with Docker and Hass.io.
Aeotec Z-Stick Gen5
Around 20 Z-Wave nodes
Part 2 (Remote HA):
Home Assistant running on an Orange Pi +2E with Docker and Hass.io.
Z-Wave.Me stick
Around 10 Z-Wave nodes
Now I can connect them very easily and get all my remote HA entities in my main HA. Big thanks to the component developer!
I was using usbip between a VM (running Hass.io) and an RPi for about six months, relatively successfully, but every reboot of Hass.io required rebooting the RPi too, and every so often it needed some kind of tinkering to get working again. After a recent update it finally gave up, and I couldn’t get any combination of things to work. I also tried socat, ser2net, etc.
I’m now using home-assistant-remote and I can’t believe I wasted so much time fiddling with usbip!
Fresh install of Hass.io on the RPi, import the Z-Wave config from the primary Home Assistant instance, then create a long-lived access token.
On the primary instance, drop the component into custom_components (or create that folder if it doesn’t exist), then add your config.
I had to add climate and light to the config, because without Z-Wave on the primary I have no light/climate devices, and the primary wouldn’t know to load those components otherwise.
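For reference, the steps above end up as something like the following on the primary instance (host, port, and secret name are example values, not the poster's actual config):

```yaml
# Primary instance — configuration.yaml
remote_homeassistant:
  instances:
    - host: 192.168.1.50                  # example: IP of the secondary (RPi) instance
      port: 8123
      access_token: !secret remote_token  # the long-lived token created on the secondary
      load_components:                    # force-load domains that only exist remotely
        - climate
        - light
```

The `load_components` list is what makes the primary load the `climate` and `light` integrations even though it has no such devices of its own.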
The biggest benefit to this is that a reboot of the primary doesn’t mean having to reload the whole Z-Wave network, and conversely, a reboot of the RPi means the entities show as unavailable on the primary, but they spring back to life without any manual intervention once the secondary instance is back up.
Hi,
I have one RPi 3B+ with a RaZberry Z-Wave GPIO card that I now want to connect to a new HA instance that I’m running on a server.
I have installed HACS and home-assistant-remote on the server, and the config looks like this:
If you have remote domains (e.g. switch) that are not loaded on the master instance, you need to add a dummy entry on the master, otherwise you'll get a `Call service failed` error.
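A dummy entry can be as simple as an empty domain key in the master's configuration.yaml, which loads the integration without configuring any platforms (sketch, assuming switch and light are the remote-only domains):

```yaml
# Master instance — configuration.yaml
# Empty entries so the master loads domains that only exist on the remote instance
switch:
light:
```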
Yeah, it’s easy to get caught out by this. When I was testing, I had it working, but then removed the only light from the main instance, and had to re-read the docs…
Once I get the automation logic figured out, I might do a bit of a write up on the redundancy side of my setup, but briefly, thanks to home-assistant-remote, connecting to secondary instances and failing over to backups is working very well!
I started thinking about redundancy when I realised you can back up the Aeotec Z-Stick. Due to the number of Z-Wave devices I now have, I bought a spare, thinking I’d apply the config and keep it handy in case it was ever needed. Then I realised I might as well keep it plugged into a spare Raspberry Pi, ready to go.
My setup is a pair of NUCs running Windows Server with the Hyper-V role, with a VM running Hass.io. This VM is set up to fail over to the other NUC hypervisor.
I then have two Raspberry Pis running Hass.io (a snapshot of the first loaded onto the second) and two Z-Sticks (a backup of the first loaded onto the second). Both Pis have the same IP set (obviously being careful never to have both on simultaneously). Both are connected to Wi-Fi power sockets (not Z-Wave, for obvious reasons…) so they can be powered on (and off, if required) from the main Hass.io instance.
Because I’ve loaded a snapshot of one RPi’s Hass.io onto the second, the long-lived token is the same across both, so if I shut one Pi down and power up the other, all the Z-Wave devices come back online with no further interaction. (There are rare occasions, e.g. if the RPi has been down for a prolonged period, where I’d need to reboot the main instance even with one RPi; that remains the same.)
Next step is to work out some automations to check Z-Wave activity/network health and fail over automatically!
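Such an automation might look something like this (a rough sketch only; the health-check sensor and socket entity names below are made up for illustration and are not from the setup described above):

```yaml
# Hypothetical auto-failover: if the primary Pi looks dead for 5 minutes,
# cut its Wi-Fi power socket and power up the backup Pi instead.
automation:
  - alias: "Failover to backup Z-Wave Pi"
    trigger:
      - platform: state
        entity_id: binary_sensor.primary_pi_alive   # hypothetical health-check sensor
        to: "off"
        for: "00:05:00"
    action:
      - service: switch.turn_off
        entity_id: switch.primary_pi_power          # Wi-Fi power socket of Pi 1
      - delay: "00:00:30"                           # let the first Pi fully power down
      - service: switch.turn_on
        entity_id: switch.backup_pi_power           # Wi-Fi power socket of Pi 2
```

Because both Pis share the same IP and long-lived token (see above), the main instance should pick the backup up without any config change.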