Connect two Home Assistant instances together

UPDATE:

I have the Mosquitto MQTT broker set up and both instances linked to it.

I also have the “secondary” HA instance state streaming to it.

I am just not sure how to get the main instance to read the states and also be able to control them

You’ve read the instructions I presume?

Yeah, I read all the instructions. I've given up on that approach now, and I feel like I'm quite close to getting MQTT working.

I've now got it working using MQTT.

An MQTT light on the main HA listens to “homeassistant/light/[name]/state”.
It also broadcasts to “homeassistant/light/[name]/Statechange”.

The secondary HA has an automation which listens to “homeassistant/light/[name]/Statechange”
and then toggles the light.
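
Roughly, the two configs look like this. This is only a sketch of the setup described above; “kitchen” and the entity id light.kitchen are placeholder names, not from the actual setup:

```yaml
# Main HA instance: an MQTT light that mirrors the remote one.
# State updates arrive on the state topic; turning it on/off
# publishes to the Statechange topic.
light:
  - platform: mqtt
    name: "Kitchen (remote)"
    state_topic: "homeassistant/light/kitchen/state"
    command_topic: "homeassistant/light/kitchen/Statechange"

# Secondary HA instance (separate configuration.yaml): toggle the
# real light whenever a command arrives on the Statechange topic.
automation:
  - alias: "Toggle kitchen from main HA"
    trigger:
      - platform: mqtt
        topic: "homeassistant/light/kitchen/Statechange"
    action:
      - service: light.toggle
        entity_id: light.kitchen
```

Each light needs its own pair of topics, which is why this doesn't scale well beyond a handful of entities.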

This would be an issue if I had a lot of lights/switches but I only have a few so isn’t too bad.

However, if anyone has a better alternative, please let me know

How did you link them?
Did you configure only one MQTT broker for both instances?
Can you show the mqtt part of configuration.yaml for both the primary and secondary HA, please?

Could you provide some help on the remote component?
I set it up on my master and issued a long-lived token from the slave, but it won't connect.

Do I need to enable websocket_api: on the slave?
Am I using the correct token, or do I need to get another one?

Here’s my code:

remote_homeassistant:
  instances:
  - host: local.ip # no https:// and no port
    port: 8124
    secure: true
    access_token: !secret long_lived_access_token_issued_from_slave
    # api_password: !secret ha_remote_password
    entity_prefix: "prefix_local_"

Also, could this work if the two Home Assistant instances are in different locations, through their DuckDNS addresses?

NO WAY!
I will try it ASAP, and if it works well it will be damn good!

Very simple to set up, and it works very well so far; this was a big missing feature in Home Assistant.
I hope it becomes an official component!

My use case :
My house is in two parts, separated by the garden. I am using a Z-Wave network only, and the range is too short to get a reliable network working in both parts of the house.

Part 1 of my house (Main HA):

  • Home Assistant running in a Debian VM with Docker and Hass.io.
  • Aeotec Z-Stick Gen5
  • Around 20 Z-Wave nodes

Part 2 (Remote HA):

  • Home Assistant running on an Orange Pi +2E with Docker and Hass.io.
  • Z-Wave.Me stick
  • Around 10 Z-Wave nodes

Now I can connect them very easily and get all my remote HA entities in my main HA. Big thanks to the component developer!

Can you explain how you did it?
Thank you

How would I do this if the instances were separated and used Nabu Casa rather than a static IP or DuckDNS?

I was using usbip on a VM (running hassio) and RPI for about 6 months, relatively successfully, but every reboot of hassio needed the rpi to reboot too, and every few occasions would need some kind of tinkering to make it work again. After a recent update, it finally gave up, and I couldn’t get any combinations of things to work. Also tried socat, ser2net etc.

I’m now using home-assistant-remote and I can’t believe I wasted so much time fiddling with usbip!

Fresh install of hassio on the rpi, import the zwave config from the primary hass instance, then create a long lived token.

On the primary instance, drop the component into custom_components (or create that folder if it doesn’t exist), then add your config.

remote_homeassistant:
  instances:
  - host: !secret secondary_ip
    port: 8123
    access_token: !secret secondary_token
    entity_prefix: "usb_"
    include:
      domains:
      - sensor
      - switch
      - group
      - light
      - climate
    
climate:

light:

I had to add climate and light into the config, because without zwave on the primary, I have no lights/climate devices, and the primary wouldn’t know to load these components without this.

The biggest benefit to this is that a reboot of the primary doesn’t mean having to reload the whole zwave network, and conversely, a reboot of the rpi means the entities show as unavailable on the primary, but they spring back to life without any manual intervention once the secondary instance is back up.

Hi,
I have one RPi 3B+ with Razberry Z-wave GPIO card that I now want to connect to a new HA instance that I’m running on a server.
I have installed HACS and home-assistant-remote on the server and the config looks like this:

remote_homeassistant:
  instances:
  - host: ip
    port: 443
    secure: true
    verify_ssl: false
    access_token: !secret rpi_access_token
    entity_prefix: "zio_"
    include:
      domains:
      - sensor
      - switch
      - light
      - group
      - zwave
    subscribe_events:
    - state_changed
    - service_registered
    - zwave.network_ready
    - zwave.node_event

I can see my light on the server, but I can't turn the light on or off.
I just got this error.

When I go into the RPi, I can turn the lights on and off.

Have I missed something?

From the documentation https://github.com/lukas-hetzenecker/home-assistant-remote#special-notes:

If you have remote domains (e.g. switch), that are not loaded on the master instance you need to add a dummy entry on the master, otherwise you'll get a Call service failed error.
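
In config terms, that dummy entry is just an empty domain key on the master. A minimal sketch (the domains shown are examples; use whichever remote domains the master is missing):

```yaml
# Master configuration.yaml: empty entries so HA loads these
# components even though the master has no local devices in them.
light:
switch:
```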

Isn’t it your situation?
GV

Yeah, it’s easy to get caught out by this. When I was testing, I had it working, but then removed the only light from the main instance, and had to re-read the docs…

Once I get the automation logic figured out, I might do a bit of a write up on the redundancy side of my setup, but briefly, thanks to home-assistant-remote, connecting to secondary instances and failing over to backups is working very well!

I started thinking about redundancy when I realised you can backup the aeotec z-stick - due to the number of zwave devices I now have, I bought a spare thinking I’d apply the config and keep it handy in case it was ever required. Then I realised I may as well keep it plugged into a spare raspberry pi, ready to go.

My setup is a pair of NUCs running Windows server with Hyper-V role. VM running Hass.io. This vm is setup for failover to the other NUC hypervisor.

I then have 2 raspberry pis running hass.io - snapshot of one loaded onto the second, 2 zsticks (backup of the first loaded onto the second). Both have the same IP set (obviously being careful to never have both on simultaneously). Both pis are connected to wifi power sockets (not zwave, for obvious reasons…) so can be powered on (and off, if required) from the main hass.io instance.

Because I’ve loaded a snapshot of one of the rpi hass.io’s onto the second, the long-lived token is the same across both, so if I shut 1 pi down and power up the other, all the zwave devices come back online with no further interaction (there are rare occasions where even with one rpi, I’d need to reboot the main instance—i.e. taken the rpi down for a prolonged period—so that remains the same).

Next step is to work out some automations to check for zwave activity/network health and auto failover!

Thanks, I did not read that.

It solved the issue. I just had to add

switch:

to the config.

Did you get this working, harry? I am at the same point as you were here. Not sure if I need to add something to my slave unit.

I did manage to get it to work locally and remotely.
I'll have to check my exact settings and come back to you, though.

Oh… please do… very interested.

Is this also working over WAN and not LAN only? I have tried a bit with a DuckDNS address, but it doesn't seem to work.