Master HA instance with multiple slaves

I really want this functionality. I'm assuming there is no way to do auto-discovery with this method?

Autodiscover entities from another HA instance? No.

I'm new to HA and need two RPi 4s to link together (different locations), and for the life of me I can't get this to work.
If you have any good tips or a step-by-step guide, that would be really helpful. Also, where do you get the secret access token? Thanks for any help.

Update: with some help I found my problem. If you're going to use a token (from the slave), delete the password line; I was told I could also delete everything under the password line since I would not need it, and it works. One remaining problem: I get an error (failed to call service) when I turn on a light on the master (a slave light). I added the word switch: to my config, still the same thing. If anyone has any info, let me know. Thanks, Jeff.
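For anyone following along, the relevant bit of the master's configuration ended up looking roughly like this (the IP is just an example and the token is the long-lived one created on the slave; adjust both for your setup):

remote_homeassistant:
  instances:
  - host: 192.168.1.50   # example slave address, use your own
    port: 8123
    access_token: LONG_LIVED_TOKEN_FROM_THE_SLAVE   # replaces the old password line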

@azwildfire, thanks for this. It was very simple to set up and is now working well between my house and holiday flat using ZeroTier, with no open ports.

Cheers!

I actually found that command line did not work for me in some situations.

I moved to the REST integration for switches, mostly because I couldn't get command line to accept the JSON data properly, which is needed for most switching actions.

I tried MQTT but it never worked well for me.
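For reference, the command_line version I was attempting looked roughly like the sketch below (switch name, token, and IP are placeholders); getting the JSON body quoted correctly inside the YAML was exactly the part that never behaved for me:

switch:
  - platform: command_line
    switches:
      entry_gate_cli:
        command_on: >-
          curl -s -X POST
          -H "Authorization: Bearer REMOTE_TOKEN"
          -H "Content-Type: application/json"
          -d '{"entity_id": "cover.entry_gate"}'
          http://REMOTE_IP:8123/api/services/cover/open_cover
        command_off: >-
          curl -s -X POST
          -H "Authorization: Bearer REMOTE_TOKEN"
          -H "Content-Type: application/json"
          -d '{"entity_id": "cover.entry_gate"}'
          http://REMOTE_IP:8123/api/services/cover/close_cover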

@tmjpugh
Hi, can you please post examples of how you implemented these RESTful switches? The documentation is lacking details on this. Thanks

I have a RasPi at 192.160.27.52 that I call from my main HA instance.
I create rest_commands on the main HA instance that execute commands on the RasPi.
After the rest commands are made, I add them to a switch using a template.

REST_COMMAND.YAML file

entry_gate_on:
  url: http://192.160.27.52:8123/api/services/cover/open_cover
  method: post
  headers: 
    authorization: 'Bearer mytoken'
    content-type: 'application/json'
  payload: '{"entity_id":"cover.entry_gate"}'
  verify_ssl: false

entry_gate_off:
  url: http://192.160.27.52:8123/api/services/cover/close_cover
  method: post
  headers:
    authorization: 'Bearer mytoken'
    content-type: 'application/json'
  payload: '{"entity_id":"cover.entry_gate"}'
  verify_ssl: false

SWITCH.YAML file

- platform: template
  switches:
    entry_gate:
      value_template: "{{ is_state('sensor.entry_gate', 'open') }}"
      turn_on:
        service: rest_command.entry_gate_on
      turn_off:
        service: rest_command.entry_gate_off

The template switch shows I'm getting the "entry gate" state from the local HA instance.
It was also possible to get it directly from the remote RasPi instance, but I was already doing that as a separate sensor; until now I never noticed the unnecessary extra sensor.
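In case it helps, that separate sensor was a REST sensor along these lines, pulling the cover state straight from the remote instance's API (same placeholder token as above):

sensor:
  - platform: rest
    name: entry_gate
    resource: http://192.160.27.52:8123/api/states/cover.entry_gate
    headers:
      Authorization: 'Bearer mytoken'
      Content-Type: 'application/json'
    value_template: '{{ value_json.state }}'
    verify_ssl: false
    scan_interval: 30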


I am in a similar situation and have around 50 devices on HA. Could you share your setup for master/slave instances if you did it?
Many thanks

Same here. I am trying to re-set up my environment, putting Z-Wave, Zigbee, and Wyze Sense on a Pi and using a VM as my master. Does anything like remote_homeassistant need to be set up on the slave, or does Z-Wave need to be entered on the master? I get an error: Failed to call service light/turn_off. Service not found.

I advanced a bit but auth is not working.
You only need config on the master, not on the slaves; I tested it and it works. The problem for me is that no matter what I put in the config, I can't get a token or password to work. Password is deprecated according to the author. The token that I created via long-lived token creation in the user preferences in HA does not do anything, and frankly I do not understand which concepts of the HA architecture are exposed and integrated in this plugin.

There is so little info on this in the documentation, and assumptions are made but not explained, so this great work ends up not doing much for anyone until you figure something out by chance.

Post what you have. Maybe we can help correct it.

Did you create the token on the remote instance or the master instance?
You need to create it on the remote instance and add something like this in the configuration.yaml of the master instance:

remote_homeassistant:
  instances:
  - host: ip of remote instance
    port: HA port of remote instance
    secure: true
    verify_ssl: false
    access_token: token created on remote instance
    entity_prefix: "instance02_"

Yup, you are correct.

Funny, I woke up early around 4:30 AM today. While out running it occurred to me that the token I had incorporated was from the master; when I got back I put the remote instance's access token into the master's config and dropped the secure option to false, and voilà, stuff showed up on the Z-Wave panel on the master.
Very cool indeed…
For others like me, here is what needs to be done:

  1. Clone the git repo https://github.com/lukas-hetzenecker/home-assistant-remote.git
  2. Into the config folder (in my case I am following the dev setup with the same user, so under .homeassistant), copy only the portion of the cloned folder that starts with custom_components/remote_homeassistant, as the user that is bootstrapping HA (see the shell sketch after the config below).
  3. Set up the master with the following simple configuration in configuration.yaml:


remote_homeassistant:
  instances:
  - host: x.x.x.x
    port: 8123
    secure: false
    verify_ssl: false
    access_token: Y0MjhlYWI3OTc1YzM1MjU1NWFkMSIsImlhd…
    entity_prefix: "Gajba1"

There are a ton more options, but I just needed this…
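For steps 1 and 2 above, the commands were roughly the following (paths are from my dev setup under ~/.homeassistant; adjust to wherever your configuration.yaml lives):

git clone https://github.com/lukas-hetzenecker/home-assistant-remote.git
mkdir -p ~/.homeassistant/custom_components
cp -r home-assistant-remote/custom_components/remote_homeassistant ~/.homeassistant/custom_components/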

Fire up the remote HA first, then the master, and you will see something like this in the master instance logs:
Connected to home-assistant websocket at ws://x.x.x.x:8123/api/websocket
After that, go to the Z-Wave dashboard and you will see that all the stuff is now exported to the master and visible for you to incorporate into the Polymer GUI.
Hope this helps others.
Thanks burnigstone and others for helping me…
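As a quick sanity check, the exported entities can be used on the master like any local entity, with the prefix prepended to the entity ID (the exact prefixed ID shows up in Developer Tools > States). A hypothetical example with a made-up light name:

automation:
  - alias: "Turn off remote kitchen light at midnight"
    trigger:
      - platform: time
        at: "00:00:00"
    action:
      - service: light.turn_off
        entity_id: light.gajba1_kitchen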


Thank you for the tips, @Krash. Thanks to your suggestions on removing DuckDNS, I managed to connect to my remote HA. Did you manage to figure out how to activate DuckDNS again? And generally what is happening there? You said that you made it work with DuckDNS in the end?

Hey there.
I must admit I quit playing with this add-on a year back.

From what I remember, if I had both my HA instances within the same LAN, I had to disable the DuckDNS add-ons for them to work. I believe I got it to work by enabling DuckDNS only on the master, leaving the slave with only local access. I'm guessing that's because my router does not support NAT loopback, and at the time I had no dnsmasq or Pi-hole running as a DNS server. (I must admit that even with my master Pi as the DNS server, I have no idea how the configuration should be done for both of them to work with DuckDNS access and still have the master/slave connection.)

When I moved my slave Pi to a different location, I enabled DuckDNS on it and it just worked, with the IPs set as stated in the documentation.

I hope this helped a bit. :slight_smile:

Following your advice, I tested them locally with the slave instance having the DuckDNS code in configuration.yaml, and it did not work. I commented that code out and it started to work. I then moved the slave instance to another network and connected to it via ZeroTier One, and the same thing happened: with the DuckDNS code it stops working, without it it works.

I am not good enough with networking and security to understand what you said about the NAT loopback issues you had.

I am now trying via DuckDNS but having difficulties with my router and port forwarding. I will update once I succeed. Thank you so much!

I don't know what ZeroTier One is, to be honest :slight_smile:
It worked for me with DuckDNS on both once the ports were configured and the HAs were accessible from outside networks.

It should also work within the same network with DuckDNS only on the master.

Happy to help

Hi

I also have a problem with the connection from the master to the slave.

I have two HASS instances: one on Ubuntu x86 (master) in Docker, and a second one (slave) on a Pi 3, also using Docker.

On the slave I created a new user and generated a long-lived token.

On the master I installed the add-on and added this to the config:

remote_homeassistant:
  instances:
  - host: IP
    access_token: !secret xxx_access_token
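If it helps, the matching entry in secrets.yaml on the master is just the raw long-lived token generated on the slave (key name taken from the config above):

xxx_access_token: PASTE_THE_LONG_LIVED_TOKEN_FROM_THE_SLAVE_HERE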

I tried adding/removing options in configuration.yaml. Whatever I do, I get:

2020-06-07 10:34:10 INFO (MainThread) [custom_components.remote_homeassistant] Connecting to ws://IP:8123/api/websocket
2020-06-07 10:34:11 ERROR (MainThread) [custom_components.remote_homeassistant] Could not connect to ws://IP:8123/api/websocket, retry in 10 seconds...

I have DuckDNS on both instances, but I disabled it for testing. It does not make any difference.

Is there something I am missing?

Are you publishing your slave HA on port 8123, and does logging in via a browser at the same address work?

Yes. It is on 8123.
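One more thing worth trying is hitting the slave's REST API from the master host itself, to rule out a network or token problem between the two machines. Assuming the same IP and long-lived token from the config above, a valid setup should answer with {"message": "API running."}:

curl -H "Authorization: Bearer LONG_LIVED_TOKEN_FROM_THE_SLAVE" -H "Content-Type: application/json" http://IP:8123/api/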