Master HA instance with multiple slaves

I removed “subscribe_events” from config and it started working!

This looks great! I'm currently trying to get two Google Assistant accounts to control different things in my Home Assistant.
This way, I can set up a new Home Assistant instance and link it to my original one.
Then I can point the second Google Assistant at the second Home Assistant, so that the second GA only gets access to 15 of the entities in my current Home Assistant.

That way my GA has all entities, but the other GA only sees one apartment.

Will have to try this
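As an aside, there is also a way to limit what a Google Assistant account sees without a second instance: the google_assistant integration's expose settings. A rough sketch, assuming a manually configured google_assistant integration on the second instance (the project id and entity names are hypothetical examples):

```yaml
# On the second Home Assistant instance (hypothetical names):
google_assistant:
  project_id: my-second-ga-project   # example project id
  expose_by_default: false           # hide everything unless explicitly exposed
  entity_config:
    light.apartment_living_room:
      expose: true
    switch.apartment_heater:
      expose: true
```

With expose_by_default set to false, only the entities listed under entity_config with expose: true are offered to that Google Assistant account.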

Hi All,

I am trying to run this on a local network with the following config on master:

remote_homeassistant:
  instances:
  - host: 192.168.86.136
    port: 8123
    access_token: *******
    entity_prefix: "2ndRpi_"

After a reboot the entities from my second instance show up with the correct state, but nothing updates afterwards and I can't control anything on the remote instance. I get the error below every time:

Error doing job: Task exception was never retrieved
Traceback (most recent call last):
  File "/config/custom_components/remote_homeassistant.py", line 206, in _recv
    callback(message)
  File "/config/custom_components/remote_homeassistant.py", line 291, in fire_event
    state = message['event']['data']['new_state']['state']
TypeError: 'NoneType' object is not subscriptable

Any ideas what the issue could be?
Thanks!
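For what it's worth, the traceback above means the state_changed event arrived with new_state set to None, which happens when an entity is removed on the remote instance; a defensive check would avoid the crash. A rough sketch of that check (the message structure follows the traceback; extract_new_state is a simplified stand-in, not the component's actual code):

```python
def extract_new_state(message):
    """Return the new state string from a state_changed event, or None.

    new_state is None when the entity was removed on the remote side,
    which is exactly what triggers the TypeError in the traceback above.
    """
    new_state = message["event"]["data"]["new_state"]
    if new_state is None:
        return None  # entity removed remotely; nothing to mirror
    return new_state["state"]


removed = {"event": {"data": {"new_state": None}}}
changed = {"event": {"data": {"new_state": {"state": "on"}}}}
print(extract_new_state(removed))   # None
print(extract_new_state(changed))   # on
```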

I currently have four HA instances in my home, each serving a different area and a different purpose.

I used statestream/eventstream/MQTT in the past but found it unreliable. A remote RasPi may or may not reconnect to the MQTT server after a reboot. That's frustrating for critical things like access to your house.
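For context, the statestream approach described here is Home Assistant's built-in mqtt_statestream integration, configured on the remote instance roughly like this (the base topic is an arbitrary example):

```yaml
# On the remote instance: mirror state changes to the MQTT broker
mqtt_statestream:
  base_topic: remote_ha        # arbitrary example topic
  publish_attributes: true     # also publish attribute topics
  publish_timestamps: true     # publish last_changed / last_updated
```

The master then has to subscribe to those topics, which is where the reconnect fragility mentioned above comes in.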

I now use the REST API and template integrations.
My main HA instance runs in Docker on my main server; I use it as the Z-Wave hub as well. The frontend and all integrations live there too.

I have RasPis with specific single purposes. I don't use the frontend on them at all beyond some basic setup, like adding the sensors or buttons each device has. No automations run on these, and updates are entirely optional since they are literally set-and-forget.

I have found that templating these devices results in faster connections and complete reliability. If a remote device is running, its actions and sensors are guaranteed to show in the frontend.

All automations run from my main instance.
This provides a single interface for all devices. These devices each get their own tab in the frontend due to their importance (one controls the theatre, another access control, sprinklers, etc.).

MQTT is nice, and I have some ESP8266s that use it without issue. I just find that for connecting multiple HA instances it adds a weird complexity and reduces reliability.

Would you mind sharing your configurations? I am looking to do something similar and an example would be great.

My remote instances are just the default install. I only enable the components I need, and I make no modifications to Lovelace since I rarely, if ever, use their frontends.

Below are two examples, from two different remote HA instances.

########################################################
#                 COMMAND LINE                         #
########################################################
- platform: command_line
  switches:
    projector_jvc:
      command_on: '/usr/bin/curl -X POST -H "Authorization: Bearer <mytoken>" -H "Content-Type: application/json" http://192.88.10.93:8123/api/services/hdmi_cec/power_on'
      command_off: '/usr/bin/curl -X POST -H "Authorization: Bearer <mytoken>" -H "Content-Type: application/json" http://192.88.10.93:8123/api/services/hdmi_cec/standby'
      command_state: '/usr/bin/curl -X GET -H "Authorization: Bearer <mytoken>" -H "Content-Type: application/json" http://192.88.10.93:8123/api/states/media_player.hdmi_0'
      value_template: '{{ value_json.state == "on" }}'
      friendly_name: Projector
    southgate:
      command_on: '/usr/bin/curl -X POST -H "Authorization: Bearer <mytoken>" http://192.60.27.52:8123/api/services/cover/close_cover'
      command_off: '/usr/bin/curl -X POST -H "Authorization: Bearer <mytoken>" http://192.60.27.52:8123/api/services/cover/open_cover'
      command_state: '/usr/bin/curl -X GET -H "Authorization: Bearer <mytoken>" http://192.60.27.52:8123/api/states/cover.south_gate'
      value_template: '{{ value_json.state == "closed" }}'
      friendly_name: South Gate
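As a side note on how the command_state/value_template pair above works: the GET returns the entity's JSON state object, and the template compares its state field. A rough Python equivalent, using a hypothetical sample response rather than a live instance:

```python
import json

# Hypothetical body returned by GET /api/states/media_player.hdmi_0
response_body = '{"entity_id": "media_player.hdmi_0", "state": "on", "attributes": {}}'

value_json = json.loads(response_body)      # what `value_json` holds in the template
switch_is_on = value_json["state"] == "on"  # the value_template expression
print(switch_is_on)  # True
```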

I really want this functionality. I'm assuming there is no way to do auto-discovery with this method?

Auto-discovering entities from another HA instance? No.

I'm new to HA and need to link two RPi 4s together (different locations), and for the life of me I can't get this to work.
If you have any good tips or a step-by-step guide, that would be really helpful. Also, where do you get the secret access token? Thanks for any help.

Update: with some help I found my problem. If you're going to use a token (from the slave), delete the password line; I was told I could delete everything under the password line since I wouldn't need it. It works now, with one problem: I get a "call service" error when I turn on a light on the master (a slave light). I added "switch:" to my config but still get the same thing. If anyone has any info, let me know. Thanks, Jeff.

@azwildfire, thanks for this. It was very simple to set up and is now working well between my house and holiday flat using ZeroTier, with no open ports.

Cheers!

I actually found that command line switches did not work for me in some situations.

I moved to the REST integration for switches, mostly because I couldn't get the command line platform to accept the JSON data properly, which is needed for most switching actions.

I tried MQTT, but it never worked well for me.

@tmjpugh
Hi, can you please post examples of how you implemented these RESTful switches? The documentation is lacking details on this. Thanks.

I have a RasPi at 192.160.27.52 that I call from my main HA instance.
I create REST commands on the main HA instance that execute commands on the RasPi.
Once a REST command exists, I wrap it in a switch using a template.

REST_COMMAND.YAML file

entry_gate_on:
  url: http://192.160.27.52:8123/api/services/cover/open_cover
  method: post
  headers: 
    authorization: 'Bearer mytoken'
    content-type: 'application/json'
  payload: '{"entity_id":"cover.entry_gate"}'
  verify_ssl: false

entry_gate_off:
  url: http://192.160.27.52:8123/api/services/cover/close_cover
  method: post
  headers:
    authorization: 'Bearer mytoken'
    content-type: 'application/json'
  payload: '{"entity_id":"cover.entry_gate"}'
  verify_ssl: false

SWITCH.YAML file

- platform: template
  switches:
    entry_gate:
      value_template: "{{ is_state('sensor.entry_gate', 'open') }}"
      turn_on:
        service: rest_command.entry_gate_on
      turn_off:
        service: rest_command.entry_gate_off

The template switch gets the "entry gate" state from my local HA instance.
It would also be possible to get it directly from the remote RasPi instance, but I was already doing that as a separate sensor; until now I never noticed the unnecessary extra sensor.
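For anyone wiring these files together: assuming rest_command.yaml and switch.yaml sit next to configuration.yaml, they would be pulled in with standard includes like:

```yaml
# configuration.yaml (assumes the two files are in the same directory)
rest_command: !include rest_command.yaml
switch: !include switch.yaml
```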


I am in a similar situation and have around 50 devices on HA. Could you share your setup for the master/slave instances if you've done it?
Many thanks.

Same here. I am trying to rebuild my environment and put Z-Wave, Zigbee, and Wyze Sense on a Pi, using a VM as my master. Does anything like remote_homeassistant need to be set up on the slave, or does Z-Wave need to be configured on the master? I get an error: "Failed to call service light/turn_off. Service not found."

I've advanced a bit, but auth is not working.
You only need the config on the master, not on the slaves; I tested that and it works. The problem for me is that no matter what I put in the config, I can't get a token or password to work. Password auth is deprecated according to the author. The long-lived token I created in the user preferences in HA does nothing, and frankly I don't understand which HA architecture concepts this plugin exposes and integrates.

There is so little information on this in the documentation, and so many unexplained assumptions, that this great work ends up not doing much for anyone until you figure something out by chance.

Post what you have; maybe we can help correct it.

Did you create the token on the remote instance or on the master instance?
You need to create it on the remote instance and add something like this to the configuration.yaml of the master instance:

remote_homeassistant:
  instances:
  - host: ip of remote instance
    port: HA port of remote instance
    secure: true
    verify_ssl: false
    access_token: token created on the remote instance
    entity_prefix: "instance02_"

Yup, you are correct.

Funny, I woke up early around 4:30 AM today. While running, it occurred to me that the token I had used was from the master. When I got back, I put the remote instance's token into the master's config, set the secure option to false, and voilà, everything showed up on the Z-Wave panel on the master.
Very cool indeed…
For others like me, here is what needs to be done:

  1. Clone the git repo: https://github.com/lukas-hetzenecker/home-assistant-remote.git
  2. In your config folder (in my case I'm following a dev setup with the same user, so under .homeassistant), copy only the custom_components/remote_homeassistant portion of the cloned repo, as the user that runs HA.
  3. Set up the master with the following simple configuration in configuration.yaml:


remote_homeassistant:
  instances:
  - host: x.x.x.x
    port: 8123
    secure: false
    verify_ssl: false
    access_token: Y0MjhlYWI3OTc1YzM1MjU1NWFkMSIsImlhd…
    entity_prefix: "Gajba1"

There are a ton more options, but I just needed these.

Fire up the remote HA first, then the master, and you will see something like this in the master instance logs:
Connected to home-assistant websocket at ws://x.x.x.x:8123/api/websocket
After that, go to the Z-Wave dashboard and you will see that everything is now exported to the master and visible for you to incorporate into the frontend.
Hope this helps others.
Thanks burnigstone and others for helping me.


Thank you for the tips, @Krash. Thanks to your suggestion to remove DuckDNS, I managed to connect to my remote HA. Did you ever figure out how to re-enable DuckDNS, and what is going on there in general? You said that you made it work with DuckDNS in the end?