Master HA instance with multiple slaves

Hey there.
I must admit I quit playing with this addon a year back.

From what I remember, when both my HA instances were on the same LAN, I had to disable the DuckDNS add-ons for them to work together. I believe I got it working by enabling DuckDNS only on the master and leaving the slave with local access only. I’m guessing that’s because my router does not support NAT loopback, and at the time I had no dnsmasq or Pi-hole running as a DNS server. (I must admit that even with my master Pi as the DNS server, I have no idea how the configuration should be done so that both are reachable via DuckDNS and the master/slave connection still works.)

When I moved my slave Pi to a different location, I enabled DuckDNS on it and it just worked, with the IPs set as stated in the documentation.

I hope this helped a bit. :slight_smile:

Following your advice, I tested them locally with the slave instance having the DuckDNS code in configuration.yaml, and it did not work. I commented that code out and it started to work. I then moved the slave instance to another network and connected to it via ZeroTier One, and the same thing happened: with the DuckDNS code it stops working, without the code it works.

I am not good enough with networking and security to understand what you said about the NAT loopback issues you had.

I am trying now via DuckDNS but having difficulties with my router and port forwarding. I will update once I succeed. Thank you so much!

I don’t know what ZeroTier One is, to be honest :slight_smile:
It worked for me with DuckDNS on both once the ports were forwarded and both HA instances were accessible from outside networks.

It should also work within the same network with DuckDNS only on the master.

Happy to help

Hi

I also have a problem with the connection from the master to the slave.

I have two HASS instances: one on Ubuntu x86 (master) in Docker, and a second one (slave) on a Pi 3, also in Docker.

On the slave I created a new user and generated a long-lived access token.

On the master I installed the add-on and added this to the config:

remote_homeassistant:
  instances:
  - host: IP
    access_token: !secret xxx_access_token

I tried adding/removing options in configuration.yaml. Whatever I do, I get:

2020-06-07 10:34:10 INFO (MainThread) [custom_components.remote_homeassistant] Connecting to ws://IP:8123/api/websocket
2020-06-07 10:34:11 ERROR (MainThread) [custom_components.remote_homeassistant] Could not connect to ws://IP:8123/api/websocket, retry in 10 seconds...

I have DuckDNS on both instances, but I disabled it for testing. It does not make any difference.

Is there something I am missing?

Are you publishing your slave HA on port 8123, and can you log in via browser at that same address?

Yes. It is on 8123.

OK, I got it working… So:

By default the connection is not secure. I had to add this to my config and now it is working:

secure: true 

The duckdns component is enabled and is also working.
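
For reference, here is a rough sketch of the full instance entry that matches what I ended up with. The host and secret name are placeholders, and secure: true assumes the slave really is served over HTTPS (e.g. via the DuckDNS/Let’s Encrypt add-on) on that port:

remote_homeassistant:
  instances:
  - host: 192.168.1.xxx              # placeholder: slave address
    port: 8123
    secure: true                     # connect with wss:// since the slave has SSL
    verify_ssl: false                # assumption: may be needed when connecting by IP
                                     # while the certificate is for the DuckDNS name
    access_token: !secret slave_access_token   # placeholder secret name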

Just in case it helps anyone, I had to add websocket_api: to configuration.yaml on the remote host. So far it’s working like a champ for my ZHA network entities. Is there any way to bring over devices as well? I would like to maintain my automations on the master instance, but none of my remote buttons are showing up (guessing this is normal). I’m considering creating binary sensors on the remote to monitor on the master; anyone have any thoughts?
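
In case it isn’t obvious where that goes, this is a minimal sketch of the remote host’s configuration.yaml; many installs already load this through other integrations, so treat it as an assumption rather than a hard requirement:

# Enable the websocket API on the remote/slave so the master
# can reach /api/websocket on it.
websocket_api: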

With remotes I’d rather work with events instead of entities with states. There’s a chance that there are already events fired on the machine that has the remotes when a button is pressed.
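
As a sketch of what I mean: an automation on the machine that has the remotes could trigger on the raw button event. The event type and event data below are assumptions for a ZHA remote (placeholder address and command); check Developer Tools > Events to see what actually fires on your setup:

automation:
  - alias: "Remote button toggles the hallway light"
    trigger:
      - platform: event
        event_type: zha_event                      # assumption: ZHA remotes fire zha_event
        event_data:
          device_ieee: "00:15:8d:00:xx:xx:xx:xx"   # placeholder device address
          command: "toggle"                        # placeholder command name
    action:
      - service: light.toggle
        entity_id: light.hallway                   # hypothetical light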

Either way it’s going to be some work to get them all up and running. It’s just so much easier in the UI to build it right off the device. I’ll put my big boy pants on and get to work lol

May I ask why you need this? Do you have a separate instance for devices that are too far away from the main instance?

Basically it boils down to: (1) the need to learn something new, and (2) my zigbee2mqtt install was lagging more and more on my main instance; the CC2531 would randomly drop out, requiring disconnection and reinsertion and then a restart of the zigbee2mqtt add-on. I had a RaspBee and an RPi 4 lying around, so why not. Any time there was a problem with the lights, the WAF went way down. Nothing is more frustrating than the lights not coming on from the switch in the dark of early morning when you need to get ready for work.

But then you could install deCONZ on the Pi and connect your master instance to it over the network. That’s how I use a ConBee stick with two instances, one for development/testing and one for production.

Yeah, I had pure deCONZ before I went to zigbee2mqtt. I have a few devices that deCONZ was not supporting, so I just opted to switch and figured I would try something new… again.

I tried it and I am getting the following error.

This is my config:


remote_homeassistant:
  instances:
  - host: 192.xxxxxxxxxxx
    port: 8123
    secure: true
    verify_ssl: false
    access_token: xxxx from remote pi
    entity_prefix: "instance02_"

Also, one more thing: on the remote system I am using the latest version of Hass.io, and on the master I am using 0.96.
TIA

IGNORE, GOT IT WORKING AFTER SETTING secure: false
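
For anyone hitting the same thing, my understanding (treat this as an assumption, I have not dug into the code) is that secure: true makes the component connect with wss:// and therefore expects SSL on the remote, while secure: false uses plain ws://, so the flag has to match how the slave is actually served:

remote_homeassistant:
  instances:
  - host: 192.xxxxxxxxxxx       # placeholder, as in the config above
    port: 8123
    secure: false               # ws:// because the slave is not served over SSL here
    verify_ssl: false
    access_token: xxxx from remote pi
    entity_prefix: "instance02_"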

Installation and configuration are easy as one, two, three. For now everything works fine, but some improvements would be nice. For example, when I trigger a switch on my main HA from the main HA interface, in the log I can see “plug %plugname% powered on by user %username%” (or similar; I use the Russian interface). But if I trigger a plug on the remote HA using the main HA interface, in the log I can only see “plug %plugname% powered on”.

I can’t for the life of me get it to work. I have this code in the configuration file of the master:

remote_homeassistant:
  instances:
  - host: 192.168.1.101
    port: 8123
    secure: false
    entity_prefix: "hytte_pi_"

I have no security whatsoever; if somebody manages to get into my cabin network, this would be the least of my worries. :wink: On the master I have a sensor named “hytte pi remote connection 192 168 1 101 8123” with this info:

192.168.1.101
port: 8123
secure: false
verify ssl: true
entity prefix: hytte_pi_

So it seems like there’s contact, but there are no new devices surfacing in the master. Does anybody have any idea what the problem can be? Is there something I need to activate on the slave?

You should not have to activate anything. Check the logs and see if there is anything in there.

OK, this was embarrassing… :woozy_face: I didn’t even think about checking the log on the master since the sensor said that I was connected! Turns out that it will connect but not populate without an access token or an API password, which the log stated in no uncertain terms. So I created a token and now I’m both connected and populated. Thanks a lot for the heads-up! :+1:
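
For anyone else who lands here, this is roughly what my working entry looks like now with the token added (the secret name is just an example):

remote_homeassistant:
  instances:
  - host: 192.168.1.101
    port: 8123
    secure: false
    access_token: !secret hytte_pi_token   # example secret: long-lived token created on the slave
    entity_prefix: "hytte_pi_"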


Right, that sounds reasonable in some sense. I guess the API doesn’t accept null authentication, which is good from a security perspective. The component should, however, not allow this configuration in the first place; one of access token or API password should be required. I would appreciate it if you opened an issue about it over at GitHub so I can fix it. Just refer to this post and tag me. Thanks! :+1: