Custom Integration - Nikobus over TCP socket

Hi everyone,

Wanting to learn more about HA and integrations, I took the path of integrating my home automation system "Nikobus" via a custom integration.

While the connection and data exchange with the Nikobus controller work well, I'm having difficulty understanding the best way to interact with HA switches, covers, etc. Here is my first try.

When I query the status of a module, it gives me the status of its 6 outputs, with 00 = off and FF = on.


$1C6C0E00 0000000000FF D5A6E2

$1C6C0E00 tells the controller which module to interact with
D5A6E2 is the CRC

So I extract “00 00 00 00 00 FF” which is the status of all 6 outputs, here with output 6 being on.
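For reference, that extraction can be sketched in a few lines. This is a sketch based on my reading of the frame layout above; the slice offsets are an assumption derived from the example, not from any Nikobus documentation:

```python
def parse_output_states(frame: str) -> list[bool]:
    """Split a $1C... status frame into six on/off flags.

    Assumed layout (from the example above): a 9-char header
    ("$1C" + module address + filler), a 12-char payload, a 6-char CRC.
    """
    payload = frame[9:21]                 # e.g. "0000000000FF"
    return [byte != 0 for byte in bytearray.fromhex(payload)]

print(parse_output_states("$1C6C0E000000000000FFD5A6E2"))
# [False, False, False, False, False, True] -> output 6 is on
```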

I can read / write / change status on Nikobus without any issue. Where I'm struggling is to find a good logic for linking switches in HA to a current state like "0000000000FF".

Current way of thinking:
Get the status once at integration setup, so the "0000000000FF" is stored in a JSON dict.
Then the switches get their status from that dict.

Refresh the switch statuses every minute by re-running the same "data refresh" → "JSON" cycle. This is needed because an actuator outside HA, e.g. a wall switch, could have changed the state.

I'm guessing there are better ways of handling this. Any guidance or advice would be much appreciated.

Here is my draft code

fdebrus/Nikobus-HA

Thanks !

Just having a quick look at your code (not checking the logic of the data handling), I can see you are not using the coordinator correctly. I would advise changing your super().__init__() call to include an update_interval.


This way your coordinator will call your refresh routine every 60s for you. This refresh routine needs to return True if successful, and then it will update each entity.
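The code block from this reply didn't survive, but the idea is that super().__init__() receives an update_method and an update_interval, and the coordinator then drives the polling. What the coordinator does with those two arguments can be sketched in plain asyncio (all names here are illustrative, not HA's actual implementation):

```python
import asyncio
from datetime import timedelta

class MiniCoordinator:
    """Toy model of the DataUpdateCoordinator polling contract."""

    def __init__(self, update_method, update_interval: timedelta):
        self.update_method = update_method      # your refresh routine
        self.update_interval = update_interval  # the real coordinator reschedules on this
        self.listeners = []                     # entities register callbacks here
        self.data = None

    async def async_refresh(self):
        # run the refresh routine, then notify every registered entity
        self.data = await self.update_method()
        for listener in self.listeners:
            listener()

async def demo():
    async def fetch_states():
        return "0000000000FF"   # pretend this queried the Nikobus module

    coordinator = MiniCoordinator(fetch_states, timedelta(seconds=60))
    seen = []
    coordinator.listeners.append(lambda: seen.append(coordinator.data))
    await coordinator.async_refresh()
    return seen

print(asyncio.run(demo()))  # ['0000000000FF']
```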

In terms of your specific question, you could just hold this as a byte array, with the entities instantiated with an index number. They then pull their element of the array.

states = bytearray.fromhex(state_groups)

Then states[0], states[1], etc.

To create your switches use a loop to enumerate the states array.

for channel, state in enumerate(states):
    entities.append(NikobusSwitchEntity(hass, dataservice, description, model, address, channel, chDescription))

and your is_on in your switch class would be:

@property
def is_on(self):
    """Return true if the switch is on."""
    return self._dataservice.states[self._channel] != 0

P.s. sorry for all the edits, trying to do on my tablet.

Thank you ! Will implement and share progress.

For the initial setup I get 2 answers: the ACK of the command ($0512) and the result ($1C6C0E00000000000000CB56B6).

2024-02-20 20:58:36.818 DEBUG (MainThread) [custom_components.nikobus.coordinator] *** REFRESH for 0E6C ***
2024-02-20 20:58:36.819 DEBUG (MainThread) [custom_components.nikobus.nikobus] ----- NikobusApi.get_output_state() enter -----
2024-02-20 20:58:36.819 DEBUG (MainThread) [custom_components.nikobus.nikobus] address = 0E6C, group = 1, timeout = 5
2024-02-20 20:58:36.819 DEBUG (MainThread) [custom_components.nikobus.nikobus] ----- Nikobus.send_command_get_answer() enter -----
2024-02-20 20:58:36.819 DEBUG (MainThread) [custom_components.nikobus.nikobus] command = $10126C0E4E16A4, timeout = 5
2024-02-20 20:58:39.319 DEBUG (MainThread) [custom_components.nikobus.nikobus] Received response:
2024-02-20 20:58:39.320 DEBUG (MainThread) [custom_components.nikobus.nikobus]   1: $0512
2024-02-20 20:58:39.320 DEBUG (MainThread) [custom_components.nikobus.nikobus]   2: $1C6C0E00000000000000CB56B6

With the same code but over a scheduled refresh, I get 3 answers, which breaks my logic.
I can adapt my logic accordingly, but I've no idea why the behaviour is different.

2024-02-20 20:59:59.114 DEBUG (MainThread) [custom_components.nikobus.coordinator] *** REFRESH for 0E6C ***
2024-02-20 20:59:59.114 DEBUG (MainThread) [custom_components.nikobus.nikobus] ----- NikobusApi.get_output_state() enter -----
2024-02-20 20:59:59.114 DEBUG (MainThread) [custom_components.nikobus.nikobus] address = 0E6C, group = 1, timeout = 5
2024-02-20 20:59:59.114 DEBUG (MainThread) [custom_components.nikobus.nikobus] ----- Nikobus.send_command_get_answer() enter -----
2024-02-20 20:59:59.114 DEBUG (MainThread) [custom_components.nikobus.nikobus] command = $10126C0E4E16A4, timeout = 5
2024-02-20 21:00:01.614 DEBUG (MainThread) [custom_components.nikobus.nikobus] Received response:
2024-02-20 21:00:01.614 DEBUG (MainThread) [custom_components.nikobus.nikobus]   1: $0512$1C9483000000000000005D252B
2024-02-20 21:00:01.614 DEBUG (MainThread) [custom_components.nikobus.nikobus]   2: $0512
2024-02-20 21:00:01.614 DEBUG (MainThread) [custom_components.nikobus.nikobus]   3: $1C6C0E00000000000000CB56B6


Try changing

await NikobusDataCoordinator.refresh_nikobus_data(coordinator)

in your __init__.py to:

await coordinator.async_config_entry_first_refresh()

and see if that makes any difference. This runs your refresh_nikobus_data function. Not sure if passing the coordinator to your function is causing an issue, as it seems more like an API issue (maybe holding onto data somewhere?)

The other option is to call this twice in your __init__.py and see if the second call gives the same 3-part response. That would more likely mean it's an API issue.

I have changed to your suggestion, no impact.

First run: consistent across all modules and stable, ACK + DATA.

Received response: ['$0517', '$1C6C0E00000000000000CB56B6\r']

Then over the next refreshes I randomly get other formats of data. I need to change my API management…

Received response: ['$0517$1C074700000000000000981112\r$0512$1C07470000000000000098111', '2\r', '$0512', '$1C6C0E00470000000000690756\r']

The ACK is sometimes mixed with data, sometimes not… weird, but nothing that I cannot manage with some code logic.

Ok great. Glad you can get it working

self._nikobus_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
data = self._nikobus_socket.recv(64).decode()

Shouldn't the socket and the read command handle \r framing better, or do I have to code it myself in my logic? Harvesting the docs… :slight_smile:

I could just do that, but it's not elegant:

messages = data.split('\r')
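One step up from a bare split() is to keep a buffer between reads, so a message split across two recv() calls (like the '2\r' fragment in the log above) is reassembled before processing. A minimal sketch; the class name is mine:

```python
class MessageBuffer:
    """Reassemble '\r'-terminated messages from arbitrary-sized reads."""

    def __init__(self):
        self._pending = ""

    def feed(self, chunk: str) -> list[str]:
        # append the new chunk, then emit only complete messages;
        # a trailing partial message stays buffered for the next read
        self._pending += chunk
        *complete, self._pending = self._pending.split("\r")
        return [m for m in complete if m]

buf = MessageBuffer()
print(buf.feed("$0512$1C07470000000000000098111"))  # [] -- incomplete, buffered
print(buf.feed("2\r$0512\r"))
# ['$0512$1C074700000000000000981112', '$0512']
```

Splitting glued ACK+data pairs like '$0512$1C…' would still need an extra pass, but at least no bytes are lost at read boundaries.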

I got everything to work as it should, but when the integration is added to HA, the overall HA platform becomes slow.

My current guess is that it happens while a data refresh is running, but that should not block HA from responding.

The refresh reads many components of the Nikobus and lasts about 30 seconds, which is expected, but during those 30 seconds HA takes a performance hit.

Still looking into the matter, but if someone could offer some direction, it would be much appreciated!

thanks again,

the latest code is on my github
fdebrus/Nikobus-HA (

I would highly suspect your socket is blocking the event loop and preventing other jobs from getting execution time… I am not an expert on async sockets, but a quick Google search shows that you can set a non-blocking option on the socket. Not sure if this works for your implementation.
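The code block from this reply was lost, but the non-blocking option is a one-liner on the stdlib socket (a sketch; whether it is sufficient depends on how the rest of the read loop is written):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setblocking(False)        # recv() now raises BlockingIOError instead of stalling the loop
is_blocking = sock.getblocking()
print(is_blocking)             # False
sock.close()
```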


Failing that look for more examples of sockets with async.


Situation has improved using setblocking(False), thanks for that.

Further reading suggests that using asyncio, which is at the core of HA, might be better suited than the raw socket.

Will try and benchmark both options

asyncio performs better than the raw socket for my use case. The performance issue is now resolved. Thank you again!
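For anyone following along, the asyncio equivalent of the raw socket uses streams, which also solve the '\r' framing issue from earlier. A sketch: the local echo server stands in for the Nikobus bridge, and host/port are placeholders:

```python
import asyncio

async def read_message(reader: asyncio.StreamReader) -> str:
    # readuntil() handles the '\r' framing that raw recv() left to us
    raw = await reader.readuntil(b"\r")
    return raw.decode().rstrip("\r")

async def demo():
    # stand-in for the Nikobus bridge: serve two framed messages locally
    async def handler(r, w):
        w.write(b"$0512\r$1C6C0E00000000000000CB56B6\r")
        await w.drain()
        w.close()

    server = await asyncio.start_server(handler, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    messages = [await read_message(reader), await read_message(reader)]
    writer.close()
    server.close()
    await server.wait_closed()
    return messages

print(asyncio.run(demo()))  # ['$0512', '$1C6C0E00000000000000CB56B6']
```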

I’m now looking at the best way to update switch and light statuses.

As my integration starts, it loads the status of all modules into a JSON dict in memory.

I'm struggling to understand, despite reading the docs again and again, the difference between is_on / update / async_update.

The way I understand it, update or async_update (the latter if there is a blocking routine behind the call) shall query the latest status and update is_on to true or false; similarly for lights, with brightness as an extra.

my questions are:
1- At integration load time, is update triggered? It should then be the routine that loads the initial state from the JSON into the entity status.

2- When the JSON is updated by a factor external to HA, e.g. I press a wall button that turns on a light, a function in HA updates the JSON; but then how do I tell HA to refresh the impacted entity?

Thanks for your guidance once more; just a non-developer learning HA integrations as a hobby…

Check the dev branch, this might help: fdebrus/Nikobus-HA at dev

So the bug was in the function that returns the state of the switch or light… I was so focused on the entity definitions that I forgot to triple-check the output of the function.

It's now resolved by correcting the function and adding the below to the entity definitions:

@property
def should_poll(self):
    """Return True if the entity should be polled for updates."""
    return True

Sorry for late reply.

Ok, so I think the confusion is maybe down to the number of different ways to do this (and you seem to be using two of them together). As you are using a data update coordinator (and I would always recommend that), let's address how that works.

So you instantiate your data update coordinator in __init__.py (correct) and call coordinator.async_config_entry_first_refresh(). This then runs the update routine you have defined in your coordinator. If that completes successfully, it then calls all the listeners to update. In your entities, this would be _handle_coordinator_update. On each update interval it runs the coordinator's update function and, again, all the entities' _handle_coordinator_update functions.

Therefore, to ensure each of your entities (lights, covers, switches, etc.) updates, you add this function (as below) to each entity class and use it to set your self._state variable.

An example for your switch:

def _handle_coordinator_update(self) -> None:
    """Handle updated data from the coordinator."""
    self._state = bool(self._dataservice.get_switch_state(self._address, self._channel))

In addition, when using a data update coordinator, should_poll should be set to False (it is by default, so just remove that property), and you do not need any of the async_added_to_hass listener setup or async_update functions, as this is all done for you.

Also, in your coordinator, your async_update_data should not be nested inside the __init__ function, and therefore the super().__init__() should use update_method=self.async_update_data.

Hope that helps.

Thanks !

I've implemented your recommendations in my dev branch, so the updates are done via the data coordinator. It works well and simplifies the code a lot!

I think there should be a better way of doing this: the api needs the coordinator, and the coordinator needs the api…
I could not find a better way.

async def async_setup_entry(hass: core.HomeAssistant, entry: config_entries.ConfigEntry) -> bool:

api = await Nikobus.create(hass, connection_string)

coordinator = NikobusDataCoordinator(hass, api)

api.coordinator = coordinator

Also, when the update is triggered by an external factor, e.g. a button press, the update with this method takes much longer: 5 to 10 seconds vs. immediate with the previous code. Looking into the debug logs to understand why.

Happy to hear your thoughts

I think I have a race condition. If I disable the refresh

# update_interval=timedelta(seconds=120), # Defines how often data should be updated.

Then a physical button press refreshes the entity state immediately. If the refresh is enabled, a button press refreshes the state of the entity only after 8 to 10 seconds.

Checking now on how to resolve

So if you press a button, does the api call a callback function in your coordinator? If so, at the end of that function you can add a call to the coordinator's async_update_listeners(), and that will tell all the entities to update. Keep your update interval set though, and use that for the every-2-minutes refresh.

Ok, just had another look at your code. Here is how I would do it.

Have a connect function in your coordinator like this; I'll explain the params below:

async def connect(self):
    self.api = await Nikobus.create(connection_string, self.async_event_handler)

Then in async_setup_entry just have

coordinator = NikobusDataCoordinator(hass)
success = await coordinator.connect()

Equally, if you make your api create function return success or failure, you can test for this on setup and raise a ConfigEntryNotReady exception, which will make the integration retry its setup on a repeating, backed-off schedule without you having to code anything.

if not success:
    raise ConfigEntryNotReady

Create an event handler function in your coordinator:

async def async_event_handler(self, event):
    """Handle events from the api."""
    # do any data manipulation here, then tell the entities to update
    self.async_update_listeners()


In your api, store this callback function parameter as self.event_callback or something; you can see it passed to the create function above. Then, when you get an event update in your api, you can just call

await self.event_callback(event_details)

This way, you don't need to pass the coordinator to your api, as it holds a reference to the callback function, which will tell the entities to update when an event comes in.
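The callback wiring described above can be sketched without any HA imports (all names here are illustrative):

```python
import asyncio

class ApiSketch:
    """The api stores the coordinator's handler and calls it on bus events."""

    def __init__(self, event_callback):
        self.event_callback = event_callback

    async def simulate_bus_event(self, event_details):
        # in the real api this would fire when a frame arrives from Nikobus
        await self.event_callback(event_details)

class CoordinatorSketch:
    def __init__(self):
        self.last_event = None
        # the api only ever sees the callback, never the coordinator itself
        self.api = ApiSketch(self.async_event_handler)

    async def async_event_handler(self, event):
        # data manipulation would go here, then the entities get notified
        self.last_event = event

async def demo():
    coordinator = CoordinatorSketch()
    await coordinator.api.simulate_bus_event({"address": "0E6C", "channel": 5})
    return coordinator.last_event

print(asyncio.run(demo()))  # {'address': '0E6C', 'channel': 5}
```

The dependency now points one way only: the coordinator knows the api, and the api knows a single function.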

I did see you are firing a hass event in your api, but I would also replace that with your event handler callback; then the api is self-contained and all interaction is direct with your coordinator. That will make things much easier to maintain and debug.

As another note, you still haven't dedented your async_update_data function in your coordinator. I would again do this for ease.


Oh, and one more thing to make your code smaller: you can remove all your switch, light, and cover functions etc. from your coordinator if, in your entity classes, instead of

self._dataservice.get_switch_state(self._address, self._channel)

You use

self._dataservice.api.get_switch_state(self._address, self._channel)

as your coordinator has a reference to your api.


Thank you! That did the trick, no more issues on the horizon. Stable and performing well so far; waiting on feedback from other testers.

I will have to deep-dive a bit into the cover logic, as it works fine from HA but not from HomeKit. Weird… will debug over the coming weekend.

Again, thanks for all your advice on my first integration :slight_smile: much appreciated!