"zha-toolkit" - Toolkit providing low and high level Zigbee commands through ZHA/zigpy

@le_top , thanks for the tips. Unfortunately I can't see any ServiceCall zha_custom.execute in the log, so maybe I am doing the debug bit wrong. The good news is that after switching the Developer Tools to YAML mode, I can now see the backup at custom_components/zha_custom/local/nwk_backup.json.

What is still not clear to me is how to restore this backup to a new coordinator. Do I just pull out my CC2531, insert my Sonoff CC2652P, and call restore? All I can see in the README is to remove the key and insert the new key, which is not clear.

Many thanks for your help.

Well, that’s good progress.

That you do not see anything in the logs is likely related to the log configuration in HA itself.

By key, I mean “USB Key” as the TI-ZNP hardware is usually a USB device.

So basically, once you have the backup file, the restore will use the file from the same location.
So by physically replacing the TI-ZNP key with the new destination key, you can restore to the destination key.

For safety, I recommend restarting HA after putting in the new key, and restarting it again after the restore.

When you restore, you’ll see that another backup is made (of the key that is being restored to).

Note that you're the first to do an actual restore this way. If the restore "fails", you should be able to go back to the previous key.

It's also best that your backup is not too old. The restore method allows defining an offset for the TX counter, which is probably used to avoid replays: end devices may not accept packets with a counter that is too small. This offset is 2500 by default, but if you have many devices, that number of packets can easily be reached in a day (it's "only" 100 frames per hour, or about 1.5 per minute).
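A quick sanity check of that arithmetic in Python (the function name is mine, purely illustrative):

```python
# Rough estimate (assuming steady traffic) of how long a TX-counter
# offset of a given size lasts at a given frame rate.
def offset_lifetime_hours(offset_frames: int, frames_per_hour: float) -> float:
    """Hours until real traffic consumes the counter offset."""
    return offset_frames / frames_per_hour

# At ~100 frames per hour (about 1.5 per minute), the default
# offset of 2500 frames is consumed in roughly a day:
print(offset_lifetime_hours(2500, 100))  # 25.0 hours
```

So a backup older than a day or two may already be behind the live frame counter on a busy network.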

The method indicated above to set the log level is probably lost after a restart, so you need to redo it after each restart.


@le_top , thanks a lot for your help. I will test it out over the weekend. I can see "frame_counter": 4008842 in my backup, so maybe I will go with that value.

Best regards .

Hi,
Thanks for your work on this. I have installed this and enabled the logger debug. When I try to call

service: zha_custom.execute
data:
  command: znp_backup

I get the following error:

Logger: homeassistant.helpers.script.websocket_api_script
Source: custom_components/zha_custom/znp.py:38
Integration: ZHA Custom Service ([documentation](https://github.com/mdeweerd/zha_custom), [issues](https://github.com/mdeweerd/zha_custom/issues))
First occurred: 10:14:26 AM (3 occurrences)
Last logged: 10:20:12 AM

websocket_api script: Error executing script. Unexpected error for call_service at pos 1: 'ControllerApplication' object has no attribute '_znp'

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 381, in _async_step
    await getattr(self, handler)()
  File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 584, in _async_call_service_step
    await service_task
  File "/usr/src/homeassistant/homeassistant/core.py", line 1495, in async_call
    task.result()
  File "/usr/src/homeassistant/homeassistant/core.py", line 1530, in _execute_service
    await handler.job.target(service_call)
  File "/config/custom_components/zha_custom/__init__.py", line 61, in custom_service
    await handler(
  File "/config/custom_components/zha_custom/znp.py", line 38, in znp_backup
    backup_obj = await backup_network(app._znp)
AttributeError: 'ControllerApplication' object has no attribute '_znp'

I am running HASS OS in a Proxmox VM and have the latest versions of Core, Supervisor and Host.
thanks

The '_znp' field does not exist, suggesting that your Zigbee coordinator is not using zigpy-znp.

Yup, just checked and my HUSBZB-1 is not zigpy-znp. Thanks.

There is a backup script for this; it should be doable to integrate it, but maybe somebody with such a device could do that: bellows/backup.py at acf47f6939c28b20b22d5715646e6f1a0bd70064 · zigpy/bellows · GitHub.

The backup call works fine for me. I just tried the restore command after upgrading the firmware on my coordinator (the upgrade wiped the data). I think the restore call only looks for "nwk_backup.json", so if you use a custom file name like "nwk_backup_20220107.json", it throws an error. I renamed the file, did another restore, and got the following in my logs:

2022-01-07 15:33:31 INFO (MainThread) [custom_components.zha_custom] Running custom service: <ServiceCall zha_custom.execute (c:9a7d3a5f91bc24e161808b9d82257dea): command=znp_restore>
2022-01-07 15:33:31 DEBUG (MainThread) [custom_components.zha_custom] module is <module 'custom_components.zha_custom' from '/config/custom_components/zha_custom/__init__.py'>
2022-01-07 15:33:42 ERROR (MainThread) [homeassistant.core] Error executing service: <ServiceCall zha_custom.execute (c:9a7d3a5f91bc24e161808b9d82257dea): command=znp_restore>
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/core.py", line 1511, in catch_exceptions
    await coro_or_task
  File "/usr/src/homeassistant/homeassistant/core.py", line 1530, in _execute_service
    await handler.job.target(service_call)
  File "/config/custom_components/zha_custom/__init__.py", line 61, in custom_service
    await handler(
  File "/config/custom_components/zha_custom/znp.py", line 109, in znp_restore
    await app._znp.pre_shutdown()
AttributeError: 'ZNP' object has no attribute 'pre_shutdown'

DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>')]
2022-01-07 15:41:15 WARNING (MainThread) [homeassistant.components.zha.core.channels.base] [0x2DC7:1:0x0300]: async_initialize: all attempts have failed: [TimeoutError(), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>')]
2022-01-07 15:41:15 WARNING (MainThread) [homeassistant.components.zha.core.channels.base] [0x2DC7:1:0x0006]: async_initialize: all attempts have failed: [TimeoutError(), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>')]
2022-01-07 15:41:22 WARNING (MainThread) [homeassistant.components.zha.core.channels.base] [0x1B42:1:0x0300]: async_initialize: all attempts have failed: [DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>')]
2022-01-07 15:41:22 WARNING (MainThread) [homeassistant.components.zha.core.channels.base] [0x1B42:1:0x0008]: async_initialize: all attempts have failed: [DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>')]
2022-01-07 15:41:22 WARNING (MainThread) [homeassistant.components.zha.core.channels.base] [0x1B42:1:0x0006]: async_initialize: all attempts have failed: [DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>')]
2022-01-07 15:41:56 WARNING (MainThread) [homeassistant.components.zha.core.channels.base] [0x2DC7:1:0x0702]: async_initialize: all attempts have failed: [DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>')]
2022-01-07 15:42:02 WARNING (MainThread) [homeassistant.components.zha.core.channels.base] [0x1B42:1:0x0702]: async_initialize: all attempts have failed: [DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>'), DeliveryError('Request failed after 5 attempts: <Status.NWK_INVALID_REQUEST: 194>')]

  1. Yes, the file has to be called exactly 'nwk_backup.json' for the restore. Maybe I'll add the possibility to set the exact file name later (I now know how to add more parameters than just "command_data").
  2. The restore has apparently been executed (but I forgot to 'await' the asynchronous command).
  3. The last step, 'pre_shutdown', was not found. That is strange because it's in the original CLI code and it exists in the common Controller implementation.

I added ‘await’ now, maybe that helps.

  4. The errors may be related to the restore and a need to restart.

For all commands: I modified the code so that the 'ieee' parameter can also accept the short network address or the entity name.
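A rough sketch of how such a parameter value could be classified; the logic and the `classify_ref` name are my assumptions, not the actual zha_custom implementation:

```python
# Illustrative only: decide whether an 'ieee' parameter holds a full
# IEEE/EUI64 address, a 16-bit short network address, or an entity name.
def classify_ref(value: str) -> str:
    v = value.strip()
    if v.count(":") == 7:        # e.g. "00:12:4b:00:01:02:03:04"
        return "ieee"
    try:
        nwk = int(v, 16)         # e.g. "0x2DC7" or "2DC7"
        if 0 <= nwk <= 0xFFFF:
            return "nwk"
    except ValueError:
        pass
    return "entity"              # e.g. "light.kitchen"

print(classify_ref("00:12:4b:00:01:02:03:04"))  # ieee
print(classify_ref("0x2DC7"))                   # nwk
print(classify_ref("light.kitchen"))            # entity
```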


Once I get some time, I'll re-pair some devices, create a new backup, reset the coordinator, and then try a restore to see if it is successful.

Is the nvram_reset command supported in your build?


Ok, I added the NVRAM-related operations: backup, restore and reset. I have only tested backup.
For safety, I added a backup step to the restore and reset operations.
(The commands are znp_nvram_backup, znp_nvram_restore and znp_nvram_reset; they have been added to the README as well.)


We're in business! I didn't get to try out the NVRAM reset command, simply because the restore worked on the first try 🙂
This was my environment:
Backup → lower firmware version (the backup contained 4 Sengled bulbs, 1 IKEA signal repeater, and 1 smart outlet)
ZHA → deleted both the integration and zigbee.db, rebooted, and installed a fresh instance of ZHA with no devices
Restore → issued the restore command from my backup and restarted (below is roughly what I saw in the log)

"Write done, call pre_shutdown(). Restart the device/HA after this."
2022-01-07 15:33:31 DEBUG (MainThread) [custom_components.zha_custom] module is <module 'custom_components.zha_custom' from '/config/custom_components/zha_custom/__init__.py'>
2022-01-07 15:33:42 ERROR (MainThread) [homeassistant.core] Error executing service: <ServiceCall zha_custom.execute (c:9a7d3a5f91bc24e161808b9d82257dea): command=znp_restore>
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/core.py", line 1511, in catch_exceptions
    await coro_or_task
  File "/usr/src/homeassistant/homeassistant/core.py", line 1530, in _execute_service
    await handler.job.target(service_call)
  File "/config/custom_components/zha_custom/__init__.py", line 61, in custom_service
    await handler(
  File "/config/custom_components/zha_custom/znp.py", line 109, in znp_restore
    await app._znp.pre_shutdown()
AttributeError: 'ZNP' object has no attribute 'pre_shutdown'

On restart → I saw one bulb join, and then after a few minutes a total of 3 bulbs and 1 signal repeater had joined. 1 bulb and 1 smart outlet did not join automatically, but I just power cycled those devices for a few seconds and they were added back.

So far it seems to be working from what I have tested!


Great news.

Regarding the procedure (to know if/how the README.md should be updated):

  • is it essential to add and remove the integration and the database?

[Edit: I amended them anyway and added a link to your post]

Regarding the delay in "rejoining" the network: the Zigbee specification requires devices to "back off" when they have difficulty communicating, so as not to flood the network. I believe the requirement is to wait 15 minutes before trying again, but not all devices do this the same way.

The 'pre_shutdown()' call is apparently a precaution to shut down the database correctly in this case. I added an extra debug message to show all methods in the ZNP instance, and in my case it lists 'pre_shutdown'. (I am not calling that method myself as I don't want to mess too much with my home network.)

Anyway, your test confirms that a user can avoid setting up a command line interface (CLI) on another computer to restore their ZNP network, which will make the operation easier to perform for most users.

Thanks for testing!


To make a daily backup for ZNP devices, I added the following automation to my setup.
It backs up to 'local/nwk_backup_DAY.json' and 'local/nvram_backup_DAY.json'.

alias: Daily ZNP Backup - Monthly rotation
description: Backup ZNP Zigbee configuration, monthly rotation
trigger:
  - platform: time
    at: '04:00'
condition: []
action:
  - service: zha_custom.execute
    data:
      command: znp_backup
      command_data: '{{ now().strftime("_%d") }}'
  - service: zha_custom.execute
    data:
      command: znp_nvram_backup
      command_data: '{{ now().strftime("_%d") }}'
mode: restart
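The strftime template in the automation above produces a two-digit day-of-month suffix, which is what gives the monthly rotation (day 8 of each month overwrites last month's day-8 file). The same suffix can be reproduced in plain Python:

```python
from datetime import datetime

# strftime("_%d") yields "_01".."_31", so backup files rotate monthly.
suffix = datetime(2022, 1, 8).strftime("_%d")
print("nwk_backup" + suffix + ".json")  # nwk_backup_08.json
```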

I don't believe it's necessary to delete the db and the integration, as that leads to having to rename all the devices. I can test again by just doing a backup, an NVRAM reset and a restore, and seeing if that works.

If you can test it, I am sure there will be some grateful people.

I saw a video where, using the CLI interface, the data was migrated from one coordinator to another, and there was no mention of removing the database.
The integration was disabled though, and the port and Zigbee device type were changed. But in this case I don't think that disabling the integration will make it work.

I suppose this could be automated even further when migrating from/to devices with different "ports". But first, the other backup and restore methods need to be added.

Yeah, prior to this, if you wanted to make a backup on HA, the ZHA integration needed to be disabled and you needed to SSH into HA to access the coordinator (the other option is using another environment altogether). For zha_custom, ZHA needs to be enabled, otherwise the commands do not work.

I just did an NVRAM reset and here are the logs (everything looks okay; the NVRAM backup was created successfully as well):

2022-01-08 09:09:45 INFO (MainThread) [custom_components.zha_custom] Running custom service: <ServiceCall zha_custom.execute (c:d0c61e312e531a68ba6ddcde5e90dec6): command=znp_nvram_reset>
2022-01-08 09:09:45 DEBUG (MainThread) [custom_components.zha_custom] module is <module 'custom_components.zha_custom' from '/config/custom_components/zha_custom/__init__.py'>
2022-01-08 09:09:45 INFO (MainThread) [custom_components.zha_custom.znp] Reading NVRAM from device
2022-01-08 09:09:59 INFO (MainThread) [custom_components.zha_custom.znp] Saving NVRAM to '/config/custom_components/zha_custom/local/nvram_backup_20220108_090945.json'
2022-01-08 09:09:59 INFO (MainThread) [custom_components.zha_custom.znp] NVRAM backup saved to '/config/custom_components/zha_custom/local/nvram_backup_20220108_090945.json'
2022-01-08 09:09:59 INFO (MainThread) [custom_components.zha_custom.znp] Reset NVRAM

Rebooted and did an NVRAM restore (devices did not come back online after a restart):

2022-01-08 09:19:12 INFO (MainThread) [custom_components.zha_custom] Running custom service: <ServiceCall zha_custom.execute (c:9aba205011eaf778e1f71555824cc4d0): command=znp_nvram_restore>
2022-01-08 09:19:12 DEBUG (MainThread) [custom_components.zha_custom] module is <module 'custom_components.zha_custom' from '/config/custom_components/zha_custom/__init__.py'>
2022-01-08 09:19:12 INFO (MainThread) [custom_components.zha_custom.znp] Reading NVRAM from device
2022-01-08 09:19:27 INFO (MainThread) [custom_components.zha_custom.znp] Saving NVRAM to '/config/custom_components/zha_custom/local/nvram_backup_20220108_091912.json'
2022-01-08 09:19:27 INFO (MainThread) [custom_components.zha_custom.znp] NVRAM backup saved to '/config/custom_components/zha_custom/local/nvram_backup_20220108_091912.json'
2022-01-08 09:19:27 INFO (MainThread) [custom_components.zha_custom.znp] Restoring NVRAM from '/config/custom_components/zha_custom/local/nvram_backup.json'
2022-01-08 09:19:27 ERROR (MainThread) [homeassistant.core] Error executing service: <ServiceCall zha_custom.execute (c:9aba205011eaf778e1f71555824cc4d0): command=znp_nvram_restore>
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/core.py", line 1511, in catch_exceptions
    await coro_or_task
  File "/usr/src/homeassistant/homeassistant/core.py", line 1530, in _execute_service
    await handler.job.target(service_call)
  File "/config/custom_components/zha_custom/__init__.py", line 73, in custom_service
    await handler(
  File "/config/custom_components/zha_custom/znp.py", line 180, in znp_nvram_restore
    nvram_obj = json.load(f)
  File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
    return loads(fp.read(),
io.UnsupportedOperation: not readable

Rebooted and did a network restore:

2022-01-08 09:26:45 INFO (MainThread) [custom_components.zha_custom] Running custom service: <ServiceCall zha_custom.execute (c:6d70b0fa91d29d99c0be404ce7905b91): command=znp_restore>
2022-01-08 09:26:45 DEBUG (MainThread) [custom_components.zha_custom] module is <module 'custom_components.zha_custom' from '/config/custom_components/zha_custom/__init__.py'>
2022-01-08 09:26:55 INFO (MainThread) [custom_components.zha_custom.znp] Restore from '/config/custom_components/zha_custom/local/nwk_backup.json'
2022-01-08 09:26:55 INFO (MainThread) [custom_components.zha_custom.znp] Validating backup contents
2022-01-08 09:26:55 INFO (MainThread) [custom_components.zha_custom.znp] Backup contents validated
2022-01-08 09:26:55 INFO (MainThread) [custom_components.zha_custom.znp] Writing to device
2022-01-08 09:27:34 DEBUG (MainThread) [custom_components.zha_custom.znp] List of attributes/methods in znp ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_app', '_config', '_listeners', '_port_path', '_skip_bootloader', '_sync_request_lock', '_uart', '_unhandled_command', '_znp_config', 'callback_for_response', 'callback_for_responses', 'capabilities', 'capture_responses', 'close', 'connect', 'connection_lost', 'connection_made', 'detect_zstack_version', 'frame_received', 'load_network_info', 'network_info', 'node_info', 'nvram', 'remove_listener', 'request', 'request_callback_rsp', 'reset', 'set_application', 'version', 'wait_for_response', 'wait_for_responses', 'write_network_info']
2022-01-08 09:27:34 INFO (MainThread) [custom_components.zha_custom.znp] Write done, call pre_shutdown(). Restart the device/HA after this.
2022-01-08 09:27:34 ERROR (MainThread) [homeassistant.core] Error executing service: <ServiceCall zha_custom.execute (c:6d70b0fa91d29d99c0be404ce7905b91): command=znp_restore>
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/core.py", line 1511, in catch_exceptions
    await coro_or_task
  File "/usr/src/homeassistant/homeassistant/core.py", line 1530, in _execute_service
    await handler.job.target(service_call)
  File "/config/custom_components/zha_custom/__init__.py", line 73, in custom_service
    await handler(
  File "/config/custom_components/zha_custom/znp.py", line 109, in znp_restore
    await app._znp.pre_shutdown()
AttributeError: 'ZNP' object has no attribute 'pre_shutdown'

After the network restore, the following devices came back online immediately:
1/4 Sengled A19 RGB bulbs
1/1 IKEA signal repeater
1/1 smart outlet

After waiting about 5 minutes, the rest of the Sengled bulbs came back online!

Conclusion: deleting the ZHA integration and zigbee.db is not needed, AND a power cycle of mains-powered devices is not necessary (I don't have any battery-powered devices to test with, but I assume those would rejoin if you wake them manually). See the procedure below:

The procedure should be the following (follow “a” steps if migrating to a new coordinator):

  1. Back up using the znp_backup command in the zha_custom service. Verify that the nwk_backup.json file is generated in the local directory.
  2. Remove the original coordinator from your system. Insert the new coordinator.
    2a) Remove the ZHA integration from Home Assistant (only needed when migrating to a new coordinator with a different serial path; the alternative is to edit HA's configuration directly to update the current integration's serial path and baud rate).
    2b) Rename/move the zigbee.db file (should not be needed; if you do this, the restore will not remember entity names).
  3. Restart Home Assistant.
    3a) Add the ZHA integration to Home Assistant (needed if you removed it).
  4. Restore using the znp_restore command (if you used a custom file name for the backup, make sure you rename it back to nwk_backup.json).
  5. Check the logs (currently the pre_shutdown call fails, as in the first successful test, but that is not critical).
  6. Restart HA.
  7. Check that everything is OK.

NOTES:

  1. Devices may take a while to rejoin the network, as the Zigbee specification requires them to "back off" in case of communication problems
  2. You may speed up the process by power cycling devices
  3. Devices may not be instantly responsive, because the Zigbee mesh needs to be recreated

You should definitely add this automation as an example on the GitHub page so other users can use it as a template!

I did something better: I created a blueprint to add the backup automation: Open your Home Assistant instance and show the Daily Backup Blueprint pre-filled.

I updated the procedure in the README (thank you for your adjustments) and mentioned the blueprint as well.


Note that the NVRAM restore did not succeed because the nvram_backup.json file did not exist (it is created only by an nvram_backup service call without the command_data parameter). You'd need to copy your dated file to nvram_backup.json in your case.