"zha-toolkit" - a big set of Zigbee commands on top of ZHA/zigpy

Yes, I forgot about the binding - you need to bind the devices to the coordinator.
Also use zha-toolkit for that because the UI will not allow you to do it.

I did:

service: zha_toolkit.bind_ieee
data:
  ieee: 50:0b:91:40:00:01:f1:e7
  command_data: 00:21:2e:ff:ff:05:cc:60
  cluster: 65281
  attribute: 84
  event_done: zha_done

but the cluster 0xff01, attribute 0x0054 does not appear in zha_toolkit.conf_report_read.

Would it be better to unlink the device from ZHA and add it again?

If you want to check the configured bindings, you need to use zha_toolkit.binds_get.
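For example, something like this should list the device's binding table (event_done makes the result available in the fired event):

service: zha_toolkit.binds_get
data:
  ieee: 50:0b:91:40:00:01:f1:e7
  event_done: zha_done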

Regarding the binding itself, only clusters are bound, not individual attributes.
zha_toolkit.bind_ieee does not use the “attribute” parameter. It does use the undocumented “cluster_id” parameter (not “cluster”), and when it is not provided it only checks a list of internal “default” clusters.

So you need to write “cluster_id”, not “cluster”. Correction: sorry, “cluster” is the correct parameter after all; “cluster_id” is only used internally.

EDIT: You should also see the result of the binding by observing the event (‘zha_done’)
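For example, you can listen for zha_done in Developer Tools → Events, or trigger an automation on it; a minimal sketch (the notification action is only an illustration):

trigger:
  - platform: event
    event_type: zha_done
action:
  - service: persistent_notification.create
    data:
      message: "{{ trigger.event.data }}"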

Here is the result for:

service: zha_toolkit.bind_ieee
data:
  ieee: 50:0b:91:40:00:01:f1:e7
  command_data: 00:21:2e:ff:ff:05:cc:60
  cluster: 65281
  event_done: zha_done

But with service: zha_toolkit.binds_get the cluster 65281 is not bound to the coordinator.
Could it be because 65281 is a manufacturer-specific cluster?

The Bind_req command does not have a manufacturer parameter: zigpy/types.py at 297a2b0f364fe7a31de3b4966d70d1a67d432358 · zigpy/zigpy · GitHub .

That also matches the Zigbee specification.

The ‘result’ field is empty which suggests that no binding request was sent.

This seems to call for debugging. Hopefully the debug output provides enough information.

You can enable the debug level by calling this service:

service: logger.set_level
data:
  custom_components.zha_toolkit: debug

zha_toolkit.scan_device can also help to better understand what the structure is.
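For example (using the same device as above; the scan results normally end up in /config/scans and in the event data):

service: zha_toolkit.scan_device
data:
  ieee: 50:0b:91:40:00:01:f1:e7
  event_done: zha_done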

The question is why the code does not find the cluster in one of the ‘ep.out_clusters’ or ‘ep.in_clusters’. Their contents are not logged in bind_ieee, but scan_device loops over all those values and reports “Scanning input cluster …” and “Scanning output cluster …” in the log when debugging is enabled, which allows you to verify the cluster’s presence there, and normally also in the JSON output ( /config/scans ).

/config/scans for my switch gives this information:

"0xff01": {
          "cluster_id": "0xff01",
          "title": "Sinopé Technologies Manufacturer specific",
          "name": "sinope_manufacturer_specific",
          "attributes": {},
          "commands_received": {
            "0x01": {
              "command_id": "0x01",
              "command_name": "1",
              "command_arguments": "not_in_zcl"
            },
            "0x0f": {
              "command_id": "0x0f",
              "command_name": "15",
              "command_arguments": "not_in_zcl"
            }
          },
          "commands_generated": {
            "0x00": {
              "command_id": "0x00",
              "command_name": "0",
              "command_args": "not_in_zcl"
            },
            "0x01": {
              "command_id": "0x01",
              "command_name": "1",
              "command_args": "not_in_zcl"
            }
          }
        }
      },

In the log I have this for zha_toolkit.bind_ieee:
[custom_components.zha_toolkit.binds] 0xe9e7: got the [] endpoints for 65281 cluster
[custom_components.zha_toolkit.binds] 0xe9e7: skipping ff0104X cluster as non present
[custom_components.zha_toolkit.binds] 0xe9e7: got the [1] endpoints for 65281 cluster

I pre-released v0.8.31 - it should fix this (but I did not test).

The cluster was not bound because the coordinator does not have a matching cluster - something that is expected for device-to-device bindings but is not needed when binding to the coordinator.

I also updated the messages to indicate in/out cluster, and success is now set to false if nothing was bound.

Thank you @le_top. I’ve tested and it works perfectly for single_click_on, single_click_off, double_click_on, double_click_off, long_press_on and long_press_off. I’ll be able to find out all the manufacturer cluster attributes that have reporting to improve my quirk for Sinopé devices.


Just updated to HA 2023.2.0 and zha_toolkit is no longer recognized as a valid service.


I just updated a system to 2023.2.1 to check this - I do not have an issue.

Check your logs to see if there is any message regarding zha_toolkit indicating some issue.

There can be several causes. For example, one user had two integrations managing the USB stick, and in another case zigpy/ZHA itself did not start (the user rebooted and the system worked).

Upgrading to 2023.2.1 fixed it: the zha_toolkit services are now available again in the “Developers Tools” → “Service” menu.

I reliably do not get the zha_toolkit services. It appears that, despite the dependency in the manifest.json, zha_toolkit is loading before ZHA:

2023-02-06 13:48:00.541 DEBUG (MainThread) [custom_components.zha_toolkit] hass.data['zha']['zha_gateway'] missing, not initializing zha_toolkit - zha_toolkit needs zha (not deconz, not zigbee2mqtt).

@tube0013 Can you see in the log that zha_toolkit is loading before zha?
What version of HA are you using?

EDIT: I released v0.8.33 - it no longer expects “the gateway” to be defined in ZHA to do its own initialisation, which should hopefully allow zha-toolkit to be set up. If that does not fix it, zha-toolkit will have to be set up without testing anything about ZHA at startup at all, and report that ZHA is missing later (on every zha-toolkit service call).

Hey.

I’ve created a sensor with attr_read/allow_create: true, but how do I remove that sensor?

  - service: zha_toolkit.execute
    data:
      command: attr_read
      ieee: 1f:ff:00:00:00:00:00:00
      cluster: 0x0000
      attribute: 0x0000
      state_id: sensor.new_sensor
      state_value_template: value / 100
      allow_create: true

On the Entities page (Link to Entities – My Home Assistant) you can tick the checkbox at the start of the line for an entity. A menu will appear at the top where the “red button” allows you to delete the selected entities.

I just had a chance to update, and it seems to be loading reliably now. Thanks!

Hi,
Lately my Zigbee coordinator sometimes goes offline.
I tried setting up an automation to warn me about that using the device_offline trigger, but that doesn’t seem to help. Can zha_toolkit help here?

An indirect way of checking whether your coordinator is offline is to check the last_seen value of one or more devices that should always be online.

I gave an example of how to access this data here:

and there is also an example in the zha_toolkit repository:

That could be adjusted to find the most recent last_seen value, update a state and then use that state to generate an alert if the date is too old.
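A minimal sketch of such an alert, using a simpler variant that watches an entity's last_updated timestamp instead of the Zigbee last_seen value; the entity id, the one-hour threshold and the notify service are placeholders for a device that normally reports at least hourly:

trigger:
  - platform: template
    value_template: >
      {{ (now() - states.sensor.living_room_temperature.last_updated).total_seconds() > 3600 }}
action:
  - service: notify.notify
    data:
      message: "Zigbee coordinator may be offline: no update from the temperature sensor for over an hour."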

Hello @le_top. Thank you for your great work. Is it possible to convert the Zigbee2MQTT room load-balancing blueprint from napalm to ZHA? I use Danfoss Ally thermostats in my whole house (currently 11, later 15), but sometimes grouped heaters heat unevenly. With Z2M and room load balancing they heat evenly, but I have a lot of issues with Z2M and would like to go back to ZHA.

Zigbee2MQTT - Danfoss Ally TRV Room Load Balancing - Blueprints Exchange - Home Assistant Community (home-assistant.io)

It is possible to implement this under ZHA, but some work is needed to transform it, mainly to handle the load values.

I can think of three ways to do it (there are more, but these are the ones I would consider):

  1. Work purely with a YAML automation (/blueprint) and zha-toolkit features to read the load values;
  2. Extend the existing Danfoss quirk/zha_device_handlers (+ZHA?) to add the load values as a state attribute, then use this attribute in a YAML script;
  3. Implement part of the code in Python, and mix it with a YAML automation. It is easy and dynamic to add a new service to ZHA-Toolkit. There could be a set of Danfoss-specific services, and we would need one that accepts a list of devices that need load balancing; the Python code would take care of getting the current load indicators, computing the totals and sending them to the devices.
     It would then suffice to call this service on a regular basis.

1. Using YAML and ZHA-Toolkit features.

a. One should verify/configure the load reporting configuration of the Danfoss valves. This is a one-time configuration that could be added to the existing Danfoss configuration script.
A correct configuration ensures that the valves report their load without a need to send a Zigbee attribute read request to the valves.
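As a hedged sketch of (a), reporting could be configured with zha_toolkit.conf_report; the cluster is the standard Thermostat cluster, but the attribute id, the manufacturer parameter/code and the intervals below are assumptions to be checked against the Danfoss documentation/quirk:

- service: zha_toolkit.conf_report
  data:
    ieee: '{{ trv_ieee }}'      # IEEE address (or entity) of the Danfoss valve
    cluster: 0x0201             # Thermostat cluster
    attribute: 0x404A           # placeholder: the Danfoss "load_room_mean" attribute id
    manf: 0x1246                # placeholder: Danfoss manufacturer code
    min_interval: 60
    max_interval: 3600
    reportable_change: 1
    event_done: zha_done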
b. The load values would then end up in the cached values in the zigpy/zha database. They can be read using a service call like this (the example is for reading a cached temperature value). The key elements in this example are (1) use_cache, which avoids sending a read request to the valve, and (2) writing the result directly to a state value or state attribute, which is then usable in the automation.

  - alias: Try to get more precise temperature (should work if zigbee temperature
      sensor)
    service: zha_toolkit.attr_read
    data:
      ieee: '{{ temp_sensor_id }}'
      use_cache: true
      cluster: 1026
      attribute: 0
      state_id: '{{ temp_sensor_id }}'
      state_attr: best_val
      state_value_template: value/100

c. Compute the total load value - use a repeat loop in YAML and the attr_read service as shown in (b) to get the load values, and compute the required sum. After the sum, loop again and use attr_write to send this value to the valves.
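A rough sketch of (c), assuming the loads were already read (with use_cache) into a load_room_mean attribute on each valve's climate entity as in (b); the entity ids, IEEE addresses, attribute id and manufacturer code are placeholders:

- variables:
    trv_entities: [climate.trv_living_1, climate.trv_living_2]
    trv_ieee: ['00:11:22:33:44:55:66:77', '00:11:22:33:44:55:66:88']
    load_total: >
      {% set ns = namespace(total=0) %}
      {% for e in trv_entities %}
        {% set ns.total = ns.total + (state_attr(e, 'load_room_mean') | float(0)) %}
      {% endfor %}
      {{ ns.total | round(0) | int }}
- repeat:
    for_each: '{{ trv_ieee }}'
    sequence:
      - service: zha_toolkit.attr_write
        data:
          ieee: '{{ repeat.item }}'
          cluster: 0x0201        # Thermostat cluster
          attribute: 0x404B      # placeholder: the attribute that receives the room load
          attr_val: '{{ load_total }}'
          manf: 0x1246           # placeholder: Danfoss manufacturer code
          event_done: zha_done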

2. Update the zha_quirk

This implies starting from the existing quirk and making the necessary changes to make the load value accessible from Home Assistant. I know that ZHA has evolved recently to make some of that easier to implement.

3. Implement part of the code in Python

One could start by adding a user service as shown in the _user.py example and documented here.

Zha-toolkit reloads the code before calling it so one can evolve and test the code without restarting HA.

The command_data argument/field can be used to provide the list of devices in one room. The Python code would then loop over this list and either use the ‘attr_read’ service by calling it internally with the proper arguments, or implement the reads more explicitly and check the cache using code as in zcl_attr.py.
