Improving automation reliability

Yes, this will address my use case exactly, thanks!

In order to test it, I intend to set up a couple of scenarios where validation is guaranteed to fail.

Qq: can the trace of the automation show the retry logic, or do I need to see that in the logs elsewhere?

Would this work for each entity in a group, or is it better to specify each one?

Qq: can the trace of the automation show the retry logic, or do I need to see that in the logs elsewhere?

The trace of the automation doesn’t show the retry logic. However, log entries are created. There is a log line for each retry with the reason for the failure.

Would this work for each entity in a group, or is it better to specify each one?

Short answer: it works for each entity in the group.

Longer version:
There are 2 types of groups:

  1. The group component’s entities, e.g. group.main_floor_lights. These are referred to as the “old style” groups in the documentation.
  2. The group platform’s entities, e.g. light.main_floor_lights, which can be created via the “group” helper UI (there is also a YAML version of it).

A group of either type gets expanded (recursively, if needed), and retry is called individually on each entity in the group (to isolate failures).
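
For illustration, here is a rough sketch of wrapping a call that targets the group entity from the example above. The retry.call layout (the wrapped service name under data) is my reading of the integration’s README, so please verify it there:

service: retry.call
data:
  service: light.turn_on
  entity_id: light.main_floor_lights  # group entity; it is expanded and each member light is retried on its own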

Thanks, this should be integrated as a feature in HA. It’s the best add-on.


Hi, I’ve used it to turn a switch on and off at night. It seems to work, as the switch does turn on and off, but I get “repairs” notifications:

## Service call failure
homeassistant.turn_on(entity_id=binary_sensor.salonnord_input) failed after 7 retries. Check the log for additional information.

My automation looks like this:

The automation trace looks alright:

But I get these “repairs” messages:

(sorry about the multiple messages, as a new user I’m only allowed 1 image per post…)


Please open an issue here. The form will guide you on how to collect and upload the required information for debugging the issue. The main thing that is missing is the error from the log with the reason for the retries.

Thank you, I just created one: https://github.com/amitfin/retry/issues/58

The Tapo stuff in the log is just noise due to my camera being physically unplugged when I’m home most of the time.

edit: thanks for your help on the ticket!
For those encountering the same problem: in the target selection, you should select the switch entity only and not the device. I’m using a Shelly 1, and the device includes not only a switch but also some other entities.
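
As a sketch in YAML terms (switch.shelly1_relay is just a placeholder name), the target should point at the switch entity itself rather than at the device:

target:
  entity_id: switch.shelly1_relay  # the relay entity only
  # Avoid device_id here: the Shelly device also exposes other entities
  # on which the retried call would fail.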

Just wanted to share that this integration works great and is simply awesome. Thank you for your efforts!


I’m either not clear on the validation option or I’m getting an error. I’m testing a switch (switch.test). The integration works great, but if I add the validation option, I get an error when checking whether switch.test is on to stop the retry attempts.

I’m using “[[ states(‘switch.test’) = “on” ]]” and I get an error: invalid template (TemplateSyntaxError: unexpected char ‘‘’ at 10) for dictionary value @ data[‘validation’].

It should be "==" (a double equal sign instead of a single one).
However, using the expected_state parameter is the simpler approach for this use case.
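
To make the two options concrete for the switch.test case above, here is a hedged sketch; validation and expected_state are the parameters discussed in this thread, while the surrounding retry.call layout is my assumption, so check the integration’s documentation:

service: retry.call
data:
  service: switch.turn_on
  entity_id: switch.test
  # Option 1: template validation (note the double equal sign):
  validation: "[[ states('switch.test') == 'on' ]]"
  # Option 2, simpler for this case: just state the expected state instead:
  # expected_state: "on"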

My apologies, I do have "==" and am still getting that error. But if the expected state is checked before each attempt, that will work. I understood that it was only checked before the first attempt, and that the defined retries then happened without using validation.

Added a paragraph in the documentation to make it clearer. Thanks!

Thank you for that clarification!

This integration saved my life, due to the Wi-Fi hiccups which lead to unavailable switches, etc.
The only disturbing thing is the generated logs, in case a switch remains unavailable after the end of the 7 retries.
I use only switch… entities in my retry.actions configuration but still get those logs.
Is it possible to avoid the log generation?

@jolas, happy to hear that you are finding the integration useful :grinning:
You can disable logging by using log filters:
Here is a simple configuration to disable all log entries from the integration. You can fine-tune it as needed.

logger:
  filters:
    custom_components.retry:
      - ".*"

Thank you!

HA 2024.8 renamed “service” to “action” (post). This broke retry.actions in version v2.x. Please upgrade to the retry integration v3.0.2 (or newer) once you upgrade to HA 2024.8.
Note that retry v3 is fully backward compatible with retry v2, so no additional changes are needed.
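
For anyone unsure what the rename means in practice, this is the key change inside an automation’s action list (light.living_room is only a placeholder; the old spelling keeps working thanks to backward compatibility):

# Before HA 2024.8:
- service: light.turn_on
  target:
    entity_id: light.living_room
# HA 2024.8 and later:
- action: light.turn_on
  target:
    entity_id: light.living_room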