WTH - Why can't experienced users disable repairs?

Every time I reload my manually configured MQTT entities I get 5 repairs in Settings, and it takes 15 clicks to close them.

(screenshot: 20220928_183638)

All automations are configured like this one:

automation:
  # Keep old and new hallway in sync
  - alias: switches_og_flur_thermostat
    trigger:
      - platform: state
        entity_id: switch.eth008_og_3 # upper floor hallway, old
    action:
      - service: switch.turn_{{ trigger.to_state.state }}
        entity_id: switch.eth008_og_6 # upper floor hallway, new

Please add the ability to disable repairs.
I know how to repair my system.

Thanks for reading.

automation:
  # Keep old and new hallway in sync
  - alias: switches_og_flur_thermostat
    trigger:
      - platform: state
        entity_id: switch.eth008_og_3 # upper floor hallway, old
        not_to: ['unknown', 'unavailable']
    action:
      - service: switch.turn_{{ trigger.to_state.state }}
        entity_id: switch.eth008_og_6 # upper floor hallway, new

Pretty easy fix, no? Then your automation would stop logging errors and raising repairs all the time.
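To spell out why the repair fires: when switch.eth008_og_3 drops to unknown or unavailable, the template renders a service name like switch.turn_unavailable, which does not exist. For completeness, here is a sketch of an equivalent guard written as a template condition instead of not_to (using the entity IDs from the example above; either form should avoid the error):

automation:
  # Keep old and new hallway in sync
  - alias: switches_og_flur_thermostat
    trigger:
      - platform: state
        entity_id: switch.eth008_og_3 # upper floor hallway, old
    condition:
      # Only act on real on/off changes; unknown/unavailable would
      # otherwise render a non-existent service name below.
      - condition: template
        value_template: "{{ trigger.to_state.state in ['on', 'off'] }}"
    action:
      - service: switch.turn_{{ trigger.to_state.state }}
        entity_id: switch.eth008_og_6 # upper floor hallway, new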

2 Likes

Thanks, I know, but this is a feature request to disable things that some users don’t need.

But you are no longer the “target audience”. :wink:

Also, this:

Click on SUBMIT below to confirm you have fixed this automation.

seems a bit passive-aggressive.

What if, as in your case, it works the way it is and you don’t want to “fix” it?

I agree we should be able to ignore repairs if we want instead of being hounded about them.

1 Like

Changed to WTH. :smiley:

1 Like

That is a valid repair; the example given raises errors and should be fixed.

Sounds like repairs are actually useful in this case :grimacing:

Repairs is core functionality that cannot be disabled, as all integrations should be able to rely on its existence (to be able to report issues). It is a similar thing to how all integrations rely on the existence of devices or entities.

Repairs can be ignored when they are version bound (e.g., something that breaks in a future release), but when the underlying problem is already causing issues, like in this very example, there is no good reason to ignore it.

We do monitor for, and do not allow, issues that cannot be fixed or otherwise ignored (it is not a notification/messaging platform).

If there are issues that are not fixable or that are incorrect, reports of those are absolutely welcome and I would be happy to address them immediately :+1:

…/Frenck

2 Likes

I also get 8 repairs, requiring 24 clicks to dismiss, every time I start the system.

However, for me there is nothing to fix…

The automation “Chime - Freezer Door state” (automation.chime_freezer_door_state) has an action that calls an unknown service: siren.turn_on.

But this service does exist and works fine. Each of the 8 automations mentioned works fine when triggered and when executed manually.

Clearly the repair is reporting things before the system has completed startup… In this case the system runs in Docker containers, with the siren device handled by zwave-js-ui using the WebSocket interface rather than MQTT.

More annoying is that the repairs cannot be disabled. It wouldn’t be quite so bad if they could all be dismissed with a single click.

It’s also a relatively new “feature”, as the code has been in place for years and it’s only started happening in the last few months.

Whatever is creating the entity that is using that service is most likely not creating it properly.

This is either a bug in the integration creating the siren entity or a bug in the built-in siren integration.

Presumably most of the startup runs in parallel, and the automation gets triggered and calls the siren before it has been set up… Looking at the trace and logs, I see that the siren setup does indeed start 2 seconds after the automation triggered and got the failure. I’m not sure how this would be classified as not being set up correctly, or as a bug… it just sounds like a race condition during startup. Once started, the system does exactly what is expected.

Any suggestions for how I can mitigate this? Is there a mechanism I can use to make these automations wait until all integrations have finished initialising, or to add a test to check whether the service call is available? Or maybe prioritise the siren setup so it happens earlier in the startup?

The mitigations aren’t required to make it work; they are just to stop the repair messages…
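For reference, one way to keep the automation from making the unknown-service call during startup is to gate the action on the siren entity actually being available. This is only a sketch: the door sensor and siren entity IDs below are hypothetical placeholders (the thread doesn’t name them), and checking the entity state is a proxy for the integration having finished setting up.

automation:
  - alias: "Chime - Freezer Door state"
    trigger:
      - platform: state
        entity_id: binary_sensor.freezer_door # hypothetical door sensor
        to: "on"
    condition:
      # Skip the action while the siren entity is still unknown/unavailable,
      # e.g. during startup before zwave-js-ui has connected.
      - condition: template
        value_template: "{{ states('siren.freezer_chime') not in ['unknown', 'unavailable'] }}"
    action:
      - service: siren.turn_on
        target:
          entity_id: siren.freezer_chime # hypothetical siren entity

A wait_template with a short timeout before the service call would be another option if you would rather delay the chime than skip it.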

Thanks

This is a bug. HA should be fully started before anything triggers.

They do wait. What integration are you using that makes the siren?

It’s an Aeotec Siren 6 using the Z-Wave integration, configured to use the WebSocket interface to the zwave-js-ui Docker service.

Sounds like the grounds for an issue on GitHub then.

Okay, I’ll head over there then. Presumably core is the most appropriate place…

Thanks for your thoughts on the matter!

/nick