Every time I reload my manually configured MQTT entities, I get 5 repairs in Settings, and it takes 15 clicks to close them.
All my automations are configured like this one:
```yaml
automation:
  # Keep the old and new Flur (hallway) switches in sync
  - alias: switches_og_flur_thermostat
    trigger:
      - platform: state
        entity_id: switch.eth008_og_3 # OG Flur (old)
    action:
      - service: "switch.turn_{{ trigger.to_state.state }}"
        entity_id: switch.eth008_og_6 # OG Flur (new)
```
Please add the ability to disable repairs.
I know how to repair my system.
```yaml
automation:
  # Keep the old and new Flur (hallway) switches in sync
  - alias: switches_og_flur_thermostat
    trigger:
      - platform: state
        entity_id: switch.eth008_og_3 # OG Flur (old)
        not_to: ['unknown', 'unavailable']
    action:
      - service: "switch.turn_{{ trigger.to_state.state }}"
        entity_id: switch.eth008_og_6 # OG Flur (new)
```
Pretty easy fix, no? Then your automation would stop logging errors and raising repairs all the time.
That is a valid repair; the example given raises errors and should be fixed.
Sounds like repairs are actually useful in this case
Repairs is core functionality that cannot be disabled, as all integrations should be able to rely on its existence (to be able to report issues). It is similar to how all integrations rely on the existence of devices and entities.
Repairs can be ignored when they are version-bound (e.g., something that will break in a future release), but when a repair reports a current issue, as in this very example, there is no good reason to ignore it.
We monitor for, and do not allow, issues that cannot be fixed or otherwise dismissed (Repairs is not a notification/messaging platform).
If there are issues that are not fixable, or that are incorrect, reports of those are absolutely welcome and I would be happy to address them immediately.
I also get 8 repairs requiring 24 clicks to get rid of every time I start the system.
However, for me there is nothing to fix…
The automation “Chime - Freezer Door state” (automation.chime_freezer_door_state) has an action that calls an unknown service: siren.turn_on.
But this service does exist, and works fine. Each of the 8 automations mentioned works fine when triggered and when executed manually.
Clearly the repair is reporting things before the system has completed startup… In this case the system runs in docker containers, with the siren device handled by zwave-js-ui using the websocket interface rather than mqtt.
More annoying is that the repairs cannot be disabled. It wouldn't be quite so bad if they could all be dismissed with a single click.
It's also a relatively new "feature": the code has been in place for years, and it only started happening in the last few months.
Presumably most of the startup runs in parallel, and the automation gets triggered and calls the siren before it has been set up. Looking at the trace and logs, I see that the siren setup does indeed start 2 seconds after the automation triggered and got the failure. I'm not sure how this would be classified as a misconfiguration or a bug; it just sounds like a race condition during startup. Once started, the system does exactly what is expected.
Any suggestions how I can mitigate this? Is there a mechanism that I can use to make these automations wait until all integrations have finished initialising, or add a test to see if the service call is available? Or maybe prioritise the siren setup to be earlier in the startup?
The mitigations are not required to make it work; they are just to stop the repair messages…
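One possible mitigation (a sketch, not an official recipe — the trigger and siren entity IDs below are hypothetical placeholders for your actual entities): guard the action with a template condition so the automation simply skips runs that fire while the siren entity is still `unavailable` during startup. This mirrors the `not_to: ['unknown', 'unavailable']` approach suggested earlier in this thread, but applied as a condition on the target entity rather than on the trigger.

```yaml
automation:
  - alias: chime_freezer_door_state
    trigger:
      - platform: state
        entity_id: binary_sensor.freezer_door # hypothetical trigger entity
        to: 'on'
    condition:
      # Skip this run while the siren has not finished setting up yet,
      # so the service call never hits an entity that does not exist.
      - condition: template
        value_template: >
          {{ states('siren.freezer_chime') not in ['unknown', 'unavailable'] }}
    action:
      - service: siren.turn_on
        target:
          entity_id: siren.freezer_chime # hypothetical siren entity
```

This does mean a door event during those first few seconds of startup is silently dropped rather than queued; if you need the chime to fire once the siren comes up, a `wait_template` in the action sequence (with a timeout) would be the alternative trade-off.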