Automation that causes an undesired never-ending server restart cycle

homeassistant.update_entity

Did you forget to turn off polling on the integration? If you’re trying to limit your polls, you have to disable polling and then use an automation to force updates. Otherwise it’ll do both.
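
Something like this, as a minimal sketch (assuming polling has been disabled under the integration's system options, and using the OpenUV sensor entity as an example):

alias: Poll OpenUV hourly
description: Manually refresh the OpenUV sensor on a fixed schedule
trigger:
  - platform: time_pattern
    hours: "/1"
action:
  - service: homeassistant.update_entity
    target:
      entity_id: sensor.openuv_current_uv_index
mode: single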

Yeah, that question was an attempt at the Socratic method of analysis but I realized afterwards it didn’t read that way. :disappointed: :man_shrugging:t3:


The integration does not poll. The documentation says you have to update it manually.


What is responsible for the ‘forced update’ in those messages? The integration or your automation?

Me using Developer Tools → Services

service: homeassistant.update_entity
data: {}
target:
  entity_id: sensor.openuv_current_uv_index

The integration does not poll. The documentation says you have to update it manually.

Correct. There is a single request for the information when Home Assistant restarts, but otherwise updates have to be triggered from an automation.

The fact that there is a request when Home Assistant restarts means that the quota gets depleted quickly when there’s a bootloop.

Further update: the integration is still enabled and no reboots are happening (the automation is still disabled).

You said you got those messages after enabling the integration. It implies the integration hasn’t created them yet. The next obvious question is ‘Why not?’

I would expect that result because the integration isn’t being asked to update anything while it’s in its ‘maxed out quota’ state.

Are the integration’s entities present in Developer Tools > States? If so, what are their states?
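
You can also check them quickly in Developer Tools → Template with something like this (entity id is the one from your service call above):

{{ states('sensor.openuv_current_uv_index') }}
{{ states.sensor.openuv_current_uv_index.last_updated }}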


Interesting findings: my “last_triggered” is an hour back compared to last_updated,

which is the right time (when it was triggered) in my time zone (UI, Windows OS).

And by the way, the template listens every minute (but hopefully doesn’t execute/update the sensor/automation more than once every 10 minutes).
I wonder why it picks the time from the OS level.

Sun integration

| Attribute | Description |
|---|---|
| Next rising | Date and time of the next sun rising (in UTC). |
| Next setting | Date and time of the next sun setting (in UTC). |
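
Those attributes are in UTC, but they can be converted to local time with a template (Developer Tools → Template), for example:

{{ state_attr('sun.sun', 'next_rising') | as_datetime | as_local }}
{{ state_attr('sun.sun', 'next_setting') | as_datetime | as_local }}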

Gives me midnight at:
(screenshot 20.11.2023_22.01.33_REC)

off to bed ! :smile:

Atleast it’s “consistent” :neutral_face:
20.11.2023_22.34.07_REC

However, the Sun integration seems to be updating now and then, and in between.
This might be what beats the shit out of any running calculations.


My HA Community reply cooldown has now passed, so I can reply again.

I believe the answer to this is “because I’m over the quota.” I ran the service again manually after the quota reset and everything works as expected. No errors.

The Integration is being asked to do things. I’m just making the service call manually instead of it being automated within the automation.

Current status: Integration enabled, automation disabled, remaining quota is above 0, manual service updates happen as expected, no reboots.

I didn’t check this when the quota was exhausted, but they are showing now with expected values.

Well anyway, if you plan on using this then you should really reconsider the redesign, but with an additional counter.

Make a counter that counts the restarts and API calls.
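
For reference, a sketch of the counter helper the automations below assume (configuration.yaml, or create it as a helper in the UI; the entity id needs to match counter.openuv_api_calls):

counter:
  openuv_api_calls:
    name: OpenUV API calls
    initial: 0
    step: 1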

alias: Set Schedule for OpenUV
trigger:
- platform: time
  at: "00:00:00"
- platform: homeassistant
  event: start
action:
- if: "{{ trigger.platform == 'time' }}"
  then:
  - service: counter.reset
    target:
      entity_id: counter.openuv_api_calls
  else:
  - service: counter.increment
    target:
      entity_id: counter.openuv_api_calls
- service: input_select.set_options
  target:
    entity_id: input_select.update_openuv
  data:
    options: >
      {# change updates for the number of updates between sunrise and sunset #}
      {% set updates = 30 %}
      {# change include_sunset to True to include the sunset as the last trigger time #}
      {% set include_sunset = False %}
      {% set setting = (state_attr("sun.sun", "next_setting") | as_datetime | as_local).replace(day=now().day) %}
      {% set rising = (state_attr("sun.sun", "next_rising") | as_datetime | as_local).replace(day=now().day) %}
      {% set ns = namespace(items=[]) %}
      {% set inc = (setting - rising) / (updates - 1 if include_sunset else updates) %}
      {% for i in range(updates) %}
        {% set ns.items = ns.items + [ (rising + i * inc).strftime("%H:%M") ] %}
      {% endfor %}
      {{ ns.items }}

alias: Update OpenUV
description: ""
trigger:
  - platform: template
    value_template: "{{ now().strftime('%H:%M') in state_attr('input_select.update_openuv', 'options') }}"
action:
  - if: "{{ states('counter.openuv_api_calls') | int < 50 }}"
    then:
    - service: homeassistant.update_entity
      target:
        entity_id: sensor.openuv_current_uv_index
    - service: counter.increment
      target:
        entity_id: counter.openuv_api_calls

mode: single

Then you’ll never hit the issue.

Your answers support my hypothesis that the cause of the problem isn’t anything within the automation. The automation itself is as vanilla as it gets. It’s when it repeatedly prods the integration to poll for data during an ‘over-quota’ state that things go sideways. The integration appears to be responsible for loading Home Assistant to the point of unresponsiveness.

It would be interesting to see memory consumption and CPU load when the integration starts to misbehave.
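
One way to watch that is the System Monitor integration. A sketch of the YAML-style setup (depending on your Home Assistant version this may need to be set up through the UI instead):

sensor:
  - platform: systemmonitor
    resources:
      - type: processor_use
      - type: memory_use_percent
      - type: load_5m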


Agreed. I believe the integration is getting into a race after mishandling/misunderstanding the over-quota message.

Defensive code to prevent multiple attempts, like Petro points out, should help, but ultimately the fix will probably be something in the integration.

Any indication of that being reported in the issues for the openuv integration? If not, there should be :wink:

Something in the source lib. Also, it’s peTro. :slight_smile:


I know - sorry, autocorrect. I do most of these on my phone… I was mid-correction - you’re too fast, man. :slight_smile:


Already addressed in core.

The target is apparently the next release; not sure if it will make it.

It’s not exactly what you’re looking for, but apparently it adds an adjustable window for polling, so you don’t have to care about the issue listed above. Just tie it into sunrise/sunset.
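
Until then, the same idea can be approximated with a plain automation gated by a sun condition. A rough sketch (the 30-minute pattern and the entity id are just examples):

alias: Update OpenUV during daylight
trigger:
  - platform: time_pattern
    minutes: "/30"
condition:
  - condition: sun
    after: sunrise
    before: sunset
action:
  - service: homeassistant.update_entity
    target:
      entity_id: sensor.openuv_current_uv_index
mode: single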


By my reading of the automation, I can’t see how the quota should be used up before sunrise, since the conditions do not allow the entity to be updated.

The original post shows the bootloop starting when there was usable quota, and well before sunrise.

The protection window is when extra protection from UV is required. The value is true when the index crosses a threshold and then false when the index returns below it. It currently uses real values; the change will use predictions.

The current recommendation is to poll for both the index and this boolean value; however, if you know the index and the threshold, then that entity is pointless.
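
For example, a template binary sensor can reproduce that boolean from the index alone. A sketch, assuming a threshold of 3 (just an example value):

template:
  - binary_sensor:
      - name: UV protection needed
        state: "{{ states('sensor.openuv_current_uv_index') | float(0) >= 3 }}"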

The change in that ticket just removes ~50% of the polling, which my automation was ignoring anyway, and changes the entity from real values to approximate values.

What would be useful is knowing the predicted timestamp for those changes in the boolean value. Once the linked change is merged I will request for the timestamps to be exposed as additional entities, but knowing the actual index will still be the most important bit of information.

I wanted the discussion to stay away from “alternative automations” and instead focus on the fact that HA shouldn’t be bootlooping, especially with such a straightforward automation.

I realize I still need to provide logs in order to progress that side of things.

Still points to the integration, not the automation.

Change the automation’s action from prodding the integration with homeassistant.update_entity to simply posting a persistent notification with notify.persistent_notification. If the automation is truly at fault, it will cause the problem to reappear.
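
For example, something like this as the action (a sketch; the message text is arbitrary):

action:
  - service: notify.persistent_notification
    data:
      message: "Update OpenUV triggered at {{ now() }}"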

If it caused bootloops both when there is quota and when there isn’t, then my manually running update_entity many times within a minute should cause the issue, but it does not.

I can’t send the server into the bootloop right now, but I’ll have a go when I’ve got time.

Thanks to all for the continued input on this.

You should be able to see the behavior in History: when the sensor is updated, versus when the automation triggered, versus the boot loop, alongside the rising and setting times.