0.113: Automations & Scripts, and even more performance!

Check your entity_registry_updated count… This is mine… on 0.114 (dev version) on my production system:

image

1 Like

Tom, you are 5,000 miles from home and you upgraded to a .0 release remotely.
You are either the bravest man I know, have immense faith in this release, or you are plain stupid … I’m still deciding. :thinking:

:rofl:

4 Likes

Maybe he is single. I’m always afraid to upgrade, mostly because I don’t want to reduce my “wife approval factor” :stuck_out_tongue:

2 Likes

My CPU is going to have 0% load if you guys keep this up.

nick convinced me to stop being a scaredy-cat. Plus rolling back from the CLI is easy enough.

I very nearly jumped on the beta release this last round.

Never believe anything @nickrout says, he’s a lawyer.
:rofl:

5 Likes

Did anyone else’s Roomba randomly run last night?

@konnectedvn

Yep, for either parallel or queued mode, if that many “runs” have already been “started” (i.e., that many running in parallel, or that many running & queued up to run), any new invocation will instead cause a warning.
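For reference, a minimal sketch of where those settings live (the automation itself is a made-up placeholder, not anything from this thread):

# With mode: parallel and max: 5, up to five runs may execute at once;
# a sixth trigger while all five are still running only logs a warning.
# The same max option applies to mode: queued, where it counts the
# running instance plus the queued ones.
automation:
  - alias: "Example parallel automation"
    mode: parallel
    max: 5
    trigger:
      - platform: state
        entity_id: binary_sensor.example_motion
        to: 'on'
    action:
      - delay: '00:01:00'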

I have had a request for queued mode to implement a FIFO mechanism: once the maximum number of runs are running or queued and another trigger comes in, cancel the first entry in the queue and add the new one to the end. So, for example, if you had an automation in queued mode with a max of 2, then while the first run is still running, any new trigger would push out whatever is in the queue and replace it. Effectively you get: run the first trigger, then run the last trigger that happened while the first one was still running, with no warnings.

It’s on my list to consider. No promises, though! :wink:

1 Like

No, that’s a bug (issue #38117 – fix submitted: PR #38124) that I’m already working on. I’ll try to get the fix into an upcoming point release.

Basically, the repeat variable isn’t created until execution gets into the sequence part, which is fine for repeat loops that use count or until, but not for while.

As a work around for now you could do:

xiaomi_alarm_seq:  
  sequence:
  - condition: state
    entity_id: input_boolean.activate_alarm
    state: 'on'
  - repeat:
      sequence:
        - service: light.turn_on
          data:
            entity_id: light.gateway_light_04cf8c8f8ee4
            color_name: 'red'
            brightness: 255
        - delay: '00:00:01'
        - service: light.turn_off
          entity_id: light.gateway_light_04cf8c8f8ee4 
        - delay: '00:00:01' 
      until:
        - condition: template
          value_template: >
            {{ is_state('input_boolean.activate_alarm', 'off') or
               repeat.index == 10 }}

EDIT:

Another, even simpler, way to do it is like this:

xiaomi_alarm_seq:  
  sequence:
    repeat:
      count: 10
      sequence:
        - condition: state
          entity_id: input_boolean.activate_alarm
          state: 'on'
        - service: light.turn_on
          data:
            entity_id: light.gateway_light_04cf8c8f8ee4
            color_name: 'red'
            brightness: 255
        - delay: '00:00:01'
        - service: light.turn_off
          entity_id: light.gateway_light_04cf8c8f8ee4 
        - delay: '00:00:01' 
2 Likes

Sorry, that’s not clear.
FIFO to me means (effectively) a restart once x number of instances is reached. I.e., with max 3: run Auto A (1, first instance), run Auto A again (2), run Auto A again (3, reached the set max), run Auto A again (4). This should cancel (1, the oldest), move (2) to (1), move (3) to (2), and then (4) becomes the new (3)?

Same here for my T6 - I can finally reliably initiate temperature changes from HA :+1:
Before, it would only react reliably to mode changes and to maybe 10% of temperature change commands.

But the whole point of queued mode in the first place is to let a run finish once it’s started. So the queue actually comes after the current run, i.e., only pending runs are queued. So you replace the oldest run in the queue, not the oldest of all runs, which is the one actually running.

And, yes, before you say it, max doesn’t directly specify the queue size; the queue size = max - 1. That’s why it’s not called queue_size. :wink:
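To put numbers on that, a small made-up sketch:

# mode: queued with max: 3 means one run executing plus at most two
# runs waiting in the queue (queue size = max - 1 = 2). Today a fourth
# trigger while all three slots are taken just logs a warning; the
# FIFO idea discussed above would instead drop the oldest *queued*
# run, never the one already executing.
automation:
  - alias: "Example queued automation"
    mode: queued
    max: 3
    trigger:
      - platform: state
        entity_id: input_boolean.example_flag
        to: 'on'
    action:
      - delay: '00:05:00'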

1 Like

So if we had max 10, each run takes (say) 20 minutes, and we hit a situation where the automation is triggered every minute, then it works out that we just keep cancelling the second item in line (the oldest queued run)?
But surely, if we have an automation that we think needs to run multiple instances (sequentially, because this is a queue), that’s because we ‘may’ be acting on different information/statuses, and therefore newer == better?

I stopped HA, deleted ‘home_assistant.db2’ and then updated to 0.113 via SSL … with success. Even though I had already read about the missing free space, you pushed me to try it. Thank you.

Depends on the particular situation. If the queue isn’t big enough, then it depends on what matters. If newer is more important than older, then use FIFO, which drops the oldest in favor of the newest. If older is just as important as newer, well, then you’ve misconfigured your system, or you’re trying to do the impossible.

Hey Phil,

Testing the script integration before didn’t throw these warnings, but how can I get rid of this in the logs now that we are in core? It really is spamming a lot:

To be clear, these are automations with simple actions, no scripts called in the action block. Some of these are triggered frequently, but they should still run without warnings. Is there a specific setting I need to set in the automation itself?

Thanks if you can have a moment to help me.

Great release, guys! Love that my CPUs idle a lot more now. :slight_smile:

I only get these mdi warnings, which I have no idea how to fix because they come from HA extensions. I only have them 46 times :slight_smile: These are the messages:

2020-07-23 00:08:44 WARNING (MainThread) [frontend.js.latest.202007160] Icon mdi:visual-studio-code was renamed to mdi:microsoft-visual-studio-code, please change your config, it will be removed in version 0.115.
2020-07-23 00:08:44 WARNING (MainThread) [frontend.js.latest.202007160] Icon mdi:mixcloud was removed from MDI, please replace this icon with an other icon in your config, it will be removed in version 0.115.

EDIT: @frenck I now have over 140 of these messages. Is this a bug? Can someone make it so that each warning is only shown once?

1 Like

See issue.

Friends, I can’t find where to fix the error below. It started after I upgraded to 0.113; what could it be?

Logger: homeassistant
Source: runner.py:101
First occurred: 10:26:51 (5 occurrences)
Last logged: 10:38:22

Error doing job: Unclosed client session

Yes, I did see that, but that would be for automations that are actually running. The ones I posted here aren’t running at the moment they are triggered again. And even if they were, it would be for a reason. I guess, after having tested the scripts extensively, I now need to go through all my automations and add a mode: restart to them https://www.home-assistant.io/docs/automation/#example-setting-automation-mode … missed that, tbh :blush:

sorry to bother.

Yes, they are, otherwise you wouldn’t be seeing those warnings.

Well, that depends on whether restart mode is appropriate, and that is a case-by-case decision. queued or parallel might also be appropriate in some cases. Or maybe single is correct, and you’ll just have to live with the warnings for now. :smile:
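For anyone hitting the same warnings, a minimal sketch of picking an explicit mode per automation (the alias, trigger, and action here are placeholders, not from this thread):

automation:
  - alias: "Example frequently triggered automation"
    # restart: a new trigger aborts the running instance and starts over.
    # queued or parallel (with max) also avoid the warning, at the cost
    # of backlogged or overlapping runs. single (the default) keeps the
    # warning whenever a trigger arrives while a run is in progress.
    mode: restart
    trigger:
      - platform: state
        entity_id: sensor.example_sensor
    action:
      - service: light.turn_on
        entity_id: light.example_light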