2021.4: For our advanced users ❤️

I’ve been using that for automations, but from some quick experiments, it doesn’t seem to work with multiple filters…

test_script:
  description: Test script
  fields:
    field1:
      selector:
        text:
  variables:
    field1_fixed: "{{ field1 | default | lower }}"
    field1: "{{ field1 | default | lower }}"
  mode: parallel
  sequence:
    - service: system_log.write
      data:
        message: "field1: {{ field1 }}, field1_fixed: {{ field1_fixed }}"
        level: error

If the above script is called like this:

service: script.test_script
data:
  field1: Test

The log output will be:

field1: Test, field1_fixed: test

So one can see that it will still use the input field instead of the variable with the same name, while the new variable (the one with a different name) works just fine!

What you are trying to do there is a bit different from the use case I presented. My suggestion substitutes a default value when you do not provide an “optional” field parameter, to suppress the undefined-variable warning.

You are redefining an already defined field value as a variable. This can be done just by using a different variable name, as you have done with field1_fixed. It can also be done by redefining the variable in the sequence block instead of the variables block.

For what it’s worth, I would have expected it to work in the variables block also, but as you discovered it appears it doesn’t. Not sure if it’s a bug or intended behaviour.

Try this example with the variable declared in the variables block, and then again in the sequence block, calling it once with a value defined for field1 and once without.

test_script:
  description: Test script
  fields:
    field1:
      description: ''
      example: ''
  # variables:
  #   field1: "{{ field1|default('DEFAULT')|lower }}"
  mode: parallel
  sequence:
    - variables:
        field1: "{{ field1|default('DEFAULT')|lower }}"
    - service: system_log.write
      data:
        message: "field1: {{ field1 }}"
        level: error
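
For reference, the two calls could look something like this (same service-call syntax as in the earlier post); the second call omits field1, so the default('DEFAULT') value is what should end up in the log:

# With a value for field1:
service: script.test_script
data:
  field1: Test

# Without field1 (the default('DEFAULT') filter should apply):
service: script.test_script
data: {}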

We are in agreement here; I too am not sure whether this is a bug, but since the default filter works, I would expect any other filter to work too… I think I’m going to open a bug report and see what happens!

You can’t overwrite an incoming variable.

I.e. the incoming variable will always take precedence and you should create a new variable.

EDIT: To further clarify, your issue is this:

field1: "{{ field1
   ^            ^
   |--SAME NAME-|

not the use of multiple filters.

So what happens is, you’re getting your original input variable, not your altered variable.

Fair enough, I understand if that is the case; however, I can’t find anything about that in the documentation - and I have tried to find something on it…

I’ve now opened an issue on this matter; if it is not a bug, it will at least be closed with a comment confirming that.

Right, but that’ll be shut down, because your issue is not the filters but the use of the same variable.

Example: you provide the following variable to your script: foo: TEST, and you have foo: "{{ foo | default | lower }}" in your variables section. No matter what, if foo is provided you’ll always get foo: TEST, and lower will never be applied, because you cannot overwrite what’s incoming.
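
For clarity, here is a minimal sketch of that foo example as a script (the names are purely illustrative); per the behaviour described above, calling it with foo: TEST logs TEST, not test:

test_script:
  fields:
    foo:
      description: ''
  variables:
    # Effectively ignored when foo is passed in, because the incoming
    # value takes precedence over a variable of the same name.
    foo: "{{ foo | default | lower }}"
  sequence:
    - service: system_log.write
      data:
        # Logs the original incoming value, e.g. "foo: TEST".
        message: "foo: {{ foo }}"
        level: error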

Still no one else with the excessive CPU usage and high temperature since updating from 2021.4.5 to 2021.4.6?

As of now, and still without errors:

What sensors give you those charts?

Command line, for CPU temperature.

  - platform: command_line
    name: CPU Temperature
    command: "cat /sys/class/thermal/thermal_zone0/temp"
    unit_of_measurement: "°C"
    value_template: "{{ value | multiply(0.001) | round(1) }}"

And System Monitor, for processor use.

  - platform: systemmonitor
    resources:
      - type: disk_use_percent
        arg: /home
      - type: memory_free
      - type: processor_use

Thanks. I learned something new today.

Did you get this resolved? I am getting the exact same error while prepping to go to 2021.5. I have installed Python 3.8. Running Supervised.

Hi. Yes.
See this link; deleting this folder solved it for me.

I’ll be… It was the zigbee2mqtt integration. It was eating the CPU.

Moments after uninstalling it:

It’s something peculiar to your setup, I think.

Add-on CPU usage: 0 %, add-on memory usage: 0.8 %

I see you have a long value for “id”, which I’ve never seen referenced in the docs before. How is this determined? Can I just duplicate the “alias” field (which I already make unique)?

I just went to try the new debug feature and it says “only available with unique ID”…I’m guessing that’s this field? How do I make it so I can use the debugger?

It is a UUID generated by a VSCode plugin.

Yes.
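
If it helps, this is roughly what it looks like in an automations YAML file; the id value and the light entity below are made-up placeholders, and any string that is unique across your automations should work:

automation:
  - id: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"  # hypothetical UUID; any unique string works
    alias: "Example automation with a unique id"
    trigger:
      - platform: state
        entity_id: sun.sun
        to: "below_horizon"
    action:
      - service: light.turn_on
        entity_id: light.living_room  # placeholder entity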


Is it possible to configure the history size for the trace feature? Currently the trace feature is “only” available for the last 5 runs of an automation. Couldn’t find a doc section describing whether it can be configured.

Is this an option to work around this?

Would need it for integrations/platforms like alarm and device_tracker (maybe also uptime, filesize, systemmonitor, command_line, etc.). “id” or “unique_id” unfortunately seems not to be accepted.

You have to wait for an update to the integration that allows unique_ids.

What do you think you need to work around? This is not an error.