How to create a function shared between different automations?

Hi,

I have the following action in several automations:

data_template:
  entity_id: input_number.*room_name*_target_temperature
  value: >-
    {{ a template that calculates the time needed to reach the target temperature at a defined hour, according to different conditions }}
service: input_number.set_value

Most of the template could be a function with input parameters and a return value, so I would like to create it in HA in order to share this part between the different automations (one per room). How can I do it?

You could create a script that takes the input_number as a parameter.

Is a script able to return a value? I don’t want the script to set the input_number value itself, since there are other conditions outside the scope of the script that go into calculating this value. The value returned by the script would only be an intermediate value in the whole calculation.

No, it will not return a value. If you are familiar with Python, I recommend taking a look at AppDaemon; there you can do this “easily” and much more.


Thanks for the answer and the suggestion, but AppDaemon is too much for me just to share a bit of template code.
What about “macros”? Are they able to return a value? I did some research on them, but it is still unclear to me whether they are a solution to my problem and, if so, how to use them.
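
For context, by “macro” I mean something like a Jinja macro, which (as far as I understand) can only be defined and used inside a single template. A rough sketch of what I have in mind, with placeholder names and numbers:

data_template:
  entity_id: input_number.living_room_target_temperature
  value: >-
    {# a Jinja macro, defined and called inside the same template; #}
    {# as far as I know it cannot be shared with other automations #}
    {% macro minutes_to_target(current, target, rate) -%}
      {{ ((target - current) / rate) | round(0) }}
    {%- endmacro %}
    {{ minutes_to_target(18, 21, 0.05) }}
service: input_number.set_value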

You could write a script that “returns” a value. You would pass in the entity_id of an input_number (along with any other parameters it needs), into which the script would write the result. Then you call that script, and when it is done, the “result” will be in the specified input_number.

service: script.my_script
data_template:
  entity_id: input_number.WHATEVER
  params: WHATEVER

my_script:
  sequence:
  - service: input_number.set_value
    data_template:
      entity_id: "{{ entity_id }}"
      value: >
        {{ blah, blah, blah, using params }}

This is probably the best way, except I would not use a script, but an automation, and trigger it by using an event.
The problem with shared scripts like this is that only one instance of the script will run at any time. So if you happen to trigger the script again before the first instance is complete, the second one will not run.
For a very simple script like this, the chance of that happening might be minimal, but from experience I know that you soon start adding complexity to your scripts, they start taking more time to execute, and you end up with collisions. It is also likely to happen if you try to adjust the temperature in multiple rooms at the same time.

So instead, create an automation:

- id: set_room_temp
  trigger: 
    - platform: event
      event_type: set_room_temp
  action:
    - service: input_number.set_value
      data_template:
        entity_id: "input_number.{{ trigger.event.data.room_name }}"
        value: "{{ trigger.event.data.room_temp }}"

Then you trigger it from your other automations with something like:

- id: set_room_temp_living_room
  trigger:
    (...)
  action:
    - event: set_room_temp
      event_data_template:
        room_name: living_room
        room_temp: "20"

And then you can finally create a third automation that triggers when the input_number value changes, and actually adjusts the heater or thermostat.

- id: room_temp_living_room_changed
  trigger:
    - platform: state
      entity_id: input_number.living_room
  action:
    - service: climate.set_temperature
      data_template:
        entity_id: climate.living_room_thermostat
        temperature: "{{ states('input_number.living_room') | float }}"

Now you have an easy way to adjust the set-temp for each room. Whenever you change the input_number - whether from the web UI, from a script or automation, via an API call, or by any other means - the last automation will run and set the wanted temperature.


It is exactly what I wanted!

As a Home Assistant beginner, I was not aware of the way to pass “parameters” from one automation to another (here with event_data_template).

Thanks all for the help.

I was originally going to mention that in my reply but decided not to, for the simple reason that it can’t happen in this scenario. As long as 1) the script doesn’t contain a delay or wait_template (that actually waits), and 2) the overall time to run the script is not more than 10 seconds, the script runs completely “synchronously”, so it can’t be started a second time while it’s already running. Given the use case, I highly doubted either of those requirements would be violated.
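
For illustration (the names here are hypothetical), it’s a script like this, where a delay keeps an invocation alive for minutes, that can be triggered again while the first run is still going:

script:
  ramp_room_temp:
    sequence:
    - service: input_number.set_value
      data_template:
        entity_id: "{{ entity_id }}"
        value: "{{ start_value }}"
    # while this delay is counting down the script is still "running",
    # so a second call would be rejected with "Script already running"
    - delay: "00:05:00"
    - service: input_number.set_value
      data_template:
        entity_id: "{{ entity_id }}"
        value: "{{ end_value }}"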

But, yes, in the general case (where a script might have a delay or wait_template), one needs to be aware that it can’t be run more than once simultaneously. For now…

I’m actually in the process of fixing the bizarre automation behavior (where, if it triggers while it’s still “in” a delay or wait_template from a previous trigger event, the current step is aborted and it continues with the next step of the action sequence), as well as enhancing the script integration to allow and properly implement simultaneous, parallel runs. So, hopefully in the not too distant future… :smiley:

Regarding your suggested implementation, the templates in the first automation don’t seem quite right. I think you’d need to use trigger.event.data.room_name and trigger.event.data.room_temp instead of just room_name and room_temp.

You are most certainly right about the “trigger.event.data…”. Fixed my other post to be more correct.

Simultaneous parallel running of scripts would really be great. Thanks for that!

And I agree that a simple script that just sets the value of an input_number would probably never collide with itself, but I have experienced myself that even pretty trivial scripts can collide with themselves if they are executed from different automations that just happen to run at the same time.
Also, even without any “delay” or “wait”, the scripts might have steps that don’t return immediately, like RPC calls or certain notifications, which greatly increases the risk of collisions.

True. Every service call is “blocking.” However, that doesn’t mean that the task started by the service call completes “during” the service call. So, even though “technically” what I said is still true, in that scripts that meet the criteria I listed (and the services, etc. they call) are all run synchronously, indirectly they can start asynchronous tasks.

So, yeah! :stuck_out_tongue_winking_eye:

So if you happen to trigger the script again before the first instance is complete, the second one will not run.

Will it not run at all, or only once the first instance is complete?

It will not run at all. You will get a “Script already running” error in your logs, and the second instance is never started.
Also, there is a problem if you try to “reload scripts” while a script is running: the running script will not be reloaded, and in some cases it will not complete either, so it will be left hanging and you need to restart HA to get the script back on track.
So any script with a timer or delay should be explicitly stopped before doing a “reload scripts”, or before you try to start a new instance of it.
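
For example (the script name is just a placeholder), a running script can be stopped explicitly with the script.turn_off service:

# stop the running instance first, then reload scripts or start it again
- service: script.turn_off
  entity_id: script.my_long_running_script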

Your solution is nice (and I will adopt it), but I think a “function” (in the classic sense, with a return value) would be a simpler solution in some use cases. I suppose that is possible through AppDaemon, but it is complicated to set up (at least in a pyvenv environment) and it requires knowing Python. I would like a function_template (of course it would need to be multi-threaded to accept simultaneous calls).

Interesting. I never try to use any of the UI-based reload capabilities (except for python_script’s) because they’ve never really worked correctly when using config packages. I gave up on them long ago and always just restart HA whenever I change anything.

Anyway, the code definitely seems like it attempts to stop all the scripts before reloading them, so in theory this shouldn’t happen. But the script code is riddled with bugs, so I’m not completely surprised. There is a reload test, but it doesn’t really test this scenario.

Did you open an issue, or are you aware of any open issues relating to this?

Hopefully the new behavior I’m implementing will be much more robust and won’t have this problem. Thanks for mentioning it. I’ll keep it in mind.

I also miss a proper “function” - and, not least, a better “if … else if … else”.
Conditions will stop the script or automation at the first one that returns false, so there is no easy way to create one automation that does different things based on multiple premises.
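
One partial workaround is to push the branching into a single template, along these lines (the entity names and values are only placeholders):

- service: input_number.set_value
  data_template:
    entity_id: input_number.living_room
    value: >-
      {# a crude "if / else if / else" inside one template #}
      {% if is_state('binary_sensor.living_room_window', 'on') %}
        16
      {% elif is_state('input_boolean.vacation_mode', 'on') %}
        18
      {% else %}
        21
      {% endif %}

But that only helps when every branch ends up calling the same service.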

Regarding the “script reload” issue, there is a rather old bug report here:
https://github.com/home-assistant/home-assistant/issues/25419

Thanks. I’ll give that a read through. I’ll keep this in mind as I work on the automation/scripting integrations.

FYI, I was able to reproduce the problem. See my comment in issue 25419.

I ran into a very similar thing.

I have several automated shades that work with a REST API.
I use cover.template and have template scripts to open/close/stop, where each script takes a parameter for the shade number. But I have found that I need to call the API commands a few times to be 100% reliable, so each script calls rest_command, delays, calls rest_command, delays…
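
Roughly like this (the rest_command name, data, and timings are just illustrative):

shade_open:
  sequence:
  - service: rest_command.shade_command   # hypothetical rest_command
    data_template:
      shade: "{{ shade_number }}"
      command: open
  - delay: "00:00:02"
  # repeat the call because a single request is not 100% reliable
  - service: rest_command.shade_command
    data_template:
      shade: "{{ shade_number }}"
      command: open
  - delay: "00:00:02"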

Then I write an automation to open two shades and I end up with the script called twice, albeit with different arguments. I get the dreaded script already running message and the second and subsequent calls are aborted.

It seems I have to roll each script out into 5 “almost duplicate” scripts, one per shade.
It would be nice if HA “uniquified” a script with the arguments when it called it.

It will. I’ve been putting in a lot of effort to get it that way. So, e.g., you’ll be able to do this:

automation:
- trigger:
    WHATEVER
  action:
    mode: parallel
    sequence:
    - service: BLAH
      data_template:
        SOMETHING: "{{ ... trigger.entity_id ... }}"
    - ...

In this case the action sequence steps will run completely independently for each trigger that fires, with a unique value of the trigger variable for each “run.”

An automation’s action sequence is really just a script. There will be several new “modes” (defaulting to legacy so that the zillions of automations & scripts that already exist won’t break overnight), including parallel, restart, queue, etc. The same thing will be available in scripts. E.g.:

script:
  abc:
    mode: queue
    sequence:
    - ...

This will cause a second invocation to be queued if the first invocation is still running, so that the second run will begin when the first run completes.

I have much of the work done and hope for these new features to make a release in the not too distant future.
