Glad you figured out the solution. Yes, service parameters generally have native python types, not strings. So you should specify integers directly (eg `20`) instead of strings (eg `"20"`). But I’m surprised that `True` didn’t work since it should be the same as `bool(1)`.
oh no… `True` works too now… Don’t know what happened… Tried every “true” with double quotes, single quotes, not quoted,… but apparently not `True`…
I wasn’t sure how to define the “data” part, so that also took me some time…
so, `bool(1)` as well as `True` works…
another question: what about documenting?
for `python_script`, you can create a `services.yaml`.
is there something like that for pyscript too?
I’ve seen, in developer tools → services, that it picks up the `"""test notify hallo."""` as description…
Yes - the doc string (quoted string right after the function definition) is used to document the service. You can put yaml in that description just like you would put in the services.yaml file. See the docs.
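For example, a sketch of such a service (the `@service` decorator is pyscript’s; a trivial stub stands in for it here so the snippet runs outside Home Assistant, and the `message` field is a hypothetical name):

```python
# Stub for pyscript's @service decorator, only so this sketch is
# self-contained outside Home Assistant.
def service(func):
    return func

@service
def test_notify(message=None):
    """yaml
description: test notify hallo.
fields:
  message:
    description: the message to show (hypothetical field)
    example: hello
"""
```

The YAML in the docstring is what Developer Tools → Services would pick up as the service description and field documentation.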
perfect! I didn’t see that…
And it’s working great.
Is my assumption correct that you cannot trigger a pyscript function with a webhook trigger?
If that is the case, where would be the best place to post a feature request?
Regards
Mike
You can if you decorate your function with the `@service` decorator. Then you can call it with a service call, `pyscript.[your_function_name]`, in a webhook automation.
Yes - I had thought of that. But I wanted to be able to handle parameters if possible and I couldn’t see how to do that if I just called the pyscript service. I’ll investigate more!
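For what it’s worth, a `@service` function’s keyword parameters map to the service call’s `data` fields, so a webhook automation can forward its payload as service data. A sketch (decorator stubbed so it runs standalone; the function and field names are made up):

```python
# Stub for pyscript's @service decorator, so the sketch runs outside HA.
def service(func):
    return func

@service
def webhook_handler(message=None, count=1):
    # In Home Assistant this would be invoked as pyscript.webhook_handler
    # with service data like {"message": "hi", "count": 2}
    return f"{message} x{count}"
```

So calling the service with `data: {message: "hi", count: 2}` would reach the function as ordinary keyword arguments.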
Is there a neat way to convert this automation trigger into a pyscript decorator?
```yaml
trigger:
  - platform: numeric_state
    entity_id: sensor.xxxxxxxx
    below: "0"
    for: 00:00:30
```
I can only think of implementing a `task.sleep()` inside a `@state_trigger('sensor.xxxxxxxx')`-triggered function when the `below` condition is met. But maybe there is a cleaner way to implement it using decorators.
The `state_hold` argument to `state_trigger` will handle this.
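As a sketch, the `numeric_state` trigger above might translate to something like this (`state_hold` is in seconds; a stub stands in for pyscript’s real `@state_trigger` so the shape can be shown standalone):

```python
# Stub standing in for pyscript's @state_trigger decorator, so the
# example is self-contained outside Home Assistant.
def state_trigger(expr, state_hold=None):
    def wrap(func):
        func.trigger = (expr, state_hold)  # record what pyscript would register
        return func
    return wrap

# Equivalent of: below: "0", held for 00:00:30
@state_trigger("float(sensor.xxxxxxxx) < 0", state_hold=30)
def below_zero_for_30s():
    pass
```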
Hi again,
I want to create a race condition with a state trigger. I don’t care which function gets there first, but I don’t want all the functions to evaluate at the same time: I want the first function to finish before evaluating the second, which was triggered at the same time.
Maybe a bit of code helps to illustrate the situation:
```python
@state_trigger("counter.1 or counter.2")
def eco_1(trigger_type=None, var_name=None, value=None):
    log.info("Eco 1")
    if int(counter.1) > int(counter.2):
        counter.decrement(entity_id="counter.eco_devices_active")
        log.info("Eco 1 Triggered")

@state_trigger("counter.1 or counter.2")
def eco_2(trigger_type=None, var_name=None, value=None):
    log.info("Eco 2")
    if int(counter.2) > int(counter.1):
        counter.decrement(entity_id="counter.2")
        log.info("Eco 2 Triggered")
```
So, I don’t mind `eco_1()` or `eco_2()` executing first, but I want them to execute one after the other as the behaviour of the one depends on the other.
As you discovered, the two functions will execute at the same time (more accurately, execution of both tasks will be interleaved in some manner). The easiest solution is to combine the functions so you have a single trigger and single function - within that function the execution order is of course guaranteed.
If your design requires two different functions as per your example, then you could use an `asyncio.Lock` to enforce sequential execution. Create a lock outside the functions (so that happens once when the script is loaded):

```python
import asyncio

ecoLock = asyncio.Lock()
```

Then at the start and end of each function do `ecoLock.acquire()` and `ecoLock.release()`. It’s important that you do the release prior to any `return`, or else the other function will never acquire the lock.
Thanks @craigb for the thorough explanation.
The idea was to have an app that is able to manage other apps when those apps are in an `auto` state, so I need multiple triggers (one for each app).
Your solution could work (and is a nice thing to know), but I quickly discovered that I would also need to create a `global_context` and make all the apps share it. That can lead to unwanted behaviour due to variable duplication, so the solution I have opted for is to simply delay the execution of the functions with `state_hold`.
```python
@state_trigger("counter.1 or counter.2", state_hold=1)
def eco_1(trigger_type=None, var_name=None, value=None):
    pass

@state_trigger("counter.1 or counter.2", state_hold=2)
def eco_2(trigger_type=None, var_name=None, value=None):
    pass
```
It’s not great, but it does the job of sequencing the triggers.
Hi again,
I am having trouble here with pyscript and context now.
Basically, I want to create a variable within an app (global to the app) and then use a decorator-triggered function to modify that variable. It looks like the modifications within the functions do not apply to the global variable. Maybe a code example helps to illustrate my question:
```python
myVariable = ''

@state_trigger("sensor.a")
def fun_1(trigger_type=None, var_name=None, value=None, myVariable=myVariable):
    myVariable = 'a'

@state_trigger("sensor.b")
def fun_2(trigger_type=None, var_name=None, value=None, myVariable=myVariable):
    myVariable = 'b'
```
But unfortunately `myVariable` is always an empty string. Is there a way to make it return the variable, or to create the variable global to the app, without having to create a HA variable instance?
Regards
Python doesn’t allow functions to modify global variables unless you explicitly declare them as global. So you should add `global myVariable` inside each function. Without that, the global variable is available if you read it inside the function, but if you assign to it, only a local variable is created.
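Stripped of the pyscript decorators, the fix looks like this:

```python
myVariable = ''

def fun_1():
    global myVariable  # needed to rebind the module-level name
    myVariable = 'a'

def fun_2():
    # Without the global declaration, this creates a new local variable
    # and leaves the module-level myVariable untouched.
    myVariable = 'b'

fun_1()
print(myVariable)  # 'a'
fun_2()
print(myVariable)  # still 'a': fun_2 only set a local
```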
Hi again,
Is there a way to handle exceptions when using decorator-triggered functions, for example when a sensor becomes unavailable? The script keeps running but it just pollutes the log files with unhandled exceptions.
```
2021-09-17 18:27:03 ERROR (MainThread) [custom_components.pyscript.apps.app1.myfunct] Exception in <apps.app1.myfunct @state_trigger()> line 1:
    float(sensor.1) > numer_var
    ^
ValueError: could not convert string to float: 'unknown'
```
Thank you for your help!
There isn’t a way to catch exceptions in decorator expressions. However, an undefined state variable in a decorator expression evaluates to `None` rather than throwing an exception. So you could test whether a state variable is not `None` before trying to use it.
For example:
```python
@state_trigger("float(pyscript.test1) == 1 or float(pyscript.test2) == 2")
def func():
    print("function called")
```
will fail when you first set either variable, since the other is not defined.
Checking for `None` makes it robust to undefined state variables:
```python
@state_trigger("pyscript.test1 is not None and float(pyscript.test1) == 1 or pyscript.test2 is not None and float(pyscript.test2) == 2")
def func():
    print("function called")
```
This assumes that when the sensor isn’t present, the corresponding state variables are deleted. From the error message, it actually seems they exist and evaluate to the string `unknown`. So I’m guessing you could just check for that before the float:

```python
sensor.1 != "unknown" and float(sensor.1) > numer_var
```
This works since python logical operators short-circuit: the 2nd argument of `and` is not evaluated if the first is `False`.
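A quick plain-python illustration of that short-circuit, with the sensor value as a string (the way HA stores state):

```python
numer_var = 1.0

state = "unknown"  # what the sensor reports when unavailable
ok_unavailable = state != "unknown" and float(state) > numer_var
print(ok_unavailable)  # False; float(state) was never evaluated, so no ValueError

state = "2.5"
ok_available = state != "unknown" and float(state) > numer_var
print(ok_available)  # True
```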
Just thinking back about my comments: does the decorator register the triggers, or are the functions at the root level used? I don’t use pyscript, and I just realized this thread was about pyscript and not normal python.
Yes, it differs from regular python in that the trigger decorator runs a new async task that continues independently of the `try/except` wrapper, which completes as soon as the function is defined.
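A plain-asyncio sketch of why a surrounding `try/except` can’t catch the trigger task’s exceptions: the `try` block finishes as soon as the task is created, and the exception only surfaces later, inside the task itself:

```python
import asyncio

caught = []

async def trigger_body():
    raise ValueError("sensor unavailable")  # stands in for a trigger exception

async def main():
    try:
        task = asyncio.create_task(trigger_body())
    except ValueError:
        caught.append("in creator")  # never runs: create_task returns at once
    # The task raises later, on its own; we only see the exception here
    # because we explicitly await the task.
    try:
        await task
    except ValueError as exc:
        caught.append(f"when awaited: {exc}")

asyncio.run(main())
print(caught)
```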
Thanks, I’ll remove my comments