Pyscript - new integration for easy and powerful Python scripting

If you turn on pyscript debug logging you should see messages about which variables are being monitored.
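
One quick way to turn that on (assuming you have the HA logger integration enabled) is the logger.set_level service, which you can also call from pyscript or Jupyter; note that the dotted logger name has to be passed via **kwargs:

logger.set_level(**{"custom_components.pyscript": "debug"})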

What are the sensor names, and are you sure those are entities (state variables) that are being set to "on"? If you are developing using Jupyter, just try interactively setting a sensor variable, or you could do it in the script file too. For example, add this test loop at the end:

for sensor in pyscript.config["apps"]["test"]["sensors"]:
    log.info(f"setting {sensor} to 'on'")
    state.set(sensor, "on")

If you are still having problems I'd recommend stripping the test case down to a single hard-coded sensor trigger. Once that works, then add back more of your code that builds them dynamically.

Looks like pyscript.reload does not work for me, but a restart of HASS does. Do you want to investigate?

I'd like to have a pyscript service that sleeps for a certain period of time. I'm using task.sleep for the sleeping. Besides sleeping, I'd also like to be able to cancel the service that's sleeping.

To achieve this I'm calling task.unique(task_id, kill_me=False) at the start of the method that eventually sleeps. I also have another service, which just calls task.cancel(task_id=task_id), task_id being the same as what is supplied to task.unique.

However, the cancellation doesn't work: I cannot call task.cancel because pyscript doesn't see it. The error I get is:

NameError: name 'task.cancel' is not defined

The same thing applies to most other task methods. Only unique, sleep, executor and wait_until seem to be available. Reading the docs, I had assumed the task handle would contain all sorts of methods for task management (https://hacs-pyscript.readthedocs.io/en/latest/reference.html#task-management).

Is my approach sound, and if it is, any thoughts on why many of the task methods listed in the docs are not available? Do I need to do something to get access to those?

What do you see in the HASS log file? The pyscript.reload service will log a message for each file it reloads. If you turn on debugging you'll see more messages about the steps it's taking. If there are no log messages then for some reason the pyscript.reload service isn't being called. Are you calling it interactively in Jupyter, or from the Developer Tools -> Services tab in the UI?
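
For example, from the Jupyter kernel you can call the service like any other:

pyscript.reload()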

You can also add a log message at the top of your pyscript file to confirm that file is being re-executed on reload, eg:

log.info(f"Reloading {__name__}")

Other than that, we'll need more details of the problem to help further - what exactly doesn't work?

The new task functions are not yet released - they are available if you use the Github master version and will be in the next release. The "stable" documentation for the current release 1.1.0 is here, while the "latest" documentation for the (unreleased) master version is here, which is what you were looking at. A summary of (most of) the changes in master (that will be in the next release) is here.

On your specific question, you can solve your issue just using task.unique(), and this will work in 1.1.0. It takes a string as an argument, not a task_id. The string is any name you choose to identify that task. So in your service call you would do task.unique("my_running_service"). To cancel that service call you would simply make the same call - task.unique("my_running_service") - from another service or task.
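
A minimal sketch of that pattern (the service and task names are just placeholders):

@service
def long_running():
    # cancels any earlier task that registered this same name
    task.unique("my_running_service")
    task.sleep(300)
    log.info("finished sleeping")


@service
def cancel_long_running():
    # registering the same name from another task cancels the one above
    task.unique("my_running_service")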

Hi guys. I'm looking for a way of informing me about updates. This looks crucial especially now, when nabu.casa disables remote UI when your instance is outdated.
I've found that there is a binary_sensor.updater entity.

Should this work the same as the bundle from this thread?
Found it here:
Send notification if new Home Assistant release - Home Assistant (home-assistant.io)

Hey everyone, I'm sure this is probably just a dumb mistake on my part, but I'm having trouble with task.executor(). It seems like it isn't running the function passed to it. I made a simple script just to test it after struggling with my actual project:

@service
def tasktest():
    log.error("BEFORE")
    task.executor(my_func)
    log.error("AFTER")


def my_func():
    log.error("FUNCTION")

and this is the output:

2021-01-23 20:38:49 ERROR (MainThread) [custom_components.pyscript.file.test.tasktest] BEFORE
2021-01-23 20:38:49 ERROR (MainThread) [custom_components.pyscript.file.test.tasktest] AFTER 

Any thoughts?

The issue is that task.executor can only call a regular python function (which therefore can't include any pyscript-specific features). All functions in pyscript are, by default, async (and not thread safe), so they can't be used in places where python expects a regular function.

You should create the function you will call with task.executor in a module or package in a directory other than config/pyscript, so it is treated as regular python. See an example of how to do this in the docs.

In the master version in github there is a new function decorator @pyscript_compile which causes the function to be compiled as a regular python function, so that allows you to create native python functions selectively inside a pyscript file. So that's an alternative way to create a regular python function callable via task.executor. In either case, however, the native python functions cannot include any pyscript code.
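
As a minimal sketch of that alternative (the URL, function names and the requests call are just placeholders):

import requests

@pyscript_compile
def fetch_status(url):
    # compiled as native python, so blocking I/O is fine here
    return requests.get(url).status_code

@service
def check_site():
    status = task.executor(fetch_status, "https://example.com")
    log.info(f"status = {status}")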

The master code also has new functions for creating and managing tasks in pyscript, which do work with pyscript functions. So if you were using task.executor merely to create another asynchronous task (and not to move blocking operations like I/O out of the main event loop) then you could use one of these new functions, eg:

@service
def tasktest():
    log.error("BEFORE")
    task.create(my_func)
    log.error("AFTER")

def my_func():
    log.error("FUNCTION")

But these tasks still run in the main event loop, so it doesn't solve the problem of avoiding blocking operations, which is what task.executor is designed to do.

I think I know what I was doing wrong. I was hitting reload on the integration instead of calling the service. I would think that restarting the whole integration should also work, but maybe I'm hitting an edge case?

Anyway, reload from the service works as expected.

Thanks for the response! I should've read the docs better to avoid creating a faulty test, but I'm still running into the issue when running my actual function (which is regular python). Essentially I'm trying to use pyscript to make API calls to adjust our sleep number bed. I created a python package in <config>/pyscript/modules called sleep_number containing the function set_numbers. When I import and run the function set_numbers using task.executor, it seems that the function doesn't run at all. When I instead run the function normally, I get the expected IO loop warnings, but the function runs as expected. This makes me feel somewhat confident that I have all of my imports set up correctly at least. I'd be interested in trying @pyscript_compile, but after a long day my brain is struggling to implement it properly. I'm calling the service from /config/pyscript/sleep.py.

#<config>/pyscript/sleep.py
from sleep_number import set_numbers

@service
def set_sn(left=None, right=None):
    kw = {'left': left, 'right': right}
    
    task.executor(set_numbers, **kw) #set_numbers doesn't run

    set_numbers(**kw) #set_numbers runs, but obviously with HASS IO Loop Warning

I also noticed I'm getting a RuntimeWarning when calling pyscript.set_sn, but only on the initial service call after restarting the HA server. It doesn't happen on subsequent service calls.

/usr/local/lib/python3.8/asyncio/base_events.py:1860: RuntimeWarning: coroutine 'EvalFuncVar.__call__' was never awaited
  handle = None  # Needed to break cycles when an exception occurs.
RuntimeWarning: Enable tracemalloc to get the object allocation traceback

I haven't had time to look into that yet, but I'll try that after posting this. Sorry for the long response.

from queue import Queue

queue = Queue(maxsize=10)


@service
def talk(message=None, volume=0.6, player=None):
    queue.put({"message": message, "volume": volume, "player": player})


def worker():
    active = True
    while active:
        try:
            # Get data from queue
            data = queue.get()
            log.info(f"Task #{queue.unfinished_tasks-1}")

            if data["player"] == "terminate":
                active = False
                log.info("Terminating tts queue worker thread")
                return

            # Call TTS service
            tts.google_cloud_say(entity_id=data['player'],
                                 message=data['message'])

            task.sleep(15)
            log.info(f"Said {data['message']} on {data['player']}")

        except (TypeError, ValueError, TimeoutError):
            log.info("Warning! Error in worker")
        else:
            queue.task_done()

    log.info("Worker thread exiting")


def terminate():
    queue.put({"player": "terminate"})


task.executor(worker)

I know I'm trying to over-engineer this, but I had it working in appdaemon and now I would like to port it to pyscript, though I'm a little unsteady on how… Not sure if this is the right way. What I want to do is put stuff in a queue and then have a thread where a worker picks stuff from the queue.

This does not seem to work in the above code - maybe there is a better way? Or should I just not go for this mechanism? (Background: I hate getting overlapping TTS msgs on my google devices.)

The sleep_number module in pyscript/modules is pyscript code, so once again all the functions are async, and can't be called by task.executor. That is also likely the source of the RuntimeWarning you saw: the executor calls the async function and gets back a coroutine that is never awaited. I should add a check in task.executor to produce a more helpful error message.

Instead, you should move the native-python module sleep_number to a module directory outside of config/pyscript, eg config/pyscript_modules as explained in the docs.
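
A minimal sketch of that layout (the directory name and the sys.path tweak are illustrative - check the docs for the recommended setup):

# <config>/pyscript_modules/sleep_number.py - native python, so blocking I/O is fine
def set_numbers(left=None, right=None):
    ...  # make the API calls to the bed here

# <config>/pyscript/sleep.py - pyscript code
import sys
if "/config/pyscript_modules" not in sys.path:
    sys.path.append("/config/pyscript_modules")
from sleep_number import set_numbers

@service
def set_sn(left=None, right=None):
    task.executor(set_numbers, left=left, right=right)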

It would be better to use async queues, and then there's no need to use task.executor.

Here's skeleton code that sends messages to an async queue, and a task runs at startup to process the messages with a 10 second delay:

import asyncio

mesg_q = asyncio.Queue(0)

def send_message(text):
    mesg_q.put(text)
    
@time_trigger("startup")
def process_messages():
    task.unique("process_messages")
    while True:
        mesg = mesg_q.get()
        log.info(f"received mesg {mesg}")
        task.sleep(10)

# test code
send_message("message #1")
send_message("message #2")
send_message("message #3")

If you run this example, the received mesg ... log messages will be 10 seconds apart.

Ah, I clearly missed where the docs said pyscript. Thanks for clearing that up!

This is so awesome! I get to create much smaller and readable programs with pyscript :heart_eyes:

@time_trigger only takes strings. So you should format the string (if needed) into a valid form, ie a string of the form "once(...)", "period(...)" or "cron(...)". The easiest case is if you want a daily trigger at the requested "hh:mm:ss", eg:

save_daily_actions_function = None

@state_trigger("input_datetime.my_daily_time")
def got_new_input_time(value=None):
    # dynamically create a trigger at the user's time
    @time_trigger(f"once({value})")
    def daily_actions_at_input_time():
        # do what is needed at the daily time
        ...

    # save the function so that it continues to be an active trigger
    global save_daily_actions_function 
    save_daily_actions_function = daily_actions_at_input_time

If you want to manipulate the date you'll need to convert back to a string before passing to @time_trigger.
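
For example, a minimal sketch that shifts the user's time by 30 minutes (the offset and format string are just for illustration; this would go inside got_new_input_time above, where value holds the "hh:mm:ss" string):

import datetime

base = datetime.datetime.strptime(value, "%H:%M:%S")
shifted = (base + datetime.timedelta(minutes=30)).strftime("%H:%M:%S")

@time_trigger(f"once({shifted})")
def daily_actions_at_shifted_time():
    ...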

number_list = range(-5, 5)
less_than_zero = list(filter(lambda x: x < 0, number_list))

On Python 3.8.2 this gives:

[-5, -4, -3, -2, -1]

Does anybody know why running filter in the pyscript kernel gives a strange result instead? Is the function filter hijacked?

In pyscript all functions (including lambda) are async. Functions like filter and map, and python packages that take a function argument as a callback, expect a regular python function, so they won't work with an async pyscript function.

Specifically, in your example, filter calls the (async) lambda function, and it returns a coroutine which is not awaited (ie, not called). Instead it is considered True (since it's not False, 0 or None), so the filter returns every element.
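
You can reproduce the same effect in plain python with an async function (a hypothetical illustration, not pyscript code):

async def is_negative(x):
    return x < 0

# filter receives coroutine objects, which are truthy, so every
# element is kept (python also warns the coroutines were never awaited)
print(list(filter(is_negative, range(-5, 5))))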

In 1.2.0 that I just released, there is a new decorator @pyscript_compile that makes the function a regular, compiled, python function that can be used wherever a regular python function is needed. However, the lambda syntax doesn't support decorators. The workaround in 1.2.0 is to turn your lambda function into a normal function with a @pyscript_compile decorator.
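
A minimal sketch of that workaround:

@pyscript_compile
def less_than_zero_fn(x):
    # compiled as a regular python function, so filter can call it
    return x < 0

number_list = range(-5, 5)
less_than_zero = list(filter(less_than_zero_fn, number_list))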

The real fix is to make lambda functions always compiled, since python doesn't support async lambda functions. Then your example will work correctly. I'll add that to the todo list.

I committed a change so that lambda functions are compiled.

That is so awesome! And I actually understood what you explained :heart_eyes:
I dabble with code, would not call myself a developer, but learning more every day…