My bad, Nick - any thing not out of the standard install was perhaps what I was referring to. Thanks for pointing out - terminology is important.
Glad you got it all going
Hello @pnbruckner and everyone,
is it just me, or do the new automation & script running modes not work at all?
I’m using the following setup to test them:
# Example configuration.yaml entry
input_boolean:
  lightbulb:
    name: dummy light bulb
    icon: mdi:lightbulb

# scripts.yaml entry
turn_the_light_for_5_sec:
  alias: Turn the light for 5 sec
  icon: mdi:toggle-switch
  mode: restart
  sequence:
    - service: input_boolean.turn_on
      entity_id: input_boolean.lightbulb
      data: {}
    - delay: "00:00:05"
    - service: input_boolean.turn_off
      entity_id: input_boolean.lightbulb
      data: {}
So when I execute the script once, it works as expected, but when I execute it twice within the 5-second delay, the dummy light bulb stays on and never turns off.
If I execute the script a third time, the dummy bulb turns off after 5 more seconds.
It doesn’t matter which running mode is chosen (single, queued, parallel); the results are the same.
Tested on 0.113.1 running in Docker.
P.S. I also tested this against a real switch over MQTT, and the results were the same.
Was it just me, or did this update break the can_cancel attribute that you could use to customize scripts so they appear switch-like in the UI, rather than showing “EXECUTE”/“CANCEL”?
Can anyone confirm?
Thanks in advance.
Correct, it no longer shows as a switch on the frontend, but the wording for an active script changes from “EXECUTE” to “CANCEL” and it still works the same way.
@pnbruckner Is this an expected change (not documented, as far as I can tell), or a regression from the recent script additions?
Same here, everything seems to start fine with 0.113.1, but then there’s an automatic rollback after 10 minutes; no clue why.
The can_cancel attribute was only present in scripts that contained delays or waits, because those were the only ones that could “suspend.” This is no longer the case: all scripts can now be canceled, so there was no longer a need for the attribute. I believe there are other, more meaningful attributes now, but I didn’t do that part. The can_cancel attribute, and now the new ones, have more to do with how the frontend works, and I didn’t implement that part, so I’m not as familiar with it.
The script works, and calling the script a second time does indeed cause it to restart. The script is working as it should.
How are you causing the script to execute? If you do it via the frontend, you may not have noticed that after you click the EXECUTE button it changes to CANCEL. So when you click again you’re not starting the script again, you’re stopping it. (See @pplucky’s post just after yours.)
This is a frontend thing. If you think it should work differently, then you should create a feature request topic. I don’t work on the frontend so there’s nothing I can do about it.
And to convince yourself that the script is actually doing what it should: first make sure the input_boolean is off, then go to the SERVICES page, select the script from the Service dropdown, and click CALL SERVICE. Wait a couple of seconds, then click CALL SERVICE again. Now look at the history of the input_boolean. It should have been on for about 7 or 8 seconds, depending on exactly when you clicked CALL SERVICE the second time.
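To double-check that arithmetic, here is a tiny Python sketch (my own simulation, not Home Assistant code, and the function name is made up): in restart mode, each new call cancels the running sequence and starts over, so with a 5-second delay and a second call 2 seconds after the first, the boolean stays on for 7 seconds.

```python
def time_light_turns_off(call_times, delay=5.0):
    """Simulate a script in 'restart' mode: every new call cancels the
    running sequence and starts over, so turn_off fires `delay` seconds
    after the *last* call."""
    return max(call_times) + delay

# First call at t=0, second call at t=2 (within the 5 s delay):
print(time_light_turns_off([0.0, 2.0]))  # prints 7.0
```

In single mode the second call would instead be ignored, and in queued mode it would run as a second full sequence after the first finishes.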
Thank you for your clarification. You are completely right: I was executing the script from a button card added in the frontend, and it was canceling the script exactly as you said.
I’m going to release a video tutorial about automations & scripts this Wednesday; may I give you credit in the video?
Thank you one more time; you guys are doing a great job with Home Assistant.
I think the can_cancel attribute was valid for any script and was just a means of allowing cancellation while it was running.
You’re right that it no longer seems needed, but I was just used to it.
I will try to find out what those new attributes are myself; thanks for the clarification.
I took a quick look at that part of the code. The can_cancel attribute used to be present only if it was true, which was only the case for scripts that contained a delay or wait.
The new attributes are: mode, max & current. The last two are only present for parallel & queued modes. The same goes for automations. (current indicates the total number of currently running or queued-up “runs.”)
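A minimal sketch of where those come from (the script name here is made up, and the max value is just an example): max is set in YAML for queued or parallel mode, and mode, current & max then show up as state attributes on the script entity.

```yaml
# scripts.yaml — a queued script; 'max' caps how many runs can queue up.
my_queued_script:
  mode: queued
  max: 5          # if omitted, a default cap applies
  sequence:
    - delay: "00:00:10"
# While running, script.my_queued_script exposes attributes such as
# mode, current (running/queued runs) and max.
```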
I have the exact same problem: if I migrate to the new configuration, all rfxtrx devices end up as switches and sensors. I also have configuration for sensors and lights, for instance, using includes.
sensor:
  - platform: rfxtrx
    devices:
      0a52029d5d0100e25a0359:
        name: oregon_thgr810_1
        data_type:
          - Temperature
          - Humidity

light:
  - platform: rfxtrx
    signal_repetitions: 2
    devices:
      0b11000210bc0cfe02010f70:
        name: nexa_eycr_2300_1
Is it possible to configure devices as lights or sensors with the new RFXTRX configuration, or do I have to set up lights from switches and create new template sensors?
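As a hedged sketch only (I haven’t verified the exact keys against the 0.113 rfxtrx docs, so treat them as assumptions): in the new format, devices move under the top-level rfxtrx: key and the integration determines the entity type from the packet itself, something like:

```yaml
rfxtrx:
  device: /dev/ttyUSB0          # adjust to your serial device
  devices:
    0a52029d5d0100e25a0359: {}  # recognized as temperature/humidity sensors
    0b11000210bc0cfe02010f70: {}  # recognized as a switch
```

If a device comes up as a switch but you want a light, a template light wrapping the switch is one workaround.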
Node-RED is a nice way to set up automations visually. The fact that you can watch it work is nice, wiring up debug nodes to check outputs. I’ve used both Node-RED and HA automations. For folks newer to HA, or those trying to build a complex automation, Node-RED is a great tool for following what’s going on with the automation.
I’m not advocating use of one over the other. Just trying to give some context as to why some may prefer Node-RED. For those that live in YAML and love diving into logs then the HA automation works just fine.
I personally find myself liking Node-RED at the moment due to the ease of debugging automation. Just my take. That and 2 cents will get you… um, pretty much nothing anymore.
I have the same issue since that update, and I don’t understand the answers.
None of my sensors and switches work, and rfxtrx is used by 90% of my automations, so it’s critical!
In configuration.yaml:
rfxtrx:
  # device: /dev/ttyUSB1  # neither of the 2 device modes works
  device: /dev/serial/by-id/usb-RFXCOM_RFXtrx433XL_DO3382P0-if00-port0
  debug: false
In sensor.yaml, for example:
- platform: rfxtrx
  automatic_add: true
  devices:
    0a520d193a0100c9350169:
      name: "Garage"
      data_type:
        - Temperature
        - Humidity
    0a520da3740300cb330169:
      name: "SdB"
      data_type:
        - Temperature
        - Humidity
Can someone tell me what is “wrong”?
I’ve run the config check to verify 0.113.1, and validation failed without verifying the config at all.
The error log shows the config check was having problems:
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing wheel metadata: started
Preparing wheel metadata: finished with status 'done'
Collecting chardet<4.0,>=2.0
  Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB)
Requirement already satisfied: cffi!=1.11.3,>=1.8 in /usr/local/lib/python3.7/site-packages (from cryptography==2.9.2->homeassistant) (1.14.0)
Any ideas??
Having same issue - no error in logs (even with debug), but HA never finishes loading. I am on Supervised…
I am on Core, but I’m not sure that matters. I went into configuration.yaml and commented out all integrations, then went to ‘.storage/core.config_entries’ and disabled nearly all integrations there as well. Restarted and confirmed success. Then one by one (or sometimes two at a time) I re-enabled integrations until I found the one that hung HA.
Decent thread here, if you can follow it: https://discord.com/channels/330944238910963714/551864459891703809/736606977525940254
For me, the issue was the onvif integration; there is already a ticket.
Once I removed the onvif integration, HA restarted just fine…
This made me laugh a little bit xD.