Any ideas on how to automate a binary sensor state change via an automation?

The setup:
Motion sensors that have motion + occupancy
Binary Sensor Groups (made in Helpers) so it's easier to manage/add/remove multiple sensors within a single area, with automations built around the group states as a result.
If I have 4 sensors in a room, any one of them can switch the group state to: ON

What I’m trying to do:
On start-up, have HA reset the helper group states back to: OFF

Sometimes my HA is offline for a bit, and by the time it starts up the sensor state is stale, or there's a delay in a motion sensor re-joining the mesh, which can mess up a group state.

Being able to reset all my group states back to: OFF would save me walking around the house after a restart just to make sure every motion sensor goes through its full motion/occupancy on/off cycle, so the helper groups (as well as some security notifications I have in place) don't get messed up.

I could reset each group state manually in Developer Tools | States, but I’d rather automate the action.

CW

You can use a curl command to change the state of an entity… for example, changing the state of "sensor.status_alarm_montreal" to "disarmed":

curl -d '{"state": "disarmed"}' http://<local HA IP address>:8123/api/states/sensor.status_alarm_montreal -H 'Authorization: Bearer <HA long lived token>'

In the automation, you detect the start of HA with the trigger:

trigger:
    - platform: homeassistant
      event: start
action:
    - service: shell_command.reset1

and then in the action you reset your binary sensors using curl commands (via shell commands defined in configuration.yaml):

shell_command:
  reset1: curl -d '{"state": "disarmed"}' http://<local HA IP address>:8123/api/states/sensor.status_alarm_montreal -H 'Authorization: Bearer <HA long lived token>'
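Putting the pieces together for the original question, the complete automation in automations.yaml might look something like this (a sketch only; the alias and the extra reset2/reset3 commands are hypothetical, one shell command per helper group, each defined like reset1 above):

# automations.yaml - reset every motion group shortly after Home Assistant starts
- alias: Reset motion group states on startup
  trigger:
    - platform: homeassistant
      event: start
  action:
    - service: shell_command.reset1
    - service: shell_command.reset2
    - service: shell_command.reset3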

==================================================================

Or use this python script, as explained in this (pretty long) post:

Thanks
It’s a shame this isn’t part of HA core.

I understand that sensors shouldn't normally be changed manually, but it would be useful to be able to change them (or at least the ones created in Helpers) in case the source sensor state is stale/wrong.

Last time it happened my HA was power cycled just before I went out (I was late for work).
My HA occupancy tracking knew no one was home, but my stale sensors said there was occupancy, so I was getting all kinds of messages from HA. At least I know the security works :slight_smile:
In the end I had to trigger my RoboVacuum to do a cleaning cycle just to reset the motion sensors.

Hey, I’m definitely new to this so sorry for the extra question. Experiencing some frustration as none of the docs seem to actually go step by step, and I’m spelunking to try and understand the full pipeline of running a script!

I added the shell_command: definition you gave to configuration.yaml. I filled in my HA address and my auth token as generated in the settings. I changed the sensor entity id to match the entity id of the sensor I'm trying to set (and the state value to 'off', as that matches my sensor's state).

I restarted my container to pull in the new configuration.

But when I try to invoke it in the Services tab, it says it cannot find shell_command.reset1.

My configuration.yaml:

# Loads default set of integrations. Do not remove.
default_config:

python_script:

shell_command:
  reset1: curl -d '{"state": "off"}' http://HAHOST:8123/api/states/binary_sensor.main_lock_access_control_lock_jammed -H 'Authorization: Bearer TOKENHERE'

automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml

Am I missing a step…? Or totally misunderstanding how to invoke it? This is a Dockerized HA running on an Asustor NAS.

Again, apologies. I swear I've been staring at the docs; they just keep saying "add this to config.yaml and then run it!" and it feels like they're leaving out details. :grin:

FWIW, that python_script: bit is there because I was trying to run the python script version you linked as well, and I got decently far. Far enough to know the configuration is taking effect - just ran into similar "how do the Lego pieces fit together?" issues.

@kotov_syndrome What I would recommend is that you save the curl command in a file in /config, for example with the extension .sh… You have to change the permissions of that file (via a terminal) to 700 so it is executable (see the chmod example after the file contents below).

So the file will be:

name: reset1.sh
content of the file:

#!/bin/bash 
#
#

curl -d '{"state": "off"}' http://HAHOST:8123/api/states/binary_sensor.main_lock_access_control_lock_jammed -H 'Authorization: Bearer TOKENHERE'

Then from a terminal you can test the execution of the file by typing:

/config/reset1.sh

When you are satisfied with this shell script, you can create your command in the configuration.yaml file (note that you get the results of the execution in a log file…):

shell_command:
  reset1: /bin/bash /config/reset1.sh >&/config/reset.log 2>&1

Just a reminder that your shell script should not exceed the 60-second limit (this one surely won't); those 60 seconds are a constraint within Home Assistant… If you want to work around that, you have to create an "ini" shell script which just spawns the script that exceeds the 60 seconds… So in your case (you name it the way you want… "ini" is just a convention I use):

You create an ini shell script called, for instance, reset1_ini.sh (don't forget to set the file permissions to 700 so it is executable):

#!/bin/bash
#
#

bash /config/reset1.sh &>> /config/reset1.log &

and the shell command becomes (no need for a logfile in this case, as the ini shell script just spawns the script you want). One additional remark: in this case you can pass arguments to the shell script from Home Assistant (that is not the case when you redirect the output to a logfile! It seems arguments and the ">" redirection are not compatible):

shell_command:
  reset1: /bin/bash /config/reset1_ini.sh

If you want to pass arguments to the shell script, the shell command becomes:

shell_command:
  reset1: /bin/bash /config/reset1_ini.sh  "{{ arguments }}"

When you call the shell_command, it becomes:

    - service: shell_command.reset1
      data_template:
        arguments: your_arguments
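For completeness, a sketch of how the ini script could pick that argument up and hand it on to the real script (the "$1" handling here is an assumption; adapt it to whatever reset1.sh actually expects):

#!/bin/bash
# reset1_ini.sh - spawn the long-running script in the background,
# forwarding the first argument received from the shell_command
bash /config/reset1.sh "$1" &>> /config/reset1.log &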

That’s it… Feel free to ask questions…


I could kiss you!!! You helped me make so much progress, thank you so much for taking the time to give some “baby steps.”

I used the shell command template to make a GET to api/states, which gave me a ton of information on the entity that helped improve the payload. With the base call and just a state, the friendly name for the binary sensor was being overridden and it was causing some funky stuff, but I updated it to pass along more info and it’s now working great from the terminal.
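For anyone following along, the GET that dumps an entity's current state and attributes looks something like this (same placeholder host and token as earlier; adjust the port to whatever your instance uses):

curl -H 'Authorization: Bearer TOKENHERE' http://HAHOST:8123/api/states/binary_sensor.main_lock_access_control_lock_jammed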

This is my sh script now, called ‘jamres’:

curl -d '{"state":"off", "attributes":{"device_class":"problem", "icon":"mdi:check-bold", "friendly_name":"Jammed"}}' http://HAHOST/api/states/binary_sensor.main_lock_access_control_lock_jammed

I made a jamcause.sh script that sends the same thing but with “on”, so I can test swapping between states. This works great!!! No need for the ini trick yet. And no arguments yet, although I want to dig into that later.

Now the next problem… calling this in HA as part of an automation/trigger. :frowning:

I put the following in my configuration.yaml:

shell_command:
  jamres: /bin/bash /config/jamres.sh >&/config/jamres.log 2>&1

but I'm lost on the next step. If I try to access shell_command.jamres via the Services tool in the HA UI, it repeats that "cannot find" error like it doesn't know what it is. I restarted HA to pull in the new configuration/etc.

Think I’m missing something still?

If I understand correctly, you are using Home Assistant with Docker, right?

So it seems that your file is not at /config but there are some directories between the root directory "/" and the "config" directory… Where did you install HA? What installation procedure did you use?

Maybe, instead of "/config/jamres.sh", try "~/config/jamres.sh"… and do the same for the logfile: put a "~" in front of its location…

The examples I gave were for HA on Hass.OS…

You were exactly right about the file system mismatch, which led me to a much more important realization… the dang Docker container was mounted on the wrong volume on my NAS, meaning it wasn't reading the configuration. That's why the service wasn't showing up.

It is now!!!

I’ve got some reconfiguration to do now that the Docker container is mounted right, but the pipes are all connected now. I can figure the rest out through trial & error now that you got me this far.

Truly, thank you.

EDIT:

Small detail for other Docker Container HA users - I had to change my configuration.yaml to instead read this way in order to run the .sh script. Might just be a quirk of my system, but wanted to report back.

shell_command:
  jamres: sh ./jamres.sh