Running commands on the host without SSH

A question that comes up on these forums is how to schedule or trigger a command on the host system from a container-based HA install. I had this question myself, and the usual answer seems to be to give SSH access to the host from the container and use ssh to execute the command.

This opens up a potential attack vector to the host were someone to gain access to your HA instance. While I'm not particularly worried about this, I decided to see what would be involved in setting this up without SSH. It turns out to be quite doable, and seems more secure, using a named pipe.

The following assumes you want to run a script on the host called some_script.sh. In my case this is located in $HOME/bin.
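
For illustration, some_script.sh can be anything you want to run on the host; a trivial stand-in (purely a hypothetical example) might be:

#!/bin/bash

# hypothetical placeholder: just record that the script was triggered
date >> /tmp/some_script_ran.log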

Note: My host OS is Linux (Raspbian v11). I’ll be interested to hear if there are any wrinkles with other host OS setups (Mac, Windows).

Step 1: Create a named pipe

On the host:

cd /usr/share/hassio/homeassistant
sudo mkdir pipes
cd pipes
sudo mkfifo host_executor_queue

I’ve chosen to name the pipe host_executor_queue but you can name yours whatever you like.
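
To confirm the pipe was created correctly, ls -l should show a file type of p (a FIFO) in the first column; you can also test it directly:

ls -l host_executor_queue
test -p host_executor_queue && echo "host_executor_queue is a named pipe"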

Step 2: Create a script for monitoring the pipe

Again on the host, put this script somewhere and make it executable. Mine is $HOME/bin/monitor_ha_queue.sh:

#!/bin/bash
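# Watch the named pipe and run the matching host-side command for each line received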

pipe=/usr/share/hassio/homeassistant/pipes/host_executor_queue

while true; do
  if read line < $pipe; then
    case $line in
      some_script)
        $HOME/bin/some_script.sh
        ;;
    esac
  fi
done

You can add a cron job to start this on boot (you may have to set $HOME in your cron file):

@reboot $HOME/bin/monitor_ha_queue.sh

and start it right away via

nohup $HOME/bin/monitor_ha_queue.sh >&/dev/null &
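
To confirm the monitor is running, pgrep will show it (using the script name from above):

pgrep -af monitor_ha_queue.sh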

Step 3: Add a shell command to your HA configuration

Add the following to your configuration.yaml:

shell_command:
  some_command: echo some_script > /config/pipes/host_executor_queue

and restart HA. (Inside the container, /config corresponds to /usr/share/hassio/homeassistant on the host, which is why the pipe created in Step 1 shows up at /config/pipes/host_executor_queue.)

Step 4: Set up the automation in HA

At this point the shell command is available in HA, so, for example, you can add a script via the UI with the action “Call Service > Shell Command: some_command” and trigger it however you like.
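
If the automation does not seem to do anything, you can sanity-check the plumbing by writing to the pipe from inside the container yourself (the container name homeassistant is the usual default for this kind of install, but yours may differ):

docker exec -it homeassistant /bin/bash
echo some_script > /config/pipes/host_executor_queue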

Adding more commands

Going forward, adding additional commands is easy:

  1. Add shell command to configuration.yaml.
  2. Add another case to the switch statement in monitor_ha_queue.sh and restart the monitor.
  3. Restart HA.

Is there a way to troubleshoot or check logs to see where it goes wrong if it does not work?

I created these scripts, but nothing happens if I call the shell script.
Testing it from the command line, echo to the named pipe gives an access denied message :upside_down_face:

Are you testing from the host? You’ll need to test from the container, or run as root on the host (sudo -s opens a root shell). You can also add logging to the monitor_ha_queue.sh script and run it in the foreground.
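
For example, a variant of the script with a log line added (the log path here is just an example) makes it easy to see what, if anything, is arriving on the pipe:

#!/bin/bash

pipe=/usr/share/hassio/homeassistant/pipes/host_executor_queue
log=/tmp/monitor_ha_queue.log   # example log location

while true; do
  if read line < $pipe; then
    # record everything that arrives on the pipe
    echo "$(date) received: $line" >> $log
    case $line in
      some_script)
        $HOME/bin/some_script.sh
        ;;
    esac
  fi
done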

Could you give an example of modifying the “monitor_ha_queue.sh” script if I need to add more commands? And one more question, am I correct that I should have a separate “some_scriptXX” file for each command I want to add?

This is a case/switch statement:

    case $line in
      some_script)
        $HOME/bin/some_script.sh
        ;;
    esac

When the scheduled task in HA runs this shell command:

echo some_script > /config/pipes/host_executor_queue

that causes the pipe to receive the string “some_script” and that case statement sees it and runs the some_script.sh script.

So if you add a new command from the HA side:

echo do_that_thing > /config/pipes/host_executor_queue

then you can catch this in the case statement like this:

    case $line in
      some_script)
        $HOME/bin/some_script.sh
        ;;
      do_that_thing)
        ...here goes the code you want to run...
        ;;
    esac

As to your second question: that’s just how I keep my tasks organized; yes, I have a script for each command, and it can be called whatever you want:

    case $line in
      some_script)
        $HOME/bin/some_script.sh
        ;;
      do_that_thing)
        /some/path/to/my_special_script.sh
        ;;
    esac

but you don’t have to use a script; you can put whatever shell command you want there.
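
For instance, a case branch can hold an inline command directly (a made-up example):

    case $line in
      log_timestamp)
        # no separate script needed; any inline command works here
        date >> /tmp/ha_trigger.log
        ;;
    esac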


Thank you for such a detailed explanation!


Thank you for this, I was trying to find a way to spin down my host-attached USB HDD with hdparm from HA. I have two issues with the above method though:

  1. The command repeats and never gets cleared from the queue file; shouldn’t the monitoring script clear it?
  2. After I manually cleared the queue file, the host CPU usage went crazy; I think the script was looping as fast as it could. I had to kill the process. Any suggestions on a way to limit it?

Sounds like you are using a regular file and not a named pipe. You want to use a named pipe with this setup, which ensures the command is read exactly once and then removed from the “file”. See Step 1.
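
To check what you actually have (and recreate it as a pipe if needed), on the host:

cd /usr/share/hassio/homeassistant/pipes
ls -l host_executor_queue     # a leading "p" means it is a named pipe; "-" means a regular file
sudo rm host_executor_queue   # if it turned out to be a regular file, remove it
sudo mkfifo host_executor_queue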


Trying to use it, but I get the following error in the Home Assistant log:

Timed out running command: echo apc_sua1000xli_unmute > /config/pipes/host_executor_queue, after: 60s
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
    await self._waiter
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/shell_command/__init__.py", line 87, in async_service_handler
    stdout_data, stderr_data = await process.communicate()
  File "/usr/local/lib/python3.10/asyncio/subprocess.py", line 195, in communicate
    stdin, stdout, stderr = await tasks.gather(stdin, stdout, stderr)
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/shell_command/__init__.py", line 86, in async_service_handler
    async with async_timeout.timeout(COMMAND_TIMEOUT):
  File "/usr/local/lib/python3.10/site-packages/async_timeout/__init__.py", line 129, in __aexit__
    self._do_exit(exc_type)
  File "/usr/local/lib/python3.10/site-packages/async_timeout/__init__.py", line 212, in _do_exit
    raise asyncio.TimeoutError
asyncio.exceptions.TimeoutError

If I run the script directly in the terminal on the host, it works.

Looks like you don’t have the monitor script running. If there is nothing reading from the queue, anything trying to write to it will block. FYI you can test the monitor script by running it in the foreground:

$HOME/bin/monitor_ha_queue.sh

It works!!! There were errors in monitor_ha_queue.sh: I had forgotten the closing parenthesis at the end of a line in the new cases, and it prevented the script from running. Thank you!


Thanks for that great suggestion. I messed around with ugly SSH from the container to the host and then found this thread. Am I correct that I should create the named pipe inside the path of the volume mounted in the container?

My use case is a bit different, but I think this will work with some of your help.
I have a shell_command with an argument.

shell_command:
  update_container: /bin/bash /config/shell_command/update_container.sh {{ container_name }}

The script is for updating Docker containers, including Home Assistant itself. It cannot run inside the container, as it would stop itself by stopping the container. I think running this on the host would work, but I have no idea how to pass the argument from Home Assistant to the monitor script on the host and have it processed there.

Maybe I would change my shell_command like this and pass the “container_name” to the service call as data.

shell_command:
  update_container: echo update_container {{ container_name }} > /config/pipes/host_executor_queue

But how can the monitoring script evaluate the argument and pass it to the script?
A simple idea would be to define cases like update_container <container_name> for each container in the monitoring script and hardcode the arguments, but that does not seem very maintainable.
Maybe you have a better idea how to modify the monitoring script.
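
The best idea I have so far is to let read split each line into the command and the rest as arguments, roughly like this (untested sketch; the paths are just placeholders):

#!/bin/bash

# placeholder path; use wherever your pipe actually lives
pipe=/path/to/ha/config/pipes/host_executor_queue

while true; do
  # read puts the first word in cmd and the remainder in args
  if read cmd args < $pipe; then
    case $cmd in
      update_container)
        # placeholder path to the host-side update script
        /home/user/bin/update_container.sh $args
        ;;
    esac
  fi
done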

Great, thanks for sharing this @navels !!

I only tweaked your script a bit in my environment to make it generic, so that any script can be launched without having to update and relaunch the pipe-monitoring script for each new command, and to get default logging. I just expect any command sent to the pipe to match a script that exists in the HA folder:

#!/bin/bash
ha_home=/home/user/homeassistant
pipe=$ha_home/pipes/host_executor_queue

while true; do
  if read line < $pipe; then
        $ha_home/$line.sh >> $ha_home/$line.log 2>&1
  fi
done

so

shell_command:
  create_zip: 'echo create_zip > /config/pipes/host_executor_queue'

will trigger the create_zip.sh script within the Home Assistant home folder.

btw, nohup was not immediately recognized on my Ubuntu host, so I just ran it with $HOME/bin/monitor_ha_queue.sh >&/dev/null & and added @reboot $HOME/bin/monitor_ha_queue.sh via crontab -e


I’m confused by this on multiple levels.

First: on the host, there is no /usr/share/hassio/homeassistant folder, so I assumed I needed to create it.
Then the pipe at /config/pipes/host_executor_queue doesn’t exist, because I put it in the newly created folder.

I’m new to docker, but then I went into my container with this command:
docker exec -it homeassistant /bin/bash

and when I try the echo some_script > /config/pipes/host_executor_queue command, it can’t find the pipe. So I decided to make the pipe in my Home Assistant config folder under a new folder, pipes, and point to it instead:

echo some_script > pipes/host_executor_queue

I’ve had no luck thus far. Here is the setup on my machine:

echo echo_world > pipes/host_executor_queue

which corresponds to this in the script:

#!/bin/bash

pipe=/home/user_name/.homeassistant/pipes/host_executor_queue

while true; do
  if read line < $pipe; then
    case $line in
      echo_world)
        echo 'hello world' > output.txt
        ;;
    esac
  fi
done

but it doesn’t work

How did you install HA? Where is your configuration.yaml file located?

I solved it; it was a permission issue with the output.txt location. I installed with Docker, and the config folder is in /home/username/.homeassistant. I was actually coming back to this message to delete my question, but for anyone else struggling with pipes:

I followed these steps:
my configuration.yaml file is in ~/.homeassistant

sudo mkdir ~/.homeassistant/pipes
cd ~/.homeassistant/pipes
sudo mkfifo host_executor_queue
sudo mkdir ~/.homeassistant/pipe_output
sudo chown username:username ~/.homeassistant/pipe_output
sudo nano $HOME/bin/monitor_ha_queue.sh

It seems that for it to work, the folder that the output file is written to must be owned by the admin user, not root.
Then I made the .sh file the same as above, with the addition of writing to output.txt:

#!/bin/bash

pipe=/home/username/.homeassistant/pipes/host_executor_queue

while true; do
  if read line < $pipe; then
    case $line in
      process_check)
        ps -e | grep -oc 'process_name' > /home/username/.homeassistant/pipe_output/output.txt
        ;;
    esac
  fi
done

then my shell command is

echo process_check > /config/pipes/host_executor_queue ; cat /config/pipe_output/output.txt

I did notice that there is a delay in getting a response, so my shell command required a sleep to let the write to output.txt happen, which gives:

echo process_check > /config/pipes/host_executor_queue ; sleep 0.5 ; cat /config/pipe_output/output.txt

This did what I needed

Of course, this is a ridiculous process for something that I am assuming exists as an integration within HA itself, so I am all ears for other ideas

For anyone interested, this SSHCommand might be interesting, as it won’t require opening any SSH ports on your router. The SSH command towards your host server (running the HA Docker) will be launched within your local network.

this is awesome, very clever and worked so easily, thanks for sharing!
