After half a day of work on the shutdown shell_command for a WOL switch, I want to share my findings in case they are helpful for others.
First I tried the RPC add-on, which did not work for me since I'm using Windows 11, and from what ChatGPT told me, this has not been supported since Windows 10. So I moved on to the shell_command approach. I installed the OpenSSH server via the Windows optional features dialog and then needed another few hours of checking configurations on both sides because I couldn't get a successful connection from the Pi to the PC. It turned out that, because I tested this with a user who has admin rights on my PC, and since I wanted to use a key file, I had to put the public key into the administrators' authorized keys file at C:\ProgramData\ssh\administrators_authorized_keys and set its permissions for administrators like this:
icacls "C:\ProgramData\ssh\administrators_authorized_keys" /inheritance:r
icacls "C:\ProgramData\ssh\administrators_authorized_keys" /grant BUILTIN\Administrators:F
icacls "C:\ProgramData\ssh\administrators_authorized_keys" /grant SYSTEM:R
and within the sshd_config the following options need to be enabled:
Port 22
PubkeyAuthentication yes
PasswordAuthentication no
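For context: the reason admin accounts need that shared key file is a block near the end of the default C:\ProgramData\ssh\sshd_config shipped with Windows (reproduced here from memory, so verify against your own file):

```
Match Group administrators
       AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys
```

If you commented out this Match block instead, the per-user %USERPROFILE%\.ssh\authorized_keys file would be used even for administrators, but I kept the default.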
After this, I could successfully use ssh from the Pi to log into my PC and run the shutdown command:
shutdown_pc: "ssh USER@IP_PC 'shutdown /s /f /t 0'"
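For completeness, this is roughly how the shell_command ties into the WOL switch on my side; a sketch of the wake_on_lan switch platform, where the MAC address and name are placeholders you would replace with your own:

```yaml
switch:
  - platform: wake_on_lan
    name: "PC"
    mac: "AA:BB:CC:DD:EE:FF"
    host: IP_PC
    turn_off:
      action: shell_command.shutdown_pc
```

Turning the switch on sends the magic packet; turning it off runs the shell_command above.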
But when I used the switch, nothing happened. So I checked the command by running it from Developer Tools under the Actions tab and got this error message:
stdout: ""
stderr: Host key verification failed.
returncode: 255
I then changed the shell command to write everything into a debug.log like this:
shutdown_pc: "ssh -v -o LogLevel=DEBUG USER@IP_PC 'shutdown /s /f /t 0' > /config/ssh_debug.log 2>&1"
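As a side note on the redirection: "> file 2>&1" sends stdout into the file and then points stderr at the same place, so both streams end up in one log. A tiny stand-alone illustration with a throwaway path:

```shell
# stdout goes to the file via >, then stderr is duplicated onto it via 2>&1,
# so both lines land in the same log file.
{ echo "normal output"; echo "error output" >&2; } > /tmp/redir_demo.log 2>&1
cat /tmp/redir_demo.log
```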
There I could read that it cannot find a private key file:
OpenSSH_9.7p1, OpenSSL 3.3.1 4 Jun 2024
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 22: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: Connecting to 192.168.0.XXX [192.168.0.XXX] port 22.
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa_sk type -1
debug1: identity file /root/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_ed25519_sk type -1
debug1: identity file /root/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_9.7
debug1: Remote protocol version 2.0, remote software version OpenSSH_9.7
debug1: compat_banner: match: OpenSSH_9.7 pat OpenSSH* compat 0x04000000
debug1: Authenticating to 192.168.0.XXX:22 as 'user'
debug1: load_hostkeys: fopen /root/.ssh/known_hosts: No such file or directory
debug1: load_hostkeys: fopen /root/.ssh/known_hosts2: No such file or directory
debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory
debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: [email protected]
debug1: kex: host key algorithm: ssh-ed25519
debug1: kex: server->client cipher: aes128-ctr MAC: [email protected] compression: none
debug1: kex: client->server cipher: aes128-ctr MAC: [email protected] compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: SSH2_MSG_KEX_ECDH_REPLY received
debug1: Server host key: ssh-ed25519 SHA256:p5dJJ....rest_of_the_key
debug1: load_hostkeys: fopen /root/.ssh/known_hosts: No such file or directory
debug1: load_hostkeys: fopen /root/.ssh/known_hosts2: No such file or directory
debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory
debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory
debug1: hostkeys_find_by_key_hostfile: hostkeys file /root/.ssh/known_hosts does not exist
debug1: hostkeys_find_by_key_hostfile: hostkeys file /root/.ssh/known_hosts2 does not exist
debug1: hostkeys_find_by_key_hostfile: hostkeys file /etc/ssh/ssh_known_hosts does not exist
debug1: hostkeys_find_by_key_hostfile: hostkeys file /etc/ssh/ssh_known_hosts2 does not exist
debug1: read_passphrase: can't open /dev/tty: No such device or address
Host key verification failed.
Especially the line “debug1: identity file /root/.ssh/id_rsa type -1” caught my attention.
Since I had used the Pi imaging tool to write HAOS to my SD card, I didn't know that Home Assistant runs as a Docker container. After trying this command:
docker exec -it homeassistant bash
I was inside the container and checked the /root/.ssh folder, which did not exist.
So I created the folder and copied everything over to the container with the following commands:
Create the needed folder inside the container:
mkdir -p /root/.ssh
chmod 700 /root/.ssh
Outside the container:
docker cp /path/to/your/ssh_keys/id_rsa homeassistant:/root/.ssh/id_rsa
docker cp /path/to/your/ssh_keys/id_rsa.pub homeassistant:/root/.ssh/id_rsa.pub
docker cp /path/to/known_hosts homeassistant:/root/.ssh/known_hosts
Inside the container again:
chmod 600 /root/.ssh/id_rsa
chmod 644 /root/.ssh/id_rsa.pub
chmod 644 /root/.ssh/known_hosts
After this change, the shell_command ran successfully, and I'm now able to shut down my PC with the WOL switch.
Update:
Not perfect yet. After a core update, the keys in the container were lost because a new Docker container was pulled.
We have to improve this. Since HAOS on the Pi is very strict and lets us neither mount the
/data/.ssh folder to /root/.ssh within the container nor add a Docker hook, we will copy the keys to a location that is also available inside the container. Then we create a script that creates the required path, copies the files, and sets the permissions. Finally, we create an automation that checks for a changed HA Core version and, if there is a new version, executes our script.
Create the path, copy the files and create the script:
mkdir -p /config/sol/ssh_keys
cp /data/.ssh/* /config/sol/ssh_keys
chmod 700 /config/sol/ssh_keys && chmod 600 /config/sol/ssh_keys/*
nano /config/sol/restore_ssh_keys.sh
Copy the following into your script:
#!/bin/bash
# Target folder in the container
TARGET_DIR="/root/.ssh"
SOURCE_DIR="/config/sol/ssh_keys"

echo "Start copying the SSH keys within the container..."

# Check whether the source folder exists
if [ ! -d "$SOURCE_DIR" ]; then
    echo "Error: The source folder $SOURCE_DIR does not exist. Abort."
    exit 1
fi

# Create the target folder if it is not there yet
if [ ! -d "$TARGET_DIR" ]; then
    echo "Create the target folder $TARGET_DIR..."
    if ! mkdir -p "$TARGET_DIR"; then
        echo "Error: Target folder $TARGET_DIR could not be created. Abort."
        exit 1
    fi
fi

# Copy the files (also when the folder already exists, in case the keys inside it are missing)
echo "Copy SSH keys to $TARGET_DIR..."
if ! cp -r "$SOURCE_DIR/." "$TARGET_DIR/"; then
    echo "Error: SSH keys could not be copied. Abort."
    exit 1
fi

# Set permissions
echo "Set permissions for the target folder..."
if ! chmod 700 "$TARGET_DIR"; then
    echo "Error: Permissions for $TARGET_DIR could not be set. Abort."
    exit 1
fi
if ! chmod 600 "$TARGET_DIR"/*; then
    echo "Error: Permissions for files in $TARGET_DIR could not be set. Abort."
    exit 1
fi

echo "SSH keys successfully copied and permissions set."
exit 0
Make the script executable:
chmod +x /config/sol/restore_ssh_keys.sh
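Before wiring the script into Home Assistant, its copy-and-chmod core can be sanity-checked against throwaway folders under /tmp. The paths and the dummy key below are made up purely for this test; nothing here touches /root/.ssh:

```shell
# Simulate the script's core steps with throwaway directories.
rm -rf /tmp/demo_src /tmp/demo_ssh
mkdir -p /tmp/demo_src /tmp/demo_ssh
echo "dummy-key" > /tmp/demo_src/id_rsa
cp -r /tmp/demo_src/. /tmp/demo_ssh/
chmod 700 /tmp/demo_ssh
chmod 600 /tmp/demo_ssh/*
# The folder should now be 700 and the key 600, as on a real ~/.ssh.
stat -c '%a %n' /tmp/demo_ssh /tmp/demo_ssh/id_rsa
```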
Create the shell command in your configuration.yaml:
shell_command:
  restore_ssh_keys: "/config/sol/restore_ssh_keys.sh"
Exit the shell and restart Home Assistant.
Install the integration “Version”, choose “local” and click “add”.
Create a new automation and paste the following code into it:
alias: Restoring SSH keys after a core update
description: >-
  Copies the SSH keys back into the Docker container after a core update.
triggers:
  - entity_id: sensor.current_version
    trigger: state
conditions:
  - condition: template
    value_template: "{{ trigger.to_state.state != trigger.from_state.state }}"
actions:
  - action: shell_command.restore_ssh_keys
    data: {}
Update 2:
There was a core update, and the automation was not triggered.
For some reason, the state trigger on sensor.current_version never fired; most likely the version changes while Home Assistant itself is down for the update, so no state-change event is ever observed.
However, I changed the approach to use a helper in which we store the last known Core version, and we now trigger the automation every 10 minutes.
We compare the current version via states('sensor.current_version') against the previously stored version in our helper.
If the version has changed, we run the script and store the new version.
I’ve tested it, and it works now.
So create the input_text helper in your configuration.yaml:
# Input text section.
input_text:
  # Stores the current Core version.
  last_core_version:
    name: "Last Core Version"
    initial: "unknown"
Change the automation to the code below:
alias: Restoring SSH keys after a core update
description: >-
  Copies the SSH keys back into the Docker container after a core update.
triggers:
  - minutes: /10
    trigger: time_pattern
conditions:
  - condition: template
    value_template: >-
      {{ states('sensor.current_version') !=
         states('input_text.last_core_version') }}
actions:
  - action: shell_command.restore_ssh_keys
    data: {}
  - action: input_text.set_value
    target:
      entity_id: input_text.last_core_version
    data:
      value: "{{ states('sensor.current_version') }}"
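At its core, the automation is just a string comparison against a persisted value. The same idea as a plain-shell sketch, where the version strings and the /tmp file are made-up stand-ins for the sensor and the input_text helper:

```shell
# Stand-ins for states('sensor.current_version') and input_text.last_core_version.
current_version="2024.12.1"
echo "2024.11.3" > /tmp/last_core_version
# Condition: only act when the stored value differs from the current one.
if [ "$current_version" != "$(cat /tmp/last_core_version)" ]; then
  echo "version changed, run shell_command.restore_ssh_keys"
  # Mirrors the input_text.set_value action: persist the new version.
  echo "$current_version" > /tmp/last_core_version
fi
cat /tmp/last_core_version
```

On the next 10-minute run the two values match, so the restore step is skipped.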
I leave the whole story unchanged for historical reasons.
Happy automation and happy scripting!