Tips for shell_command and syncing snapshots to windows network share

Once I got my shell_command script to delete snapshots older than 7 days so my disk doesn’t fill up, I spent some time figuring out how to sync my snapshots to a Windows network share. It wasn’t too bad, once I figured out how to mount an SMB share. Everything I’ve done is available in HASSIO “out of the box”, meaning I didn’t have to install anything.


Essentially, we just need to mount the local device holding the snapshots and mount the SMB network share, then run a loop to do the sync and umount both when done.

Mounting the local device:

mkdir /mnt/data
mount /dev/sda8 /mnt/data

Mounting the Windows network share:

mkdir /mnt/smb
mount -t cifs -o "username=xxxx,password=xxxx" //192.168.1.10/share-name /mnt/smb

To keep the username and password out of the script and out of configuration.yaml I leveraged secrets.yaml for the shell_command to be called.

configuration.yaml

shell_command:
  manage_snapshots: !secret snapshot_command

secrets.yaml

snapshot_command: "/config/tools/manage_snapshots.sh //192.168.1.10/share-name redacted-username redacted-password"

In that command, use your appropriate IP or hostname, share name, username and password.

IMPORTANT: if you have debug logging enabled, homeassistant will log the command line that is called when there is output generated from the command, so your secrets are still not completely safe with this approach. You can change the script to not generate any output if you need to avoid leaking secrets in the homeassistant log when you have debug logging enabled.


It’s also worth pointing out are how my script gets the friendly name of the snapshot and determines the age of the snapshot. The friendly name is important because that’s how I distinguish automated snapshots vs. manual snapshots (I have “automated” in the friendly name).

Get friendly name of snapshot without extracting to filesystem:

NAME=`tar xfO abcdef012.tar ./snapshot.json | jq .name`

The magic of the above statement is extracting snapshot.json from the tar directly to stdout and piping it to jq to get the name value. BTW, jq is powerful JSON manipulation if you haven’t heard of it.


Get age of snapshot file:

AGE="$((($(date +%s) - $(date +%s -r abcdef012.tar))/86400))"

The above statement takes today’s date in seconds and subtracts the file date in seconds, then divides that by the number of seconds in a day.


Now for the full script.

This is exactly what I have running in my system. It is a super-set of the original cleanup script in my previous post with some additional handling for existing folders and if the mounts already exist. I wrapped some of the logic into functions to make the main logic easier to follow.

I’m doing an “echo” sync, which is to ensure remote mirrors local. If you use this script, be aware that any tar files on remote that don’t exist on local will be deleted. Obviously you can change the behavior to suit your needs.

There are two loops in this script. The first loop checks local tar files to see if they need to be copied to remote, or are too old and need to be deleted from local and remote. The second loop checks remote tar files to delete any that don’t exist locally.

If you’ve read my other post for the auto-delete script, you may notice that I’m using ls -t $BACKUP/*.tar for my local file loop instead of the find command with -mtime +7. This is for 2 reasons. First, I have to loop all the files, not just the old ones, so I can copy new ones to the remote share. Second, I’m using ls -t to sort the files by date so that my log shows them in order. I just didn’t like the default alphabetic sort making the ages out of order in my output.
/config/tools/manage_snapshots.sh

#!/bin/bash

echo "Managing snapshots: retaining manual, deleting automated > 7 days ago and echoing changes to remote storage."

SMB=$1
USR=$2
PWD=$3
DISK="/dev/sda8"
MOUNT1="/mnt/data"
MOUNT2="/mnt/smb"
BACKUP="$MOUNT1/supervisor/backup"

# this function makes directory if missing, or aborts the script on failure
make_dir () {
	if [ ! -d "$1" ]; then
		echo "  # creating $1 folder for mounting"
		mkdir "$1"
		if [ $? -ne 0 ]; then
			echo "  ! failed to create $1 folder"
			exit 1
		fi
	fi
}

# this function mounts a local device if not already mounted, or aborts the script on failure
mount_dev () {
	make_dir "$1" || exit 1
	if ! mountpoint -q "$1"; then
		echo "  # mounting $2 at $1"
		mount "$2" "$1"
		if [ $? -ne 0 ]; then
			echo "  ! failed to mount $2 at $1"
			exit 1
		fi
	fi	
}

# this function mounts a windows network share (SMB) if not already mounted, or aborts the script on failure
mount_smb () {
	make_dir "$1" || exit 1
	if ! mountpoint -q "$1"; then
		echo "  # mounting $2 at $1"
		mount -t cifs -o "vers=3.0,username=$3,password=$4" "$2" "$1"
		if [ $? -ne 0 ]; then
			echo "  ! failed to mount $2 at $1"
			exit 1
		fi
	fi	
}

# this function will unmount any mount point if it's a valid mount
umount_any () {
	if mountpoint -q "$1"; then
		echo "  # unmounting $1"
		umount "$1"
	fi
	# I decided not to bother deleting mount points. It's not really necessary because they will get re-used.
	# if [ -d "$1" ] && ! mountpoint -q "$1"; then
	# 	echo "  # removing mount point $1"
	# 	rmdir "$1"
	# fi
}

mount_dev "$MOUNT1" "$DISK"
mount_smb "$MOUNT2" "$SMB" "$USR" "$PWD"

#review local files to delete old automated snapshots and echo new/delete
for i in `ls -t $BACKUP/*.tar`
do 
  NAME=`tar xfO $i ./snapshot.json | jq .name`
  AGE="$((($(date +%s) - $(date +%s -r $i))/86400))"
  REMOTE="$MOUNT2/$(basename $i)"
  if [[ $NAME == *Automated* && $AGE -gt 7 ]]; then 
    echo "  - deleting $(basename $i) from $AGE days ago -> $NAME"
    rm $i
    if [ -f "$REMOTE" ]; then
    	echo "  - echoing  deletion of $(basename $i) to remote storage"
    	rm $REMOTE
    fi
  else
    echo "  + keeping  $(basename $i) from $AGE days ago -> $NAME"
    if [ ! -f "$REMOTE" ]; then
    	echo "  > echoing  $(basename $i) to remote storage"
    	cp $i $REMOTE
    fi
  fi
done

#review remote files to delete obsolete remote files we no longer have locally
for i in `ls -t $MOUNT2/*.tar`
do
	LOCAL="$BACKUP/$(basename $i)"
	if [ ! -f "$LOCAL" ]; then
		echo "  - echoing  deletion of $(basename $i) to remote storage"
		rm $i
	fi
done

umount_any "$MOUNT1"
umount_any "$MOUNT2"


4 Likes

After posting this I discovered homeassistant will log the command line that is called when there is output generated from the command, so your secrets are leaked into the homeassistant log if you have debug logging enabled. You can change the script to not generate any output if you have debug logging enabled and need to avoid leaking. I edited the post to mention this.

Also annoying, the output from shell_command escapes line breaks and is ugly to look at in the homeassistant log.

Regarding both issues I mentioned above, I found a reasonable workaround. The script can redirect stdout and stderr to a log file so that we don’t trigger the logging by homeassistant with our secrets, and we get our intended line breaks.

#simple trick to redirect stdout and stderr to a log file from inside a script:
{
  #main script content goes here
} >> "/config/logs/$(basename $0).log" 2>&1

The curly braces around the script content make it an anonymous function and we can redirect all of the output from that function.

I originally wanted to write the log file to the backup folder, but since we unmount it in the main script we don’t have access to it. I created a logs folder in my config folder, since that folder is available to us when the script runs.

The $(basename $0) part of the filename is giving us the name of our script file. You can skip that and use a literal name, but I like having it figure that out for me.

The 2>&1 part is redirecting stderr to stdout so we get both in the file.

For anyone not familiar with >>, I’m using that to append output to the named file. If I used > it would clobber the file each time the script runs, so I prefer appending with >>.


I had originally wanted to do my logging to systemd (journalctl) so that it was consistent with other system logging and leverage existing cleanup rotation. Unfortunately the systemd-cat command is not available in the BusyBox environment we are executing under when homeassistant runs our script.

For anyone interested in what that would look like:

#simple trick to redirect stdout and stderr to systemd (journalctl):
{
  #main script content goes here
} 2>&1 | systemd-cat -t manage_snapshots.sh

To view the logs in SSH or via the console you would use:

journalctl -t manage_snapshots.sh

I created an issue for the secrets leak and escaped line breaks: #37705

Hi Keith
Any suggestion about failed mounting command (Permission denied)
My setup is the latest Hassio 4.11 with a /mnt/smb folder
My NAS is on 192.168.0.18 using user/password and has a /Volume_1 folder

The commande I use is

mount -t cifs -o “username=someone,password=something,workgroup=WORKGROUP” //192.168.0.18/Volume_1 /mnt/smb

the error I got is

mount: mounting //192.168.0.18/Volume_1 on /mnt/smb failed: Permission denied

Thanks in advance

Have you tried connecting to that smb share with another computer to make sure it’s working for that user & password? If that’s working fine, maybe try adding “vers=3.0,” to the start of your -o in case your default isn’t capable of the security scheme used by the smb share. Beyond that, I don’t have any ideas.