Once I got my shell_command script to delete snapshots older than 7 days so my disk doesn’t fill up, I spent some time figuring out how to sync my snapshots to a Windows network share. It wasn’t too bad, once I figured out how to mount an SMB share. Everything I’ve done is available in HASSIO “out of the box”, meaning I didn’t have to install anything.
Essentially, we just need to mount the local device holding the snapshots and mount the SMB network share, then run a loop to do the sync and umount both when done.
Mounting the local device:
mkdir /mnt/data
mount /dev/sda8 /mnt/data
Mounting the Windows network share:
mkdir /mnt/smb
mount -t cifs -o "username=xxxx,password=xxxx" //192.168.1.10/share-name /mnt/smb
To keep the username and password out of the script and out of configuration.yaml, I used secrets.yaml to hold the full command line that the shell_command calls.
configuration.yaml
shell_command:
  manage_snapshots: !secret snapshot_command
secrets.yaml
snapshot_command: "/config/tools/manage_snapshots.sh //192.168.1.10/share-name redacted-username redacted-password"
In that command, substitute your own IP address or hostname, share name, username and password.
IMPORTANT: if you have debug logging enabled, Home Assistant will log the full command line whenever the command generates output, so your secrets are still not completely safe with this approach. If you need to avoid leaking secrets into the Home Assistant log, change the script so it produces no output.
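For example, one way to silence all output (a sketch, not part of my actual setup; the redirect is my addition) is to append it to the command in secrets.yaml:

```yaml
# secrets.yaml -- hypothetical variant that discards stdout and stderr,
# so the command line (with credentials) is never echoed into a debug log entry
snapshot_command: "/config/tools/manage_snapshots.sh //192.168.1.10/share-name redacted-username redacted-password > /dev/null 2>&1"
```

The tradeoff is that you also lose the script's progress messages, so you may prefer redirecting to a file instead of /dev/null.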
It’s also worth pointing out how my script gets the friendly name of a snapshot and determines its age. The friendly name is important because that’s how I distinguish automated snapshots from manual snapshots (I have “automated” in the friendly name).
Get friendly name of snapshot without extracting to filesystem:
NAME=`tar xfO abcdef012.tar ./snapshot.json | jq .name`
The magic of the above statement is extracting snapshot.json from the tar directly to stdout and piping it to jq to pull out the name value. By the way, jq is a powerful JSON manipulation tool if you haven’t come across it.
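If jq is new to you, here’s a standalone illustration of the same pattern using a throwaway tar instead of a real snapshot (note I’ve added `-r` here so jq prints the raw string without surrounding quotes; my script above uses plain `.name`, which keeps the quotes):

```shell
# Build a tiny tar containing a snapshot.json, then extract that member
# straight to stdout and pipe it to jq -- same pattern as the one-liner above.
DEMO=$(mktemp -d)
echo '{"name": "Automated backup", "date": "2020-01-01"}' > "$DEMO/snapshot.json"
tar -C "$DEMO" -cf "$DEMO/demo.tar" ./snapshot.json
NAME=`tar xfO "$DEMO/demo.tar" ./snapshot.json | jq -r .name`
echo "$NAME"   # -> Automated backup
```

Nothing is extracted to the filesystem; the `O` flag sends the member to stdout.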
Get age of snapshot file:
AGE="$((($(date +%s) - $(date +%s -r abcdef012.tar))/86400))"
The above statement takes today’s date in seconds and subtracts the file date in seconds, then divides that by the number of seconds in a day.
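As a standalone sanity check of that arithmetic (the file path and the way I back-date it are just for illustration), you can fake an old file and run the same expression:

```shell
# Back-date a file's mtime by exactly 10 days using GNU touch's "@epoch"
# syntax, then compute its age in whole days with the same expression.
NOW=$(date +%s)
touch -d "@$((NOW - 10*86400))" /tmp/fake_snapshot.tar
AGE="$((($(date +%s) - $(date +%s -r /tmp/fake_snapshot.tar))/86400))"
echo "$AGE"   # -> 10
```

The integer division means the age only ticks over once a full 86400 seconds have elapsed, which is what we want for a “days old” comparison.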
Now for the full script.
This is exactly what I have running in my system. It is a super-set of the original cleanup script in my previous post with some additional handling for existing folders and if the mounts already exist. I wrapped some of the logic into functions to make the main logic easier to follow.
I’m doing an “echo” sync, which is to ensure remote mirrors local. If you use this script, be aware that any tar files on remote that don’t exist on local will be deleted. Obviously you can change the behavior to suit your needs.
There are two loops in this script. The first loop checks local tar files to see if they need to be copied to remote, or are too old and need to be deleted from local and remote. The second loop checks remote tar files to delete any that don’t exist locally.
If you’ve read my other post for the auto-delete script, you may notice that I’m using ls -t $BACKUP/*.tar for my local file loop instead of the find command with -mtime +7. This is for two reasons. First, I have to loop over all the files, not just the old ones, so I can copy new ones to the remote share. Second, ls -t sorts the files by date so that my log shows them in order; I just didn’t like the default alphabetic sort making the ages appear out of order in my output.
/config/tools/manage_snapshots.sh
#!/bin/bash
echo "Managing snapshots: retaining manual, deleting automated > 7 days ago and echoing changes to remote storage."
SMB=$1
USR=$2
PWD=$3   # note: this shadows the shell's built-in $PWD variable; harmless here because the script never uses $PWD
DISK="/dev/sda8"
MOUNT1="/mnt/data"
MOUNT2="/mnt/smb"
BACKUP="$MOUNT1/supervisor/backup"
# this function makes directory if missing, or aborts the script on failure
make_dir () {
    if [ ! -d "$1" ]; then
        echo " # creating $1 folder for mounting"
        mkdir "$1"
        if [ $? -ne 0 ]; then
            echo " ! failed to create $1 folder"
            exit 1
        fi
    fi
}
# this function mounts a local device if not already mounted, or aborts the script on failure
mount_dev () {
    make_dir "$1" || exit 1
    if ! mountpoint -q "$1"; then
        echo " # mounting $2 at $1"
        mount "$2" "$1"
        if [ $? -ne 0 ]; then
            echo " ! failed to mount $2 at $1"
            exit 1
        fi
    fi
}
# this function mounts a windows network share (SMB) if not already mounted, or aborts the script on failure
mount_smb () {
    make_dir "$1" || exit 1
    if ! mountpoint -q "$1"; then
        echo " # mounting $2 at $1"
        mount -t cifs -o "vers=3.0,username=$3,password=$4" "$2" "$1"
        if [ $? -ne 0 ]; then
            echo " ! failed to mount $2 at $1"
            exit 1
        fi
    fi
}
# this function will unmount any mount point if it's a valid mount
umount_any () {
    if mountpoint -q "$1"; then
        echo " # unmounting $1"
        umount "$1"
    fi
    # I decided not to bother deleting mount points. It's not really necessary because they will get re-used.
    # if [ -d "$1" ] && ! mountpoint -q "$1"; then
    #     echo " # removing mount point $1"
    #     rmdir "$1"
    # fi
}
mount_dev "$MOUNT1" "$DISK"
mount_smb "$MOUNT2" "$SMB" "$USR" "$PWD"
#review local files to delete old automated snapshots and echo new/delete
for i in `ls -t $BACKUP/*.tar`
do
    NAME=`tar xfO "$i" ./snapshot.json | jq .name`
    AGE="$((($(date +%s) - $(date +%s -r "$i"))/86400))"
    REMOTE="$MOUNT2/$(basename "$i")"
    if [[ $NAME == *Automated* && $AGE -gt 7 ]]; then
        echo " - deleting $(basename "$i") from $AGE days ago -> $NAME"
        rm "$i"
        if [ -f "$REMOTE" ]; then
            echo " - echoing deletion of $(basename "$i") to remote storage"
            rm "$REMOTE"
        fi
    else
        echo " + keeping $(basename "$i") from $AGE days ago -> $NAME"
        if [ ! -f "$REMOTE" ]; then
            echo " > echoing $(basename "$i") to remote storage"
            cp "$i" "$REMOTE"
        fi
    fi
done
#review remote files to delete obsolete remote files we no longer have locally
for i in `ls -t $MOUNT2/*.tar`
do
    LOCAL="$BACKUP/$(basename "$i")"
    if [ ! -f "$LOCAL" ]; then
        echo " - echoing deletion of $(basename "$i") to remote storage"
        rm "$i"
    fi
done
umount_any "$MOUNT1"
umount_any "$MOUNT2"
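To run this on a schedule, you can call the shell_command from an automation. This is a sketch, not my exact configuration; the alias and the 3 AM trigger time are my own choices here:

```yaml
# automations.yaml -- hypothetical nightly trigger for the shell_command above
- alias: "Nightly snapshot management"
  trigger:
    - platform: time
      at: "03:00:00"
  action:
    - service: shell_command.manage_snapshots
```

Pick a time that doesn’t collide with whatever automation creates your snapshots, so you aren’t syncing a tar that is still being written.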