Looks like my backup ran, but I can't tell if the clean-up service is working because I'm nowhere near my maximum number of files (20) yet, as I moved hosts recently.
I have updated the gist to match the new integration patterns. I have also added a description of how it should be organized. Sorry about the delay in getting this updated; I finally got the chance to upgrade today.
Thanks for the reminder, I have updated the gist.
I'm actually working on converting it to one. Just haven't had much time lately; hopefully this weekend.
Repo has been created to support HACS. Here is the repo. I have tested this on my instance manually; I just need to ensure the automation continues to work. Once I know it works, I might try to get it added to the default list.
@tom_l My automation is still working. Is your automation still functioning after using the HACS plugin? If so, I'll work on checking to see if they are willing to have it in the default list.
I'm not yet using it under HACS.
I’ll give it a go and let you know tomorrow.
I was having the same issue trying to delete old snapshots using shell_command. After a year of being annoyed about it, I finally got it working today. Along the way I figured out a few important things that I feel are worth sharing with the community. Fully functional script at the end.
- shell_command runs in the homeassistant container if you are running HASSIO. I'm not sure about other installation types, but it's probably consistent. The mounted drives in this container are different from the host system that you can access via the developer SSH on port 22222. Keep that in mind when testing via SSH: you'll want to run a shell inside the homeassistant container to test from, or else the paths won't be right if you're on the host.
- An important thing to know about the homeassistant container is that the backups are not mounted, so you have to mount them in your script to access them. In my case, with full HASSIO, I mounted /dev/sda8 on /mnt/data, and the snapshots are at /mnt/data/supervisor/backup. You can mount wherever you want; I chose that location to be consistent with where the host mounts it, so I don't have to remember different paths depending on where I'm logged in.
- Although the backup folder isn't mounted for us in the homeassistant container, /config, /share and /ssl are available. The addons folder is not mounted either, but mounting the drive ourselves gives us that too, if you need it for some reason.
- Note that all of those folders are technically subfolders on /dev/sda8, so when we mounted that drive at /mnt/data they are also accessible through that mount. For example, /config and /mnt/data/supervisor/homeassistant are the same folder on the disk. The other folders are also under the supervisor folder, and their names are consistent (share, ssl, backup, addons). I find it odd that the config folder is called homeassistant on the disk, but whatever.
- A limitation I ran into with shell_command is that it doesn't like to run multiple commands, nor does it handle for loops. I think it's a limitation of the compact shell that runs the command (BusyBox). I worked around this limitation by writing a bash script that does all the work; the shell_command just calls my script. I created the bash script in my config folder so that it's part of my backups.
- My bash script automates creating a temporary mount folder, mounting the drive, looping over the available *.tar files more than 7 days old and deciding which ones to keep or delete, then unmounting and removing the temporary mount folder. Aside from age, the decision to keep or delete requires examining the friendly name of the snapshot to determine whether it's one of the automated backups, because I want to retain all manual backups regardless of age. I found a way to get the friendly name from the snapshot.json file inside the tar file without having to write it to the filesystem, so it's nice and tidy.
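That tar trick can be demonstrated standalone. This little sketch builds a throwaway tar containing a snapshot.json (the file name and JSON contents here are made up for the demo) and then streams the file straight to stdout with tar's O flag, so nothing extra is written to disk; the real script just pipes that output through jq to pull out the .name field:

```shell
# Build a throwaway archive containing a fake snapshot.json,
# then read the file back out of the tar without extracting it to disk.
cd "$(mktemp -d)"
printf '{"name": "Automated Backup Demo"}' > snapshot.json
tar cf demo.tar ./snapshot.json
tar xfO demo.tar ./snapshot.json
```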
Here is my shell script, for anyone interested:
#!/bin/bash
echo "Cleaning up automated snapshots older than 7 days. Retaining manual backups of any age."
mkdir -p /mnt/data
mount /dev/sda8 /mnt/data
# -maxdepth 1 keeps the search to the backup folder itself; -mtime +7 matches files older than 7 days
for i in $(find /mnt/data/supervisor/backup -maxdepth 1 -name '*.tar' -mtime +7)
do
    # Read the snapshot's friendly name straight out of the tar, no temp files needed
    NAME=$(tar xfO "$i" ./snapshot.json | jq .name)
    AGE="$((($(date +%s) - $(date +%s -r "$i"))/86400))"
    if [[ $NAME == *Automated* ]]; then
        echo " - removing $(basename "$i") from $AGE days ago ($NAME)"
        rm "$i"
    else
        echo " + keeping $(basename "$i") from $AGE days ago ($NAME)"
    fi
done
umount /mnt/data
rmdir /mnt/data
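The age arithmetic in the script can be sanity-checked on its own: seconds since the file's modification time, divided by 86400 seconds per day. The file name below is made up for the demo, and touch -d / date -r need GNU (or BusyBox) coreutils, which is what you get in the homeassistant container:

```shell
# Create a file whose mtime is 10 days in the past, then compute its age in days
cd "$(mktemp -d)"
touch -d '10 days ago' fake_snapshot.tar
AGE=$((($(date +%s) - $(date +%s -r fake_snapshot.tar)) / 86400))
echo "$AGE"
```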
To call it, I have this in my configuration.yaml:
shell_command:
delete_old_backups: '/config/tools/clean_snapshots.sh'
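For completeness, a minimal automation sketch that calls it weekly; the alias, trigger time, and weekday are placeholders I made up, not from my actual config:

```yaml
automation:
  - alias: Weekly snapshot clean-up
    trigger:
      - platform: time
        at: "03:00:00"
    condition:
      - condition: time
        weekday:
          - sun
    action:
      - service: shell_command.delete_old_backups
```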
I have used the HACS integration and the installation went fine. Quick and no surprises.
I have made a small automation for a weekly clean-up, and that triggers fine as well. I make a snapshot every day and clean up once a week. However, with no errors or notifications, the clean-up is not happening. This is in my configuration.yaml:
# *** Snapshot Clean Up Service Integration ***
clean_up_snapshots_service:
host: http://homeassistant.local:8123
token: eyJ0eXAiOiJKV1QiLC...
number_of_snapshots_to_keep: 3
(I have cut the token in this edit)
I see my number of snapshots growing every day, but the clean-up never runs. I have triggered the automation manually, but to no effect; I can't figure it out. The snapshots are in the Backup directory on the root, which is used by the 'make snapshot' service.
I have the impression that the clean-up service is looking at the wrong directory, but I can't find where it's going wrong. Do you have any clues for me?
Is there anything in the logs stating that an error is occurring? The plugin uses the built-in hassio functions to delete the backups, so it doesn't specify the location of the backups.
I have also not tried the homeassistant.local URL myself. I think there might be code that sets it to https if the URL is not an IP, which may be causing the problem. That's probably the line giving you issues.
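I haven't confirmed the exact line, but the kind of check I mean would look something like this. This is purely an illustrative sketch, not the plugin's actual code: if the host isn't a bare IP address, the scheme gets rewritten to https, which would quietly break a plain-http hostname URL:

```shell
# Hypothetical illustration: force https unless the host is a dotted-quad IP.
# http://homeassistant.local:8123 fails the IP test and gets rewritten.
host="http://homeassistant.local:8123"
if [[ ! $host =~ ^https?://[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+ ]]; then
    host="${host/#http:/https:}"
fi
echo "$host"
```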
Hi Tom.
My logs don't show any errors. The trigger in the automation fires on time, but nothing further happens, so I think something is wrong in the path.
I will change it to an IP first.
Ok. If that does fix it, I will work on a solution for not using SSL when the URL scheme doesn't have https.
The IP worked, so that solved it. Just remember: if you reallocate your host, reconfigure the IP address.
Otherwise a very nice service. Thank you !!!
I’ve created a new release which should have a fix in for using “http” urls.
Thanks for this code; it works perfectly to remove backups, no need to do it manually.