There are two things I’ve learned in all the time I’ve been using computers (which is more than a couple of decades):
There’s always more to learn
However many ways you think there are of doing anything, like backups, somebody else will eventually find another option that’s equally valid
There are two things to do when backing up (and recovering) your HA system, IMO:
Keep a bootable backup of the HA install to enable a quick recovery
Back up the configuration file(s) regularly so you can recover from mistakes, even if it takes a week, or longer, to notice them
The way I’m tackling it is:
For the first, I’ve started with rpi-clone. It isn’t perfect for my purposes, but I can easily make the changes I want so that it operates without intervention. It uses dd to create an initial image, then future runs use rsync to copy only the changes. I’ll run it periodically so I’ve got an “instant recovery” option: probably weekly while I’m evolving my configuration rapidly, dropping off to monthly or so eventually. I’m using a USB micro SD adapter for this purpose, with a second SD card. I’ll add a second pair of these later so that the backups can alternate; that way, if things become corrupted during a backup, I don’t lose everything.
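As a sketch, a weekly unattended run could be scheduled like this (illustrative only; the device name and rpi-clone’s -u unattended option are assumptions to check against your own setup):

```shell
# /etc/cron.d/rpi-clone (illustrative): clone to the SD card in the USB
# adapter (assumed to appear as /dev/sdb) every Sunday at 03:00
0 3 * * 0 root /usr/local/sbin/rpi-clone sdb -u >> /var/log/rpi-clone.log 2>&1
```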
I already use rsnapshot on my network, with another Pi 3 acting as my backup server, so I’ve simply added my HA system to the configuration there. I can apply those backups to any freshly built Raspbian Lite install to recover with a little effort. This ensures that I’m backing up my configuration files every 3 hours, and I’ll build up a rolling history of my changes that expires the way I’ve configured it. Primarily I expect to use this to recover from mistakes made while editing the configuration files.
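For reference, the relevant rsnapshot.conf entries might look something like this (the hostname, paths, and retention counts are illustrative, not my actual config; note that rsnapshot insists on tabs, not spaces, between fields):

```
# /etc/rsnapshot.conf excerpt (illustrative) -- fields MUST be tab-separated
retain	hourly	8
retain	daily	7
retain	weekly	4
# pull the HA config over rsync/ssh from the HA host
backup	root@ha-pi:/home/hass/.homeassistant/	ha-pi/
```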
As I switched from SQLite to MySQL (to see if that would help performance; it has, a little), I’m using Percona XtraBackup to back up my MySQL database. It drops the backups in a location that’s backed up by rsnapshot, and by my cloud backup scripts (below).
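The XtraBackup step can be as simple as a single command in a nightly cron job (a sketch, assuming XtraBackup 2.4’s xtrabackup binary and MySQL credentials in /root/.my.cnf; the paths are illustrative):

```shell
# Dump a full backup into a dated directory that rsnapshot and the
# cloud backup scripts already cover
xtrabackup --backup --target-dir=/var/backups/mysql/"$(date +%F)"
```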
Since I’m using the B2 cloud backup service, I’ve installed rclone and configured it to back up any changes to my HA install hourly. B2 also supports versions, so I’ll be able to recover my configuration from any hourly backup; currently I’ve not set any expiry, so they’ll be around “forever”. I’m using this so that if the worst happens and my computers are lost (fire/theft/whatever) I can still recover everything that matters; adding my HA config is a trivial overhead on top of what I’m already backing up.
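The hourly sync is just a crontab entry (illustrative; it assumes a B2 remote named “b2” has already been set up with `rclone config`, and the bucket/paths are placeholders):

```shell
# crontab entry for the hourly cloud sync (illustrative)
0 * * * * rclone sync /home/hass/.homeassistant b2:ha-backup/homeassistant
```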
Yes, SSH keys should always be created as the user that’ll be using them. You don’t strictly need to, but it ensures that the files are in the right location on your SSH client (which in this case is your HA server).
Thank you, sir, again, I appreciate the time you are investing and the patience you have demonstrated. I wish this could be stickied as it is chock full of good information.
You have given me a ton to work with. My hope is to at some point have a rack mountable server. Hopefully with the space to mount my Cisco gear and some kind of UPS capacity. I am going to be adding some IP cameras so I need PoE as well.
So while all of this is still in its smaller scale, I’m trying to plan on how to recover if needed. I think before I get too much deeper into this I will at least spend some time learning rsnapshot. It seems simple enough to implement so I can at least start there.
Just for completeness’s sake I’ll post my backup strategy:
I use CrashPlan to back up to the cloud: I click to add the directory, and then it’s versioned, keeping 30 days’ worth of historical changes and the latest version forever.
This way all I have is my configuration of home-assistant/ha-bridge and the configuration of the docker stuff (which is on GitHub and versioned). Restoring is simple: even if all my hardware fails, I do a clean OS install, tell CrashPlan to restore from an existing system, choose the old one, click restore on the files, and I’m back.
I want to unmount my Hard Drive (which is connected to my Raspberry Pi) with a simple shell_command through Home Assistant:
umount /dev/sda1
It works perfectly if I do it through the terminal, and it also works perfectly if I change to the virtual environment in HASS with source /srv/hass/hass_venv/bin/activate and run the same command there.
It doesn’t work through a script (unmount), it doesn’t work with single or double quotes, or with eject instead of umount. There is no error in the HASS log; nothing happens.
Any help would be really appreciated, I don’t know what I am missing.
It probably is because, generally, umount requires root privileges. You can change that, for any given mount, by adding user to the options field in /etc/fstab. For example:
/dev/sda1 /your/mountpoint ext4 user,noauto 0 0
This assumes you’re using ext4 as the file system on /dev/sda1.
Oh, and the eject command will only be needed for items like optical drives where there’s a physical eject button.
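With the user option in place, a minimal shell_command sketch for HA would look like this (the entry name unmount_usb is illustrative, not anything from your config):

```yaml
# configuration.yaml (illustrative)
shell_command:
  unmount_usb: umount /dev/sda1
```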
Thank you for your reply @Tinkerer. I want to unmount an external HDD, which is in NTFS file system.
I assume that by user you mean the user hass (in my case, because I installed HA with the All-In-One Installer)?
If I run the df command, I see my HDD (yes, it’s almost full):
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 976728060 974562552 2165508 100% /media/pi/My Book
If I run sudo nano /etc/fstab my HDD doesn’t appear there. My fstab looks like this:
proc /proc proc defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults 0 2
/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
# a swapfile is not a swap partition, no line here
# use dphys-swapfile swap[on|off] for that
How can I add the user hass to the options field in /etc/fstab? Would adding an extra line, as you have written in your post, be correct?
Spaces in /etc/fstab are tricky; there, a space in a path has to be replaced by \040, which is the (octal) code for a space character.
The word user here is a literal, and says that any user can mount (and unmount) that partition. You’ll probably need to ensure that the user hass owns /media/pi/My Book too. I’m not entirely certain because I’ve nothing to experiment with right now.
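If you want to convince yourself of the decoding, printf’s %b format interprets the same octal escape:

```shell
# \040 is octal 40 = decimal 32 = the ASCII space character,
# so the escaped fstab path decodes back to the real mount point name
printf '%b\n' 'My\040Book'   # prints: My Book
```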
Ok, I edited the fstab. I renamed the drive to be without spaces, now it is MyBook only:
Fstab looks like this now:
proc /proc proc defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults 0 2
/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
/dev/sda1 /media/pi/MyBook ntfs noauto,user 0 0
# a swapfile is not a swap partition, no line here
# use dphys-swapfile swap[on|off] for that
Something interesting happened. In terminal a simple command umount /dev/sda1 no longer works, but with sudo works as intended.
In HASS still nothing happens, nothing in the log, etc. I added a command_line switch with command_on set to sudo umount /dev/sda1, and it is at least properly logged as failed.
Good day, looking for some guidance. I’m trying to get HA to run this bash script. I have tried all the possibilities listed under the script with no luck. Any guidance is much appreciated: https://hastebin.com/bikakexumo.bash
Does it work from the command line inside the venv (or however you’re running HA)?
What errors do you get, if any?
Are the entries in your sudoers file commented out, or is that a paste error?
You are not overriding the command ‘ln’ in your sudoers file.
You are not overriding the command ‘rm’ in your sudoers file.
You are not overriding the command ‘systemctl restart’ in your sudoers file.
You are overriding files, not commands in your sudoers file.
Your entire script seems to copy the pem files from one place to another and then create symbolic links elsewhere; why not just reference the correct locations in HA (and anywhere else) directly and save the bother?
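For what it’s worth, overriding commands (rather than files) in sudoers looks something like this; a sketch, assuming a homeassistant user and a shellinaboxd service name, which you should adjust to your system:

```
# /etc/sudoers.d/homeassistant (illustrative) -- edit with:
#   sudo visudo -f /etc/sudoers.d/homeassistant
# Grants the homeassistant user passwordless use of these exact commands:
homeassistant ALL=(ALL) NOPASSWD: /bin/ln, /bin/rm, /bin/systemctl restart shellinaboxd
```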
@anon43302295 thank you for your response.
I can run it from the command line as my own user.
I’ve tried running it from the HA dev UI, as I can’t use the HA user because the installation comes password protected and I don’t have the password.
The error I’m getting is Error running command: shellinabox_certificate, return code: 1
5:15 PM /srv/homeassistant/lib/python3.6/site-packages/homeassistant/components/shell_command.py (ERROR)
I commented it out for now because it’s not working and I didn’t want to leave it “half-baked”.
I want to try overriding “ln”, so I assumed it’s “link”, but I have not been able to find the directory location for “rm” and “cat”. systemctl is overridden in another sudoers line.
Maybe you can point me on how to correctly override the relevant commands.
That’s correct. Basically I have shellinabox installed (a web terminal). In order to see it within the HA UI I need to use SSL certificates because I’m on HTTPS. shellinabox uses one combined certificate including the pem & key, so my bash script creates a combined certificate in the Downloads folder and a symlink to the default shellinabox certificate.
I would like to fire an automation that runs this script when the certificates are updated after 90 days.
Any suggestions around this way are welcome.
Not sure if there’s a way to keep the dummy (combined cert) file continuously linked to the base Let’s Encrypt certs?
When you say “the installation comes password protected and I dont have the password.” what do you mean by this?
Generally speaking people run homeassistant on an unprivileged user that specifically doesn’t have a password for security, when we need to use privileged commands we use other methods, but I don’t want to go down that path unless that’s correct for your installation.
Thank you again. I’m running Ubuntu 17.10 on a NUC. I used this guide for the installation.
When I su homeassistant it asks for a password. I can sudo su and then su homeassistant, but when I try to run a command requiring higher privileges it asks for a password.
I don’t have the password. I presume it comes with the installation.
Thank you. I got this:
hass@Home:~$ sudo su -s /bin/bash homeassistant
homeassistant@Home:/home/hass$ cd /usr/local/bin
homeassistant@Home:/usr/local/bin$ ./shellinabox_certificate.sh
[sudo] password for homeassistant:
When I hit Ctrl-C to exit, I got this:
./shellinabox_certificate.sh: line 4: /home/hass/Downloads/certificate-juan11perez.duckdns.org.pem: Permission denied
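That “Permission denied” suggests the homeassistant user can’t write to /home/hass/Downloads. A hedged fix, assuming the combined certificate really should live there, would be to hand that directory to the homeassistant user:

```shell
# Give the homeassistant user ownership of the output directory
# (path taken from the error above; adjust if the cert should live elsewhere)
sudo chown -R homeassistant: /home/hass/Downloads
```

Alternatively, keep the output somewhere the homeassistant user already owns and point the symlink there instead.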