Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/sna$
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL:ALL) ALL
user hass = (root) NOPASSWD: /usr/bin/harpi3rsync.sh
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d
Here are the two script files:
$ ls -l /usr/bin/harpi3rsync.sh
-rwxr-xr-x 1 root root 176 Jan 9 17:49 /usr/bin/harpi3rsync.sh
Which contains:
#!/bin/bash
# Sync/Backup the Home Assistant Config directory
rsync -azh -e ssh --delete /home/hass/.homeassistant/ [email protected]:/mnt/usb_1/FileSync/AllFiles/HomeAssistant/
And
$ ls -l /usr/bin/harpi3ddimg.sh
-rwxr-xr-x 1 root root 195 Jan 9 06:37 /usr/bin/harpi3ddimg.sh
Which contains:
#!/bin/bash
# Make a bit-for-bit image of the RaspberryPi hosting Home Assistant
ssh [email protected] dd if=/dev/mmcblk0 of=/mnt/usb_1/FileSync/AllFiles/HARPi3_SD_Backup_$(date +%Y%m%d).img bs=1M
And yet they are not running when called from within HA. I don't know what I have done wrong, but I apparently suck like an industrial-strength Hoover vacuum cleaner at running shell scripts from within HA.
So, first thing: you can specify multiple commands to allow on a single line, or you can duplicate the line; both should work fine.
The first error: in the HA config file you need to call sudo /usr/bin/harpi3rsync.sh, not just /usr/bin/harpi3rsync.sh, or it runs as the hass user rather than root.
Secondly, let's take HA out of the equation; we know it works when you run an echo script, so that is fine, it just makes troubleshooting harder. Change to your hass user, run your script, and make sure it does what you want before bringing HA back into the picture.
As the hass user, do you get an error when you run sudo /usr/bin/harpi3rsync.sh?
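For illustration, the shell_command entry in configuration.yaml would then look something like this (a sketch only; the command name harpi3rsync is a made-up example):

shell_command:
  harpi3rsync: sudo /usr/bin/harpi3rsync.sh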
@justin8 I appreciate your help but more importantly your patience. Thank you. This has just been one heck of a frustrating issue all around.
Now, that said, I think I found the issue, though I'm not sure what to do about it. I changed to the hass user:
$ sudo su -s /bin/bash hass
It then asks for a password:
hass@HARPi3:/usr/bin$ sudo /usr/bin/harpi3rsync.sh
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for hass:
Which I do not know. So I tried to run it directly, and it asks me for the password for the other Pi, which I do know, and then I got this:
hass@HARPi3:/usr/bin$ ./harpi3rsync.sh
[email protected]'s password:
rsync: send_files failed to open "/home/hass/.homeassistant/shell_commands/harpi3ddimg.sh": Permission denied (13)
rsync: send_files failed to open "/home/hass/.homeassistant/shell_commands/harpi3rsync.sh": Permission denied (13)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]
So, what is the lesson here? I think the lesson is this thing hates me! LOL.
Seriously, though, I think the better course of action may be to just freakin' stick a USB stick in the Pi running HA and call it good, well, I hope anyway. I can't wait to see what challenges stem from that. One would hope it would be an easier thing, but I am learning not to expect that.
To that end, let me try that and see if I can get THAT to work, and then if you think there is something else I can do to get this method working, @justin8, I will continue to troubleshoot it. However, I have taken enough of people's time here on this issue, and in many ways I would rather just get this done so that I have the experience and lessons from it, and can perhaps, as you do, try helping others.
So if it's asking you for a password when using sudo, there is something wrong with the sudoers config line.
The reason it had errors reading the files is that the second time you ran as the hass user, which I assume doesn't have permission to access some files it doesn't own. It asked for the SSH password because the SSH key is in root's account, not in the hass account.
I'm at work and don't have time right now to look up the correct sudoers syntax, but that is where your current issue is.
Okay, well, I can't express just how grateful I am for your help; it is very much appreciated. The only reason I wanted to send it over to the other Pi, which serves as a file server, is that it has a 120GB SSD attached, so it has room for the SD card images.
And you are correct about the SSH keys. I was not really thinking of running shell commands in HA when I set that up. Perhaps a bit short-sighted of me.
Quick question: is there any reason you have/want to run this from HA? Given that you're trying to run them as root, simply calling them from cron (scheduled commands) might be easier. Alternatively, the way I do it is to run rsnapshot on the remote host and pull the backups.
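For example, a root crontab entry (added via sudo crontab -e) that ran the sync script nightly at 03:00 could look like this; the schedule is just an illustration:

0 3 * * * /usr/bin/harpi3rsync.sh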
As for your sudoers entry, the problem is that you use user hass=(root) when you should just have hass ALL=(root). The first field is the username, the second is the hostname the entry applies to.
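Spelled out, the corrected line would be something like this (a sketch that keeps your NOPASSWD tag and, per the earlier note, allows both scripts on one line):

hass ALL=(root) NOPASSWD: /usr/bin/harpi3rsync.sh, /usr/bin/harpi3ddimg.sh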
I can't say that there is a particular, critical reason I want to do so, other than the fact that the SD card images are going to be large, so I wanted it as an on-demand thing so that I could make sure there was enough space to do the backup.
Additionally, I did not want the configuration directory on a scheduled backup, so that I could run that on demand as well. The thinking being that if I goofed up my configuration or an automation somehow, or just could not figure out what I did wrong and wanted to go back to a known good configuration, then I could restore from the backup. However, if it's on a schedule and it happens to run a backup right when I need the last known good configuration, then I am out of luck because the backup just got updated.
It's just convenient to have it in HA because if I know I am getting ready to make a lot of changes, or want to try a few things out, experiment, etc., I can just click the link, back up the configuration and the image, and then play to my heart's content. Yes, you can do that from the command line or by running a shell script, but I saw others doing very similar things within HA and it seemed a good way to proceed.
For versioning, look at rsnapshot. Depending on how you configure the retention, you can then easily recover the working configuration from last Tuesday, or August, or… I'd never recommend a backup solution that overwrote the previous backup - if that fails during a backup you're left with nothing, after all.
With that said, as justin8 said, put both lines into a single shell script and call that with sudo. You'll likely also need to explicitly specify the SSH keys.
I'd also be tempted to suggest that the second line (ssh root@...) become a script on the remote host, since that's where it's running. Then you can run the rsync as the hass user, which then runs sudo on the remote host for the dd (there's a sketch of this after the key setup below):
On HA, create a new SSH keypair with no passphrase:
ssh-keygen -t ed25519
Copy the contents of ~hass/.ssh/id_ed25519.pub to ~hass/.ssh/authorized_keys on 10.0.0.20.
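If ssh-copy-id is installed, it can do that copy for you; run it as the hass user so the key lands in the right place (the key path below is the ed25519 default from the step above):

ssh-copy-id -i ~/.ssh/id_ed25519.pub [email protected]

And here is the remote-script idea sketched out, assuming a hypothetical script /usr/local/bin/harpi3ddimg.sh on 10.0.0.20 containing the dd command, plus a matching NOPASSWD sudoers entry for hass on that host:

ssh [email protected] sudo /usr/local/bin/harpi3ddimg.sh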
@Tinkerer thank you and @justin8 for all of the excellent information that you have provided. Simply amazing. Just when I think I am comfortable with most tasks in Linux, I come across something like this that reminds me that my knowledge is just a molecule in an ocean of elements.
That said, I take it then that I am perhaps going about this either the long way or the wrong way. Oh, and just to clarify, I would not have overwritten the dd images, as they were dated, but I can see your point with the rsync'ing of the configuration files.
I have never used any kind of versioning before, other than some basic date separation, so I am in new territory here. My thinking is that my drive to make dd images is a bit much, and I would probably be better off setting up a new RPi SD card complete with HA installed, keeping it in a safe place, and then, if needed, just copying the configuration directory over. Perhaps manually make new images of the existing system if any major changes are made to the software, or something like that. Is that perhaps a better way?
Then use the rsnapshot to keep the backups of the configuration directory since that is a much smaller footprint in comparison and easier to maintain. I just want to get a solid backup strategy in place so that I can restore this thing pretty quickly if needed.
EDIT:
With respect to the SSH keys, do those need to be created as the hass user?
There's 2 things I've learned in all the time I've been using computers (which is more than a couple of decades):
There's always more to learn
However many ways you think there are of doing anything, like backups, somebody else will eventually find another option that's equally valid
There are 2 things to do for backing up (and recovering) your HA system, IMO:
Keep a bootable backup of the HA install to enable a quick recovery
Back up the configuration file(s) regularly so you can recover from mistakes, even if it takes a week, or longer, to notice them
The way I'm tackling it is:
For the first, I've started with rpi-clone. It isn't perfect for my purposes, but I can easily make the changes I want so that it operates without intervention. It uses dd to create an initial image, then future runs use rsync to copy only the changes. I'll use that periodically so I've got an 'instant recovery' option, probably weekly while I'm evolving my configuration rapidly, dropping off to monthly or so eventually. I'm using a USB micro SD adapter for this purpose, with a second SD card. I will add a second pair of these later, so that the backups can alternate. That way, if things become corrupted during a backup, I don't lose it all.
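For reference, a typical rpi-clone invocation from the Pi looks like this, assuming the second SD card in the USB adapter shows up as /dev/sda (that device name is an assumption about the setup):

sudo rpi-clone sda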
I already use rsnapshot on my network, with another Pi 3 acting as my backup server. I've simply added my HA system to the configuration there. I can apply those backups to any freshly built Raspbian Lite install to recover with a little effort. This ensures that I'm backing up my configuration files every 3 hours, and I'll build up a rolling history of my changes that expires the way I've configured. Primarily I expect to use this to recover from mistakes editing the configuration files.
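As a rough sketch of what that looks like in /etc/rsnapshot.conf on the backup server (the hostname ha-pi and the retention count are placeholders, and rsnapshot insists on tabs, not spaces, between fields):

retain	hourly	8
backup	hass@ha-pi:/home/hass/.homeassistant/	ha-pi/

A cron entry such as 0 */3 * * * /usr/bin/rsnapshot hourly then drives the every-3-hours run.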
As I switched from SQLite to MySQL (to see if that would help performance - it has, a little), I'm using Percona XtraBackup to back up my MySQL database. That drops the backups in a location that's backed up by rsnapshot, and by my cloud backup scripts (below).
Since I'm using the B2 cloud backup service, I've installed rclone and configured it to back up any changes to my HA install hourly. That also supports versions, so I'll be able to recover my configuration from any hourly backup - currently I've not set any expiry, so they'll be around 'forever'. This I'm using so that if the worst happens and my computers are lost (fire/theft/whatever) I can still recover everything that matters - adding my HA config is a trivial overhead to what I'm already backing up.
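For illustration, an hourly cron entry driving rclone might look like the following; the remote name b2 and the bucket path are placeholders for whatever was set up with rclone config:

0 * * * * /usr/bin/rclone sync /home/hass/.homeassistant b2:my-bucket/homeassistant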
Yes, SSH keys should always be created as the user that'll be using them. You don't strictly need to, but it ensures that the files are in the right location on your SSH client (which in this case is your HA server).
Thank you again, sir; I appreciate the time you are investing and the patience you have demonstrated. I wish this could be stickied, as it is chock-full of good information.
You have given me a ton to work with. My hope is to at some point have a rack-mountable server, hopefully with the space to mount my Cisco gear and some kind of UPS capacity. I am going to be adding some IP cameras, so I need PoE as well.
So while all of this is still at a smaller scale, I'm trying to plan how to recover if needed. I think before I get too much deeper into this I will at least spend some time learning rsnapshot. It seems simple enough to implement, so I can at least start there.
Just for completeness's sake, I'll post my backup strategy:
I use CrashPlan to back up to the cloud; I click to add the directory, and then it's versioned, keeping 30 days' worth of historical changes and the latest version forever.
This way all I have is my configuration of home-assistant/ha-bridge and the configuration of the Docker stuff (which is on GitHub and versioned). Restoring is simple: even if all my stuff fails, I do a clean OS install, tell CrashPlan to restore from an existing system, choose the old one, click restore on the files, and I'm back.
I want to unmount my hard drive (which is connected to my Raspberry Pi) with a simple shell_command through Home Assistant:
umount /dev/sda1
It works perfectly if I do it through the terminal, and it also works perfectly if I change to the virtual environment in HASS with source /srv/hass/hass_venv/bin/activate and run the same command there.
It doesn't work through a script (unmount), it doesn't work in single or double quotes, with eject or umount. There is no error in the HASS log; nothing happens.
Any help would be really appreciated; I don't know what I am missing.
It probably is because, generally, umount requires root privileges. You can change that, for any given mount, by adding user to the options field in /etc/fstab. For example:
/dev/sda1 /your/mountpoint ext4 user,noauto 0 0
This assumes you're using ext4 as the file system on /dev/sda1.
Oh, and the eject command will only be needed for items like optical drives where there's a physical eject button.
Thank you for your reply @Tinkerer. I want to unmount an external HDD, which is in NTFS file system.
I assume that by user you mean the user hass (in my case, because I installed HA with the All-In-One Installer)?
If I run the df command, I see my HDD (yes, it's almost full):
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 976728060 974562552 2165508 100% /media/pi/My Book
If I run sudo nano /etc/fstab, my HDD doesn't appear there. My fstab looks like this:
proc /proc proc defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults 0 2
/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
# a swapfile is not a swap partition, no line here
# use dphys-swapfile swap[on|off] for that
How can I add user hass to the options field in /etc/fstab? Would adding an extra line, as you wrote in your post, be correct?
Spaces in /etc/fstab are tricky; here the space has to be replaced by \040, which is the (octal) code for a space character.
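For example, an fstab entry for that mount point would look something like this (a sketch, using the noauto,user options mentioned earlier):

/dev/sda1 /media/pi/My\040Book ntfs noauto,user 0 0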
The word user here is a literal, and says that any user can mount (and unmount) that partition. You'll probably need to ensure that the user hass owns /media/pi/My Book too. I'm not entirely certain, because I've nothing to experiment with right now.
OK, I edited the fstab. I renamed the drive to be without spaces; now it is MyBook only.
Fstab looks like this now:
proc /proc proc defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults 0 2
/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
/dev/sda1 /media/pi/MyBook ntfs noauto,user 0 0
# a swapfile is not a swap partition, no line here
# use dphys-swapfile swap[on|off] for that
Something interesting happened. In the terminal, the simple command umount /dev/sda1 no longer works, but with sudo it works as intended.
In HASS still nothing happens, nothing in the log, etc. I added a command_line switch with command_on as sudo umount /dev/sda1, and it is at least properly logged as failed.
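For reference, such a command_line switch entry in configuration.yaml would look roughly like this (a sketch; the switch name usb_drive is a placeholder):

switch:
  - platform: command_line
    switches:
      usb_drive:
        command_on: 'sudo umount /dev/sda1'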