Running Shell Commands

Okay, this is just getting downright ridiculous. Here is my configuration file:

shell_command:
  backup_config: /usr/bin/harpi3rsync.sh
  backup_image: /usr/bin/harpi3ddimg.sh

script:
  backup_config:
    alias: Back Up Configuration
    sequence:
      - service: shell_command.backup_config
  backup_image:
    alias: Image SD Card
    sequence:
      - service: shell_command.backup_image

Here is my sudoers file:

Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/sna$

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root    ALL=(ALL:ALL) ALL
user hass = (root) NOPASSWD: /usr/bin/harpi3rsync.sh

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

# Allow members of group sudo to execute any command
%sudo   ALL=(ALL:ALL) ALL

# See sudoers(5) for more information on "#include" directives:

#includedir /etc/sudoers.d

Here are the two script files:

$ ls -l /usr/bin/harpi3rsync.sh
-rwxr-xr-x 1 root root 176 Jan  9 17:49 /usr/bin/harpi3rsync.sh

Which contains:

#!/bin/bash

# Sync/Backup the Home Assistant Config directory
rsync -azh -e ssh --delete /home/hass/.homeassistant/ [email protected]:/mnt/usb_1/FileSync/AllFiles/HomeAssistant/

And

$ ls -l /usr/bin/harpi3ddimg.sh
-rwxr-xr-x 1 root root 195 Jan  9 06:37 /usr/bin/harpi3ddimg.sh

Which contains:

#!/bin/bash

# Make a bit-for-bit image of the RaspberryPi hosting Home Assistant
ssh [email protected] dd if=/dev/mmcblk0 of=/mnt/usb_1/FileSync/AllFiles/HARPi3_SD_Backup_$(date +%Y%m%d).img bs=1M

And yet they are not running when called from within HA. I don't know what I have done wrong, but I apparently suck like an industrial-strength Hoover vacuum cleaner at running shell scripts from within HA.

So, first thing: you can specify multiple commands to allow on a single sudoers line, or you can make a duplicate entry; both should work fine.
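For example, the single-line form might look like this (a sketch using the script paths from the post above; the comma separates the allowed commands):

```
hass ALL=(root) NOPASSWD: /usr/bin/harpi3rsync.sh, /usr/bin/harpi3ddimg.sh
```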

The first error: in the HA config file you need to call sudo /usr/bin/harpi3rsync.sh, not just /usr/bin/harpi3rsync.sh, or it runs as the hass user, not root.
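In other words, the shell_command block would become something like this (a sketch based on the configuration posted above; it only works non-interactively once the sudoers entry allows the hass user to run the scripts without a password):

```yaml
shell_command:
  backup_config: sudo /usr/bin/harpi3rsync.sh
  backup_image: sudo /usr/bin/harpi3ddimg.sh
```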

Secondly, let's take HA out of the equation; we know it works when you run an echo script, so that part is fine, it just makes troubleshooting harder. Change to your hass user and run your script; make sure it does what you want before bringing HA back into the picture.

As the hass user, do you get an error when you run sudo /usr/bin/harpi3rsync.sh?


@justin8 I appreciate your help but, more importantly, your patience. Thank you. This has just been one heck of a frustrating issue all around.

Now, that said, I think I found the issue, though not sure what to do about it. I changed to the hass user:

$ sudo su -s /bin/bash hass

It then is asking for a password:

hass@HARPi3:/usr/bin$ sudo /usr/bin/harpi3rsync.sh

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for hass:

Which I do not know. So I tried to run it directly and it asks me for the password for the other Pi, which I do know, and then I got this:

hass@HARPi3:/usr/bin$ ./harpi3rsync.sh
[email protected]'s password:
rsync: send_files failed to open "/home/hass/.homeassistant/shell_commands/harpi3ddimg.sh": Permission denied (13)
rsync: send_files failed to open "/home/hass/.homeassistant/shell_commands/harpi3rsync.sh": Permission denied (13)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]

So, what is the lesson here? I think the lesson is this thing hates me! LOL.
Seriously, though, I think the better course of action may be to just freakin' stick a USB stick in the Pi running HA and call it good, well, I hope anyway. I can't wait to see what challenges stem from that. One would hope it would be an easier thing, but I am learning not to expect that.

To that end, let me try that and see if I can get THAT to work, and then if you think there is something else, @justin8, that I can do to get this method working, I will continue to troubleshoot it. However, I have taken enough time from people on here on this issue and in many ways would rather just get this done so I have the experience and lesson from it, so that perhaps I can do as you and try helping others.

So if it's asking you for a password when using sudo, there is something wrong with the sudoers config line.

The reason it had errors reading the files is that the second time you ran as the hass user, which I assume doesn't have permission to access some files it doesn't own; and it asked for the ssh password because the ssh key is in the root account, not the hass account.

I'm at work and don't have time right now to look up the correct sudoers syntax, but that is where your current issue is.

Okay, well, I can't express just how grateful I am for your help; it is very much appreciated. The only reason I wanted to send it over to the other Pi that serves as a file server is that it has a 120 GB SSD attached, so it has room for the SD card images.

And you are correct about the SSH keys. I was not really thinking of running shell commands in HA when I set that up. Perhaps a bit short-sighted of me.

Quick question: is there any reason you have/want to run this from HA? Given that you're trying to run them as root, simply calling them from cron (scheduled commands) might be easier. Alternatively, the way I do it is to run rsnapshot on the remote host and pull the backups.

As for your sudoers entry, the problem is that you use user hass=(root) when you should just have hass ALL=(root). The first field is the username, the second is the hostname the entry applies to.
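So the corrected version of the entry quoted above would be (a sketch; ALL in the second field means the entry applies on any host this sudoers file is deployed to):

```
hass ALL=(root) NOPASSWD: /usr/bin/harpi3rsync.sh
```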

I can't say that there is a particular, critical reason I want to do so, other than the fact the SD card images are going to be large, and so I wanted it as an on-demand thing so I could make sure that there was enough space to do the backup.

Additionally, I did not want the configuration directory on a scheduled backup so that I could run that on demand as well. The thinking being that if I goofed up my configuration somehow or an automation, or just could not figure out what I did wrong and wanted to go back to a known good configuration, then I could restore from the backup. However, if it's on a schedule and it happens to run a backup and I need the last known good configuration, then I am out of luck because it just got updated.

It's just convenient to have it in HA because if I know I am getting ready to make a lot of changes or want to try a few things out, experiment, etc., I can just click the link, back up the configuration and the image, and then play to my heart's content. Yes, you can do that from the command line or by running a shell script, but I saw others doing very similar things within HA and it seemed a good way to proceed.

For versioning, look at rsnapshot. Depending on how you configure the retention, you can then easily recover the working configuration from last Tuesday, or August, or… I'd never recommend a backup solution that overwrote the previous backup; if that fails during a backup you're left with nothing, after all :wink:
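For illustration only, the retention section of an rsnapshot.conf might be sketched like this (the counts are my own example; note that rsnapshot.conf requires tabs, not spaces, between fields, and older versions use the keyword interval instead of retain):

```
retain	daily	7
retain	weekly	4
retain	monthly	6
```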

With that said, as justin8 said, put both lines into a single shell script and call that with sudo. You'll likely also need to explicitly specify the SSH keys.

I'd be tempted to suggest, too, that the second line (ssh root@...) is better as a script on the remote host, since that's where it's running. Then you can run the rsync as the hass user, which then runs sudo on the remote host for the dd:

On HA, create a new SSH keypair with no passphrase:

ssh-keygen -t ed25519

Copy the contents of ~hass/.ssh/id_ed25519.pub to ~hass/.ssh/authorized_keys on 10.0.0.20.
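If OpenSSH's ssh-copy-id helper is available, the copy step can be done in one command instead of editing authorized_keys by hand (a sketch, run as the hass user; the host and account are taken from the thread):

```shell
# Generate a passphrase-less key, then let ssh-copy-id append the public
# key to ~/.ssh/authorized_keys on the remote host (it prompts for the
# remote account's password one last time).
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub [email protected]
```

Note that ssh-keygen will prompt before overwriting an existing key file at that path.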

#!/bin/bash
rsync -azh -e ssh --delete /home/hass/.homeassistant/ [email protected]:/mnt/usb_1/FileSync/AllFiles/HomeAssistant/
ssh [email protected] /foo/bar/raspi-clone

On 10.0.0.20 make sure that /mnt/usb_1/FileSync/AllFiles/HomeAssistant/ is owned by the hass account then create your script /foo/bar/raspi-clone:

#!/bin/bash
sudo /bin/dd if=/dev/mmcblk0 of=/mnt/usb_1/FileSync/AllFiles/HARPi3_SD_Backup_$(date +%Y%m%d).img bs=1M

Entry for sudoers on 10.0.0.20:

hass ALL=(root) NOPASSWD: /bin/dd if=/dev/mmcblk0 of=/mnt/usb_1/FileSync/AllFiles/HARPi3_SD_Backup_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].img bs=1M

@Tinkerer thank you and @justin8 for all of the excellent information that you have provided. Simply amazing. Just when I think I am comfortable with most tasks in Linux, I come across something like this that reminds me that my knowledge is just a molecule in an ocean of elements.

That said, I take it, then, that I am going about this either the long way or the wrong way. Oh, and just to clarify, I would not have overwritten the dd images as they were dated, but I can see your point with the rsync'ing of the configuration files.

I have never used any kind of versioning before, well other than just some kind of basic date separation, so I am in new territory here. My thinking is that my drive to make dd images is a bit much and I would probably be better off just setting up a new RPi SD card complete with HA installed and keep it in a safe place and then if needed just copy the configuration directory over. Perhaps manually make new images off the existing run if any major changes are made in the software or something like that. Is that perhaps a better way?

Then use rsnapshot to keep backups of the configuration directory, since that is a much smaller footprint in comparison and easier to maintain. I just want to get a solid backup strategy in place so that I can restore this thing pretty quickly if needed.

EDIT:
With respect to the SSH keys, do those need to be created as the hass user?

There are two things I've learned in all the time I've been using computers (which is more than a couple of decades):

  1. There's always more to learn :open_mouth:
  2. However many ways you think there are of doing anything, like backups, somebody else will eventually find another option that's equally valid :wink:

There are 2 things to do for backing up (and recovering) your HA system, IMO:

  1. Keep a bootable backup of the HA install to enable a quick recovery
  2. Back up the configuration file(s) regularly so you can recover from mistakes, even if it takes a week, or longer, to notice them

The way I'm tackling it is:

  • For the first, I've started with rpi-clone. It isn't perfect for my purposes, but I can easily make the changes I want so that it operates without intervention. It uses dd to create an initial image, then future runs use rsync to copy only the changes. I'll use that periodically so I've got an "instant recovery" option, probably weekly while I'm evolving my configuration rapidly, dropping off to monthly or so eventually. I'm using a USB micro SD adapter for this purpose, with a second SD card. I will add a second pair of these later, so that the backups can alternate. That way if things become corrupted during a backup I don't lose it all.
  • I already use rsnapshot on my network, with another Pi 3 acting as my backup server. I've simply added my HA system to the configuration there. I can apply those backups to any freshly built Raspbian Lite install to recover with a little effort. This ensures that I'm backing up my configuration file every 3 hours, and I'll build up a rolling history of my changes that expires the way I've configured. Primarily I expect to use this to recover from mistakes editing the configuration files.
  • As I switched from SQLite to MySQL (to see if that would help performance; it has, a little), I'm using Percona XtraBackup to back up my MySQL database. That drops the backups in a location that's backed up by rsnapshot, and by my cloud backup scripts (below).
  • Since I'm using the B2 cloud backup service, I've installed rclone and configured it to back up any changes to my HA install hourly. That also supports versions, so I'll be able to recover my configuration from any hourly backup; currently I've not set any expiry, so they'll be around "forever". I'm using this so that if the worst happens and my computers are lost (fire/theft/whatever) I can still recover everything that matters; adding my HA config is a trivial overhead to what I'm already backing up.

Yes, SSH keys should always be created as the user that'll be using them. You don't strictly need to, but it ensures that the files are in the right location on your SSH client (which in this case is your HA server).


Thank you, sir, again, I appreciate the time you are investing and the patience you have demonstrated. I wish this could be stickied as it is chock full of good information.

You have given me a ton to work with. My hope is to at some point have a rack mountable server. Hopefully with the space to mount my Cisco gear and some kind of UPS capacity. I am going to be adding some IP cameras so I need PoE as well.

So while all of this is still in its smaller scale, I'm trying to plan on how to recover if needed. I think before I get too much deeper into this I will at least spend some time learning rsnapshot. It seems simple enough to implement so I can at least start there.

Just for completeness's sake I'll post my backup strategy:

I use CrashPlan to back up to the cloud; I click to add the directory and then it's versioned, keeping 30 days' worth of historical changes and the latest version forever.

For restoring the application itself, it's all running in Docker using Compose: https://github.com/justin8/puppet/blob/master/modules/hass/files/hass/docker-compose.yml

This way all I have is my configuration of home-assistant/ha-bridge and the configuration of the Docker stuff (which is on GitHub and versioned). Restoring is simple: even if all my stuff fails, a clean OS install, tell CrashPlan to restore from an existing system and choose the old one, click restore on the files, and I'm back.

I want to unmount my Hard Drive (which is connected to my Raspberry Pi) with a simple shell_command through Home Assistant:

umount /dev/sda1

It works perfectly if I do it through the terminal, and it also works perfectly if I change to the virtual environment in HASS with source /srv/hass/hass_venv/bin/activate and run the same command there.

Here is my shell_command.yaml :

  unmount: '/home/hass/.homeassistant/shell_commands/unmount.sh'
  unmount1: "umount /dev/sda1"
  unmount2: "eject /dev/sda1"

unmount.sh file (only one line):

umount /dev/sda1

It doesn't work through the script (unmount), and it doesn't work with single or double quotes, with eject or umount. There is no error in the HASS log; nothing happens.

Any help would be really appreciated; I don't know what I am missing.

It probably is because, generally, umount requires root privileges. You can change that, for any given mount, by adding user to the options field in /etc/fstab. For example:

/dev/sda1        /your/mountpoint        ext4        user,noauto        0 0

This assumes you're using ext4 as the file system on /dev/sda1.

Oh, and the eject command will only be needed for items like optical drives where there's a physical eject button.

Thank you for your reply @Tinkerer. I want to unmount an external HDD, which uses the NTFS file system.

I assume that by user you mean user hass (in my case, because I installed HA with the All-In-One Installer)?

If I run the df command, I see my HDD (yes, it's almost full :slight_smile: ):

Filesystem     1K-blocks      Used Available Use% Mounted on

/dev/sda1      976728060 974562552   2165508 100% /media/pi/My Book

If I run sudo nano /etc/fstab, my HDD doesn't appear there. My fstab looks like this:

proc            /proc           proc    defaults          0       0
/dev/mmcblk0p1  /boot           vfat    defaults          0       2
/dev/mmcblk0p2  /               ext4    defaults,noatime  0       1
# a swapfile is not a swap partition, no line here
#   use  dphys-swapfile swap[on|off]  for that

How can I add user hass to the options field in /etc/fstab? Would adding an extra line, as you have written in your post, be correct?

Your entry would then literally be:

/dev/sda1    /media/pi/My\040Book    ntfs    noauto,user    0 0

Spaces in /etc/fstab are tricky; here the space has been replaced by \040, which is the octal code for a space character.
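As a quick sanity check of the escape, printf's %b format also interprets \0ddd octal escapes, so you can preview the path mount will see:

```shell
# \040 is octal for ASCII 32, i.e. a space, so this prints the real path.
printf '%b\n' '/media/pi/My\040Book'
```

which prints /media/pi/My Book.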

The word user here is a literal, and says that any user can mount (and unmount) that partition. You'll probably need to ensure that the user hass owns /media/pi/My Book too. I'm not entirely certain because I've nothing to experiment with right now.

Thank you very much. I'll try it. Maybe I'll just rename my HDD to get rid of the spaces; one less thing to go wrong.

How do I ensure that user hass owns /media/pi/My Book?

Type the following at a command line:

sudo chown hass "/media/pi/My Book"

But ensure that it isn't mounted first, so that you're changing the mount point owner.

Ok, I edited the fstab. I renamed the drive to remove the spaces; now it is MyBook only:

Fstab looks like this now:

proc            /proc           proc    defaults          0       0
/dev/mmcblk0p1  /boot           vfat    defaults          0       2
/dev/mmcblk0p2  /               ext4    defaults,noatime  0       1
/dev/sda1       /media/pi/MyBook    ntfs    noauto,user    0      0
# a swapfile is not a swap partition, no line here
#   use  dphys-swapfile swap[on|off]  for that

Something interesting happened. In the terminal, a simple umount /dev/sda1 no longer works, but with sudo it works as intended.

pi@Home-Assistant:~ $ umount /dev/sda1
umount: /media/pi/MyBook: umount failed: Operation not permitted
pi@Home-Assistant:~ $ sudo umount /dev/sda1

In HASS still nothing happens, nothing in the log, etc. I added a command_line switch with command_on set to sudo umount /dev/sda1 and it is at least properly logged as failed:

17-01-13 20:04:45 homeassistant.components.switch.command_line: Command failed: sudo umount /dev/sda1
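For reference, a command_line switch along those lines might be sketched like this (the switch name unmount_hdd and the command_off mount command are my own illustration; for sudo to work non-interactively, the hass user would still need a passwordless sudoers entry such as the one shown in the comment):

```yaml
# Assumes a sudoers entry like:
#   hass ALL=(root) NOPASSWD: /bin/umount /dev/sda1, /bin/mount /dev/sda1
switch:
  platform: command_line
  switches:
    unmount_hdd:
      command_on: sudo /bin/umount /dev/sda1
      command_off: sudo /bin/mount /dev/sda1
```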

Should I now try sudo chown?

I changed to the hass user in the terminal (stupid me, I didn't try it before) and permissions seem to be the problem:

hass@Home-Assistant:/home/pi$ umount /dev/sda1
umount: /media/pi/MyBook: umount failed: Operation not permitted

Updated:
Ok, I tried sudo chown hass "/media/pi/MyBook" and after trying to unmount as hass I still get permission problems:

hass@Home-Assistant:/home/pi$ umount /dev/sda1
umount: /media/pi/MyBook: umount failed: Operation not permitted

…Should I add hass to sudoers?