Shell_command: Permissions Help

I just moved my HA from Docker on Ubuntu to a KVM image. I have a shell_command that runs a Python script I wrote to grab an image from a camera and save it to a directory. If I run the script manually (in the SSH Terminal add-on), everything works great. If I run it through HA under Developer Tools > Services, it fails to write the file. The automation fails too, and I’m assuming it’s for the same reason. I have the following in my HA /config directory:

The image should save to the ha_www_images directory. If I change my script to write just to /config/, it works just fine, but I can’t have the file saved there. The ha_www_images directory is an NFS share from the Ubuntu host.

My shell_command script looks like this (I commented out a ton trying to figure out why it was failing):

#import argparse
import urllib.request
import uuid 
import requests
#from PIL import Image

# parser = argparse.ArgumentParser(description='This scrtipt is used to get a snapshot image and save it with a unique name.  It then updates home assistant with the file name to use later.')
# parser.add_argument("--img_url", default="https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png", help="http link for image")
# parser.add_argument("--img_ext", default="jpeg", help="image type, ie: jpeg, png, etc")
# parser.add_argument("--save_loc", default="/config", help="where to save the image on the hassio instance")
# parser.add_argument("--ha_url", default="", help="url for home assistant")
# parser.add_argument("--entity_id", default="", help="entity_id to update with new file name")
# parser.add_argument("--ha_token", default="", help="long life token from Profile > Long-Lived Access Tokens")
# args = parser.parse_args()

def main():
        file_name = str(uuid.uuid4()) + '.jpeg'
#        print(file_name)
        urllib.request.urlretrieve('http://192.168.2.31/snap.jpeg', '/config/ha_www_images/' + file_name)
#        img = Image.open(args.save_loc + file_name)
#        new_img = img.crop((600,0,1200,800))
#        new_img.save(args.save_loc + file_name)
        ha_url = 'http://192.168.2.102:8124/api/states/sensor.front_camera_snap'
        ha_headers = {
                'Authorization': 'Bearer <removed token>'
#                'Authorization': 'Bearer ' + args.ha_token
                , 'content-type': 'application/json'
        }
        ha_payload = '{"state":"'+file_name+'"}'
        x = requests.request("POST",ha_url,headers=ha_headers, data=ha_payload)
if __name__ == '__main__':
    main()

The fact that it works if I change the urlretrieve line to '/config/' instead of '/config/ha_www_images/' makes me believe it’s a permissions issue, but as you can see above, this folder is 777’d. Is there anything you see wrong with this script?
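
In the meantime, I’m going to wrap the write in a try/except so I can see the exact error from the shell_command side. Something like this (just a sketch using the same URL and path as above, not tested from HA yet):

import os
import urllib.request
import uuid

def main():
        file_name = str(uuid.uuid4()) + '.jpeg'
        save_dir = '/config/ha_www_images/'
        # log what the shell_command environment actually sees for this directory
        print('uid:', os.getuid(), 'writable:', os.access(save_dir, os.W_OK))
        try:
                urllib.request.urlretrieve('http://192.168.2.31/snap.jpeg', save_dir + file_name)
        except OSError as err:
                # a PermissionError or a read-only mount will land here
                print('write failed:', err)

if __name__ == '__main__':
    main()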

maybe this can help you…

https://community.home-assistant.io/t/capture-camera-snapshot-with-timestamp-using-python-script/31686

This works for me with the Python script at ‘/config/python_scripts/snapshot_front.py’.
The images save to /config/www/wyzecam_frontroom.

I call it in an automation like:

  action:
  - service: python_script.snapshot_front  

and I had to have

python_script:

in my configuration.yaml file
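
For reference, the guts of a script like that is basically just a camera.snapshot service call. Roughly this shape (not my exact file, and your camera entity and folder will differ; the thread linked above also adds a timestamp to the file name):

# rough shape of /config/python_scripts/snapshot_front.py
entity_id = data.get('entity_id', 'camera.front_room')  # assumed entity name, pass yours in the service data
file_name = '/config/www/wyzecam_frontroom/front_latest.jpg'
hass.services.call('camera', 'snapshot', {
    'entity_id': entity_id,
    'filename': file_name,
}, False)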

Now, help a n00b out here a bit…
You went from an Ubuntu Linux install. I’ve installed Ubuntu a handful of times, but always for one reason or another and never as something I’ve used long term. God forbid I ask, but can you shed some light, in layman’s terms, on what a ‘Docker’ install is? I’m assuming it’s something like installing a program in Windows? And ‘KVM’ only brings to mind the KVM switch I used back in the day to share a keyboard between two computers, so a KVM-based install means nothing to me. Can I assume it’s just a standard Home Assistant image that has its own built-in everything else? I run a standard Home Assistant image, having come into this when hassio existed and then switched to full Home Assistant images. I know this is all stuff I can do (and have done) literally endless amounts of googling on, but between the over-complicated explanations and the chill ones I’m still somewhat lost. Any pros or cons to either that you’ve seen? What was your reasoning for going from a ‘Docker container’ to a full-fledged KVM image?

Thanks! Hope whatever I posted above helps someone else out there too.

If I run this script manually (in the SSH Terminal add-on), everything works great. If I run it through HA under Developer Tools > Services, it fails

I just had something similar happen trying to execute a shell_command to reboot a Pi. It would execute from the command line but nowhere else. I was getting a 255 error in the HA server logs when it ran outside the command line, though.

I had to add keys to /config/ssh and then specify that location when calling the shell command. IIRC the terminal uses /root/.ssh, but HA doesn’t. I also needed the community SSH add-on rather than the official one.

My shell_command looks like this:

shell_command:
  restart14: ssh -i /config/ssh/id_rsa [email protected] sudo shutdown -r

Thread I got my answer from

My actual shell_command: config looks like this (I created the _tst one to remove all the arguments and just hard-code them in my Python script):

shell_command:
  get_image: python3 /config/scripts/get_img.py --img_url="{{ img_url }}" --img_ext="{{ img_ext }}" --save_loc="{{ save_loc }}" --ha_url="{{ ha_url }}" --entity_id="{{ entity_id }}" --ha_token="{{ ha_token }}"
  get_image_tst: python3 /config/scripts/get_img_tst.py

I wasn’t seeing anything in my error logs in HA, but I will turn up logging and try this process again to see if there’s anything that stands out to me.

It may be that this doesn’t work as a shell_command but would as a python_script. I can make that change and test, but the script does work if I just change the path it saves the image to, so I suspect that switch won’t resolve the issue.

To your question on Docker, etc., I can attempt to answer it, but I’m by no means an expert. Docker basically lets you virtualize individual pieces of software. KVM is the virtualization tooling built into the Linux kernel (and therefore into Ubuntu). The difference is that KVM virtualizes an entire system, whereas Docker virtualizes at the OS level and still relies on the underlying OS for some things. Running the supervised HA setup under Docker on Ubuntu isn’t a supported install. When I went with Docker on Ubuntu I knew this, but I didn’t think it would cause me as much heartburn as it did. I got to the point where, in order to upgrade HA, I needed to reboot the whole system to clear errors before the upgrade would go through.

With a KVM install, I can create an entire VM that runs HassOS. Fewer of the changes on the Ubuntu host impact HA, and my HA system is now running in a supported manner. No more issues with upgrades, etc. The KVM image uses a good bit more system resources on my NUC, but that’s OK for my purposes.

My layman’s explanation is this: if you’re not comfortable with any of this, run HassOS on a Pi 4. If you have a NUC around for other things, you can put a HassOS image on it and that keeps you on the easy-button approach. If you know a lot about containers and Docker, then go for it, but you’re more on your own when you run into errors.

I have to admit, I’m not entirely sure what I did to get this to work, but it is now working. I have the following in my shell_command section now:

shell_command:
  notify: python3 /config/scripts/notify.py --msg "{{ message }}" --file_name "{{ states('sensor.front_camera_snap') }}" --token "{{ token }}" --space "{{ space }}" --type "{{ type }}"
  get_image: python3 /config/scripts/get_img.py --img_url="{{ img_url }}" --img_ext="{{ img_ext }}" --save_loc="{{ save_loc }}" --ha_url="{{ ha_url }}" --entity_id="{{ entity_id }}" --ha_token="{{ ha_token }}"
  mount_images: mkdir -p /mnt/ha_www_images; mount -t nfs4 192.168.2.101:/home/hassio/share/ha_www_images /mnt/ha_www_images

I have an automation that runs the mount_images piece on Home Assistant startup. I can see the directory in Portainer within HA, but I cannot see it from the SSH add-on, presumably because the add-on is a separate container and never sees a mount made inside the Home Assistant container. Perhaps this was working before I realized it, but I never saw the files show up when I was looking in the SSH add-on. Appreciate the extra eyes while I did the troubleshooting. Here’s a summary of what my automation actually does.

Action 1 in the automation is this:

service: shell_command.get_image
data:
  entity_id: sensor.front_camera_snap
  ha_token: <redacted>
  ha_url: 'http://192.168.2.102:8124'
  img_ext: jpeg
  img_url: 'http://192.168.2.31/snap.jpeg'
  save_loc: /mnt/ha_www_images/

My get_img.py looks like this:

import argparse
import urllib.request
import uuid 
import requests
from PIL import Image

parser = argparse.ArgumentParser(description='This script is used to get a snapshot image and save it with a unique name.  It then updates Home Assistant with the file name to use later.')
parser.add_argument("--img_url", default="https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png", help="http link for image")
parser.add_argument("--img_ext", default="jpeg", help="image type, ie: jpeg, png, etc")
parser.add_argument("--save_loc", default="/config", help="where to save the image on the hassio instance")
parser.add_argument("--ha_url", default="", help="url for home assistant")
parser.add_argument("--entity_id", default="", help="entity_id to update with new file name")
parser.add_argument("--ha_token", default="", help="long life token from Profile > Long-Lived Access Tokens")
args = parser.parse_args()

def main():
        file_name = str(uuid.uuid4()) + '.' + args.img_ext
        print(file_name)
        urllib.request.urlretrieve(args.img_url, args.save_loc + file_name)
        img = Image.open(args.save_loc + file_name)
        new_img = img.crop((600,0,1200,800))  # crop box is (left, upper, right, lower)
        new_img.save(args.save_loc + file_name)
        ha_url = args.ha_url + '/api/states/' + args.entity_id
        ha_headers = {
                'Authorization': 'Bearer ' + args.ha_token
                , 'content-type': 'application/json'
        }
        ha_payload = '{"state":"'+file_name+'"}'
        x = requests.request("POST",ha_url,headers=ha_headers, data=ha_payload)
if __name__ == '__main__':
    main()
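
Two small things I may tighten up later: save_loc has to end with a trailing slash because the script just concatenates strings, and the state payload is built by hand. A sketch of the more forgiving version (hypothetical helper, not what’s running today):

import json
import os

def build_save_path_and_payload(save_loc, file_name):
        # os.path.join works whether or not save_loc ends with a slash
        save_path = os.path.join(save_loc, file_name)
        # json.dumps handles the quoting/escaping instead of hand-building the string
        payload = json.dumps({'state': file_name})
        return save_path, payload

print(build_save_path_and_payload('/mnt/ha_www_images', 'example.jpeg'))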

Then action 2 turns on a light, and action 3 shows the image on my Shield:

service: notify.tv_screen
data:
  data:
    color: grey
    duration: 5
    file:
      url: 'http://192.168.2.31/snap.jpeg'
    fontsize: large
    interrupt: 1
    position: bottom-right
    transparency: 25%
  message: Gate Opened
  title: Home Assistant

Actions 4 & 5 use Google Chat to send the image to 2 chatbot users:

The action looks like this:

service: shell_command.notify
data:
  message: Gate has been opened
  space: <redacted>
  token: <redacted>
  type: gate_image

And notify.py looks like this:

from httplib2 import Http
from json import dumps
import argparse

parser = argparse.ArgumentParser(description='This script is used to push messages to a Hangouts Chat room.  It is used by Home Assistant to send notifications.')
parser.add_argument("--msg", default="default message content", help="This is used to send the message content")
parser.add_argument("--type", default="default message content", help="gate_image or text")
parser.add_argument("--space", default="<redacted>", help="Google chat space key")
parser.add_argument("--token", default="<redacted>", help="Google chat token")
parser.add_argument("--file_name", default="default message content", help="file to send to chat")
args = parser.parse_args()

def main():
    url = 'https://chat.googleapis.com/v1/spaces/' + args.space + '/messages?key=<redacted>&token=' + args.token
    if args.type == 'gate_image':
        img_url = 'https://<fqdn.com>/ha_www_images/' + args.file_name
        # Google Chat card: an image widget that opens the full image when clicked
        bot_message = {
            'cards': [{
                'sections': [{
                    'widgets': [{
                        'image': {
                            'imageUrl': img_url,
                            'onClick': {
                                'openLink': {'url': img_url}
                            }
                        }
                    }]
                }]
            }]
        }
    else:
        bot_message = {'text': args.msg}

    message_headers = {'Content-Type': 'application/json; charset=UTF-8'}
    print(bot_message)
    http_obj = Http()

    response = http_obj.request(
        uri=url,
        method='POST',
        headers=message_headers,
        body=dumps(bot_message),
    )

    print(response)

if __name__ == '__main__':
    main()
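
For what it’s worth, the same POST could also be done with requests (which get_img.py already uses) rather than httplib2. A minimal sketch with placeholder values, not the script I’m actually running:

import json
import requests

# placeholder space/key/token; the real values come in as script arguments
url = 'https://chat.googleapis.com/v1/spaces/<space>/messages?key=<key>&token=<token>'
bot_message = {'text': 'Gate has been opened'}

response = requests.post(
    url,
    headers={'Content-Type': 'application/json; charset=UTF-8'},
    data=json.dumps(bot_message),
)
print(response.status_code, response.text)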