[AppDaemon] Tutorial #4 Libraries & Interactivity

Hi everybody!

Recently I gave some feedback to a user who was looking to convert their YAML automation file to the app-based Python structure of AppDaemon. It dawned on me that a lot of users of AppDaemon are seeking the extra flexibility that Python affords, but may not necessarily be familiar with how to write Python in the first place! Some may have a CS background, while others may not. This post is part of a tutorial series of sorts wherein I tackle a problem and show both a simple and a complex way to write an AppDaemon app! The simple version will be ready to use right out of the box. The complex version will also run "out of the box", but you'll likely want to tweak it to work specifically with your own setup! I will go over more advanced concepts and style-guide mentions here.

##Tutorials

  1. Tracker-Notifier - monitor devices’ on state, and receive a notification after a user-specified length of time
  2. Errorlog Notifications - have a persistent notification appear on the dash any time AppDaemon errors out
  3. Utility Functions - create general purpose functions that other apps can refer to in order to simplify your code
  4. Libraries and Interactivity - learn to use others’ code through import, read through documentation, and give your users control over portions of the app

In the easy version, I will try to break the solution down in simple terms so that someone who knows absolutely nothing about Python can understand them. The more complex example will also have a brief description explaining general logic, but will be targeted towards those of you who have a better grasp of the fundamentals.

I hope you enjoy!


#Tutorial #4 - Libraries and Interactivity

In tutorials 1-3, we’ve learned many of the basics of writing a good AppDaemon app with Python. We’ve learned…

  • to centralize datapoints in various containers like list and dict
  • to work with key functions in AppDaemon like self.run_in() and self.get_state() and self.log()
  • to pass data to various parts of our app through the use of self and kwargs
  • the importance of working with and manipulating handles in the AppDaemon Scheduler
  • to provide direct feedback of what's going on in our apps with self.notify() and persistent notifications
  • to organize our common functions in a single app so that updating and upgrading all other apps is easy and painless to do

…which are all principles of writing a great app. In lesson #3 we learned that it isn't necessary to reinvent the wheel every time we write a new app; lesson #4 will show you two more powerful features of Python and AppDaemon: how to utilize external libraries (code that helps you do much more complex things), and how to give the user some control over what happens within the app itself.

The way you can utilize others' code is through the use of the import statement. We've been doing this all along with import appdaemon.appapi as appapi! I won't go into what's going on under the hood here, but it's important to know that the idea of import is similar to how we use self.get_app(). :slight_smile:
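To make that analogy concrete, here's a quick stdlib-only sketch of the two import forms we'll use below (json and math stand in for imageio and picamera, since you may not have those installed yet):

```python
# Two common import forms, shown with standard-library stand-ins.

# Form 1: import the whole module, and access names through it.
import json

payload = json.dumps({'event': 'camera_snap'})  # module.function(...)

# Form 2: pull one specific name out of a module, just like
# "from picamera import PiCamera" does in the app below.
from math import sqrt

diagonal = sqrt(1920 ** 2 + 1080 ** 2)  # use the name directly

print(payload)
print(round(diagonal))
```

Either form works; whole-module imports keep it obvious where a name came from, while `from x import y` keeps calls short.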

Before you move on, you'll first want to get yourself a Raspberry Pi Camera Module and run through the setup. Do everything up until you get to "Usage".

— simple —

For today’s lesson, we’re going to have two objectives:

  1. Take a snapshot picture or a video using the camera module.
  2. Take a time-lapse video when prompted and save to a user-defined location

I'm going to go through this lesson a bit faster than I have in our previous ones, as there's a lot of content here! This app isn't as plug'n'play as our last ones either, but I will show you what you need to change to make it your own. We're going to have a lot of fun putting together a lot of the parts we've learned in all our lessons so far. Let's get started! :smiley:

import appdaemon.appapi as appapi
import imageio
import re
from picamera import PiCamera
from glob import iglob
from threading import Thread

#
# App that uses the Camera module attached to the RPi
#
# Prerequisites
# sudo apt-get update
# sudo apt-get install python3-picamera python3-imageio
#
# Args (set these in appdaemon.cfg)
# non-required arguments are starred below..
#
# width = resolution on the x axis
# height = resolution on the y axis
# rotation = **number of degrees to rotate the camera
# hflip = **flip the camera on the x axis
# vflip = **flip the camera on the y axis
# save_path = dir to save snapshots and animations !! needs write permissions
# timelapse_dir = the directory to save timelapses in !! needs write permissions
# timelapse_width = resize the timelapse image resolution on the x axis down to
# timelapse_height = resize the timelapse image resolution on the y axis down to
# create_tl_gif = **single|infinite .. create a gif after timelapse finishes
#
# Events
# 'camera_start' = initialize the camera
# 'camera_stop' = stop the camera entirely
# 'camera_snap' = take a picture/snapshot
# 'camera_start_video' = start a video
# 'camera_stop_video' = stop a running video
# 'camera_start_timelapse' = start a timelapse sequence; taking pictures at 
#                            every number of seconds in 'delay', if 'delay' is 
#                            not set take a picture every 30 mins
# 'camera_stop_timelapse' = stop a timelapse sequence
#
# # Apps
#
# [camera]
# module = camera
# class = Camera
# dependencies = utils
# width = 1920
# height = 1080
# rotation = 180
# save_path = /home/homeassistant/.homeassistant/appdaemon/conf/camera
# timelapse_dir = /home/homeassistant/.homeassistant/appdaemon/conf/camera/timelapse
# timelapse_width = 1280
# timelapse_height = 720
# create_tl_gif = single
#

class Camera(appapi.AppDaemon):

    def initialize(self):
        self.utils = self.get_app('utils')
        self.timelapse_running = False

        self.listen_event(self.setup, event='camera_start')
        self.listen_event(self.teardown, event='camera_stop')
        self.listen_event(self.snapshot, event='camera_snap')
        self.listen_event(self.start_video, event='camera_start_video')
        self.listen_event(self.stop_video, event='camera_stop_video')
        self.listen_event(self.start_timelapse, event='camera_start_timelapse')
        self.listen_event(self.stop_timelapse, event='camera_stop_timelapse')

        self.setup(None, None, None)

    def terminate(self):
        self.teardown(None, None, None)

    def setup(self, event_name, data, kwargs):

        # picamera docs
        # http://picamera.readthedocs.io/en/latest/api_camera.html

        self.log('Warming up the camera.')
        self.camera = PiCamera()
        self.camera.resolution = (int(self.args['width']), int(self.args['height']))

        if 'rotation' in self.args:
            self.camera.rotation = self.args['rotation']
        if 'hflip' in self.args:
            self.camera.hflip = True
        if 'vflip' in self.args:
            self.camera.vflip = True

        if 'timelapse_width' not in self.args:
            self.args['timelapse_width'] = 640
        if 'timelapse_height' not in self.args:
            self.args['timelapse_height'] = 480

        self.camera.start_preview()
        self.log('Camera initialized!')

    def teardown(self, event_name, data, kwargs):
        self.log('Tearing camera down..')
        self.camera.close()

    def snapshot(self, event_name, data, kwargs):
        self.log('Taking a single snapshot!')
        self.camera.capture(
                output='{}/{}.jpg'.format(self.args['save_path'],
                                          self.datetime().strftime('%Y-%m-%d_%H:%M:%S'))
            )

    def start_video(self, event_name, data, kwargs):
        self.log('Starting a video!')
        self.camera.start_recording(
                output='{}/{}.mjpeg'.format(self.args['save_path'],
                                            self.datetime().strftime('%Y-%m-%d_%H:%M:%S')),
                resize=(1280,720)
            )

    def stop_video(self, event_name, data, kwargs):
        self.log('Stopping video!')
        self.camera.stop_recording()

    def timelapse(self, kwargs):
        if self.timelapse_running:
            self.timelapse_count += 1
            
            self.camera.capture(
                    output='{}/image{}.jpg'.format(self.args['timelapse_dir'],
                                                   self.timelapse_count),
                    resize=(int(self.args['timelapse_width']), 
                            int(self.args['timelapse_height']))
                )

            self.run_in(self.timelapse, seconds=kwargs['delay'], delay=kwargs['delay'])

    def start_timelapse(self, event_name, data, kwargs):
        if self.timelapse_running:
            raise RuntimeWarning(
                'Timelapse is currently running! Stop before running another timelapse.'
            )

        self.log('Starting a timelapse sequence!')
        self.timelapse_running = True
        self.timelapse_count = 0

        if 'delay' not in data:
            data['delay'] = 1800
        else:
            data = {'delay': data['delay']}

        self.timelapse(kwargs=data)

    def stop_timelapse(self, event_name, data, kwargs):
        self.log('Stopping the timelapse sequence!')
        self.timelapse_running = False
        self.timelapse_count = 0

        if 'create_tl_gif' in self.args:
            self.create_gif()

    def create_gif(self):
        
        if self.args['create_tl_gif'] == 'single':
            self.run_in_background(loop=1)
        elif self.args['create_tl_gif'] == 'infinite':
            self.run_in_background(loop=0)

        # general notification call
        self.utils.notify_supahnoob(message='Your animation is being made! You '
                                            'can find it in {}'
                                            .format(self.args['save_path']))

    def run_in_background(self, loop):
        """
        Creates the gif in a background thread so that AppDaemon isn't
        blocked, then notifies the user when the work completes
        """

        # imageio docs
        # http://imageio.readthedocs.io/en/latest/userapi.html#imageio.get_writer
        # http://imageio.readthedocs.io/en/latest/format_gif-pil.html#gif-pil

        def threaded_bg_proc(app, loop):
            save_location = '{}/animation.gif'.format(self.args['save_path'])
            images_fp = iglob('{}/image*'.format(self.args['timelapse_dir']))
            options = {"loop": loop, "fps": 10}

            with imageio.get_writer(save_location, mode='I', **options) as writer:
                for imgfile in sorted(images_fp, key=natural_sort_key):
                    image = imageio.imread(imgfile)
                    writer.append_data(image)

            message = 'Your animation is complete! You can find it in {}'.format(self.args['save_path'])
            app.utils.notify_supahnoob(message=message)

        kwargs = {'app': self, 'loop': loop}
        proc = Thread(target=threaded_bg_proc, kwargs=kwargs)
        proc.start()

def natural_sort_key(string):
    """
    See http://www.codinghorror.com/blog/archives/001018.html
    """
    return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string)]

First off, let's hit up the prerequisites section. This is the part of your documentation that should tell your users exactly what they need to get up and running with your app. For our app today, we'll want to make sure our distribution is up to date and that we also have picamera and imageio. These two libraries are likely not included in all distributions, so we'll want our users to have them ready before trying to run the app.

OK! Now that we're all ready to start up, we'll go through our imports. Remember, import is a lot like self.get_app(). It's a bunch of code someone else has written and localized, and we simply pull it into our program, or app. Below is a quick rundown of each library and how we use it.

imageio - used to help us create gif timelapses
re - provides pattern matching operations similar to those found in Perl
picamera - lets us interface with the camera module
glob - Unix style pathname pattern expansion
threading - provides tools for working with multiple threads (also called light-weight processes or tasks)

We then define a whole bunch of arguments. We want our camera app to be highly configurable, so that the user can get the most out of their hardware and do with it as they wish. These darn things are almost as expensive as the computer they're running on, so let's give our users some control over them! You can go through all the arguments yourself, but they're fairly self-explanatory.

We've also got a section in our documentation for events. These are those fancy things that HomeAssistant itself is built upon. Cameras, by function, are interactive devices, so we'll want to give our users some level of interaction with their device as well. We are going to have event listeners, and thus callbacks, for starting and stopping the camera instance itself, taking pictures/snaps and videos (objective #1), and starting and stopping timelapses as well (objective #2).
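The listen/fire pattern itself is simple enough to sketch with a tiny stand-in event bus (purely illustrative and stdlib-only; AppDaemon's real listen_event/fire_event do this plus a lot more plumbing):

```python
# A minimal stand-in for AppDaemon's listen_event/fire_event pattern.
class EventBus:
    def __init__(self):
        self.listeners = {}  # event name -> list of callbacks

    def listen_event(self, callback, event):
        # Register a callback to run whenever `event` fires.
        self.listeners.setdefault(event, []).append(callback)

    def fire_event(self, event, **data):
        # Call every callback registered for this event name.
        for callback in self.listeners.get(event, []):
            callback(event, data)

log = []
bus = EventBus()
bus.listen_event(lambda name, data: log.append((name, data)), 'camera_snap')
bus.fire_event('camera_snap', delay=30)
print(log)  # the one registered callback fired, with the event data
```

In our app, Home Assistant plays the role of the bus: firing `camera_snap` from an automation lands in the callback we registered in initialize().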

def initialize()

Our initialize is pretty straightforward this time. We pull in our utils app, set a single variable to keep track of the state of timelapses, and then register a whole bunch of event listeners. Finally we call the setup func to initialize the camera, and that's it!

def terminate()

I'm actually not sure if we've talked about this before … but this is another handy skeleton function that @aimc has come up with to help with the teardown of your app. In our case, we definitely want to free up camera resources if our app gets reloaded, which is why we call self.teardown() - we'll go over what this does in just a bit.

def setup()

This is the first time we actually interface with the PiCamera API. We'll instantiate a PiCamera() object, which is just a fancy way of saying we grab all the functions and instructions for working with the camera module. We'll next set the resolution and other aspects of the camera itself, like rotation and horizontal and vertical flip, and then set default args for the timelapse resolution if they weren't provided by the user. Finally we'll open up the preview so that we know the camera is warmed up and ready to go. Now we're ready to rock and roll!

Some notes about resolutions that should be made clear when using the hardware:

Modes with full field of view (FoV) capture from the whole area of the camera's sensor (2592x1944 pixels), using the specified amount of binning to achieve the requested resolution. Modes with partial FoV only capture from the center 1920x1080 pixels. The difference between these areas is illustrated in the picamera documentation.

There are other options of the camera object that we can set, and you can learn more about that here.

def teardown()

This one is really simple. We’ll simply close the camera instance, so that we free up the camera resource. If we didn’t, we’d get a nasty error when we try to start the app up again. :frowning:
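The close-on-teardown idea generalizes: if a resource has a close(), you want it called no matter how the app exits. Here's a stdlib-only sketch with a dummy class standing in for PiCamera (which, for what it's worth, can also be used directly in a `with` block per its docs); contextlib.closing guarantees close() runs even on an exception:

```python
from contextlib import closing

class DummyCamera:
    """Stand-in for PiCamera: something that must be closed when done."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

cam = DummyCamera()
with closing(cam):
    pass  # use the camera here; close() runs even if this block raises

print(cam.closed)  # True: the resource was freed
```

In AppDaemon we can't wrap the whole app lifetime in a `with`, which is exactly why terminate() exists as the hook for the same cleanup.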

def snapshot()

In this function, we'll take a single picture and save it to the path defined by the user in self.args. You'll see self.camera.capture() is just another function, like any other we've been working with! Sometimes, using an API or library really is that simple. You can find more about PiCamera().capture() here.

def start_video() & def stop_video()

Similarly to snapshot(), the camera API has a way for us to record video. There’s a start and a stop function to recording, and thus our app should have start_video() and stop_video() functions to match. This could be expanded upon and made a bit more robust, but these two functions give you explicit control over the module’s video stream.

def start_timelapse()

This function serves as an entry point to our timelapse feature. If the timelapse is currently running, we won’t be able to start another, so raise a RuntimeWarning if this happens. This will appear in your log, and even on the frontend if you’ve been following along in our tutorial series!

If a timelapse isn't currently running, we'll flip the running flag on, reset the image counter, and default the delay to 1800 seconds (30 minutes) if the user didn't supply one in the event data. Then we kick off the first call to timelapse(), which does the actual capturing.
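As a quick aside, the delay-defaulting logic in start_timelapse() can be collapsed into a single dict.setdefault call, which only fills in a value when the key is missing:

```python
# dict.setdefault fills in a default only when the key is absent.
data = {}                      # event fired with no delay supplied
data.setdefault('delay', 1800)
print(data['delay'])           # falls back to 30 minutes

data = {'delay': 60}           # user supplied their own delay
data.setdefault('delay', 1800)
print(data['delay'])           # the user's value is left untouched
```

Either style works; setdefault just reads a little more directly as "use this unless the user said otherwise."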

def stop_timelapse()

Set stop flags on the timelapse, reset the image counter to 0, and if the user wants an automated gif of the timelapse, we’ll call create_gif().

def timelapse()

This function is meant to be self-perpetuating, so that it calls itself over and over and over again until it's told to stop. Our first line of logic in this function will check to see if we're even supposed to keep running the timelapse, and if we are, increment the image counter by 1. Then we'll come across something familiar … self.camera.capture() … wait a second, you mean there's no native timelapse functionality in PiCamera()?? :open_mouth:

Well, that’s really no problem for us, since we’re creating something entirely new on top of the API!

This is the strength of using libraries/APIs. The hard work is done for you, so you get to create really cool things using someone else’s well written code! After we call capture, we’ll schedule another timelapse image to fire in a number of seconds defined by the user.
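That "take a picture, then schedule yourself again" pattern can be sketched without any camera at all. Below, a toy queue stands in for AppDaemon's scheduler (purely illustrative), and draining it shows the loop stopping as soon as the running flag flips:

```python
class TimelapseSketch:
    """Toy model of the self-rescheduling timelapse loop."""
    def __init__(self):
        self.running = True
        self.count = 0
        self.queue = []  # stand-in for AppDaemon's scheduler

    def run_in(self, callback, seconds):
        self.queue.append(callback)  # the real run_in would wait `seconds`

    def timelapse(self):
        if self.running:
            self.count += 1                          # "take a picture"
            self.run_in(self.timelapse, seconds=5)   # reschedule ourselves

app = TimelapseSketch()
app.timelapse()
while app.queue:               # drain the fake scheduler
    app.queue.pop(0)()
    if app.count == 3:
        app.running = False    # as if 'camera_stop_timelapse' arrived
print(app.count)  # 3: once the flag flips, no new capture is scheduled
```

The important detail is that stopping is passive: stop_timelapse() never cancels anything, it just flips the flag, and the next scheduled call quietly does nothing.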

def create_gif()

This function serves as an entrypoint for our gif creator. We'll simply set an argument for whether we want the gif to be on an infinite loop, and then send a notification to our user that the process is beginning. We don't want to hang up our program during the creation process, and it could take a while to actually finish the gif, so we'll let our user know once it's all done.

def run_in_background()

First off, Thread allows us to run a function, or a whole program, in the “background” and not hold up our program at all. This is really exciting news for us, because we don’t want AppDaemon to just sit around waiting for one of our functions to complete.

We'll define a couple of keyword arguments like loop and app. These two will help us keep our logger and notification functions available inside the threaded function. We'll then start our background process… but let's go over that first. In threaded_bg_proc() we assign a variable for our save_location, as well as where all the images are held. iglob will look in the directory we specify and hand back a lazy iterator (in no guaranteed order) for us to step through. We'll also define some options we want the gif writer to use.
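If you want to see iglob's behavior for yourself, here's a small stdlib-only demo (using a temporary directory rather than the camera's save path): it returns a generator rather than a list, and plain alphabetic sorting puts 'image11' before 'image2', which is exactly why we sort with a natural key before writing the gif.

```python
import os
import tempfile
from glob import iglob

with tempfile.TemporaryDirectory() as tmpdir:
    for n in (1, 2, 11):
        # create empty stand-in snapshot files: image1.jpg, image2.jpg, ...
        open(os.path.join(tmpdir, 'image{}.jpg'.format(n)), 'w').close()

    matches = iglob('{}/image*'.format(tmpdir))
    print(type(matches).__name__)   # a generator, not a list

    names = sorted(os.path.basename(p) for p in matches)
    print(names)  # plain sort: 'image11' lands before 'image2'
```

Because the iterator is lazy, nothing hits the disk until we actually loop over it, which is friendly on a timelapse directory full of images.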

Next up, we'll use a writer from imageio and read through a sorted list of our images (more on this in a minute), putting together a gif. This portion of our code really shouldn't be changed unless you know what you're doing!

Finally we’ll call a notify function that we’ve written and put in our utils. This is just a general function, and you should replace this with something of your own volition here. This serves as a followup notification to what we initially said in create_gif().

def natural_sort_key(string)

There’s a bit of Python magic in this one, so don’t worry about this too hard. Essentially what this does is help us sort filepaths that have numbers at the end. It gives us a key to sort on, rather than doing an alphabetic-only sort. I’ll show the difference below in a short example :slight_smile:

>>> items = ['image1', 'image2',  'image11', 'image3', 'image4', 'image12', 'image5']

>>> sorted_items = sorted(items)
>>> better_sorted_items = sorted(items, key=natural_sort_key)

>>> sorted_items
['image1', 'image11', 'image12', 'image2', 'image3', 'image4', 'image5']

>>> better_sorted_items
['image1', 'image2', 'image3', 'image4', 'image5', 'image11', 'image12']

We obviously want the second option here!

Here's a sample of the output! This was a timelapse of an hour and a half, shot in 1 minute intervals; about 100 images in total. Note that the higher the image quality and the more images you're sequencing into a timelapse, the larger the resulting filesize will be. Before I ran it through a compression service, it was 80MB… the gif linked above is roughly 17MB.


— complex —

The complex app is going to utilize class objects in order to listen for motion events over the video feed and then notify us of any such occurrence! I say this is a bit more complex as it's going to involve class structure, which I think I may introduce a bit later on to our newbies.

import appdaemon.appapi as appapi
import numpy as np
from picamera import PiCamera
from picamera.array import PiMotionAnalysis

#
# App that uses the Camera module attached to the RPi
#
# Prerequisites
# sudo apt-get update
# sudo apt-get install python3-picamera
#
# Args (set these in appdaemon.cfg)
# non-required arguments are starred below..
#
# width = resolution on the x axis
# height = resolution on the y axis
# rotation = **number of degrees to rotate the camera
# hflip = **flip the camera on the x axis
# vflip = **flip the camera on the y axis
# save_path = dir to save snapshots and animations !! needs write permissions
#
# Events
# 'camera_arm' = start sensing motion, arm the security camera
# 'camera_disarm' = stop sensing motion, disarm the security camera
#
# # Apps
#
# [camera]
# module = camera
# class = Camera
# dependencies = utils
# width = 1280
# height = 720
# rotation = 180
# save_path = /home/homeassistant/.homeassistant/appdaemon/conf/camera
#

class Camera(appapi.AppDaemon):

    def initialize(self):
        self.utils = self.get_app('utils')

        self.listen_event(self.start_sensing_motion, 'camera_arm')
        self.listen_event(self.stop_sensing_motion, 'camera_disarm')
        self.setup(None, None, None)

    def terminate(self):
        self.teardown(None, None, None)

    def setup(self, event_name, data, kwargs):

        # picamera docs
        # http://picamera.readthedocs.io/en/latest/api_camera.html

        self.log('Warming up the camera.')
        self.camera = PiCamera()
        self.camera.resolution = (int(self.args['width']), int(self.args['height']))

        if 'rotation' in self.args:
            self.camera.rotation = self.args['rotation']
        if 'hflip' in self.args:
            self.camera.hflip = True
        if 'vflip' in self.args:
            self.camera.vflip = True

        self.log('Camera initialized!')
        self.start_sensing_motion(None, None, None)

    def teardown(self, event_name, data, kwargs):
        self.log('Tearing camera down..')
        self.camera.close()

    def start_sensing_motion(self, event_name, data, kwargs):
        self.log('Starting video recording...')
        self.utils.notify_supahnoob(message='Starting motion sensor, let\'s watch \'em!')
        self.camera.start_recording(
                output='/dev/null',
                format='h264',
                motion_output=DetectMotion(camera=self.camera, app=self)
            )

    def stop_sensing_motion(self, event_name, data, kwargs):
        self.log('Stopping video recording...')
        self.utils.notify_supahnoob(message='Stopping motion sensor, you\'re vulnerable!')
        self.camera.stop_recording()


class DetectMotion(PiMotionAnalysis):
    
    def __init__(self, camera, app):
        # let PiMotionAnalysis run its own setup first
        super(DetectMotion, self).__init__(camera)
        self.camera = camera
        self.camera_app = app
        self.basepath = self.camera_app.args['save_path']
        self.img_number = 0

        width, height = self.camera.resolution
        self.cols = ((width + 15) // 16) + 1
        self.rows = (height + 15) // 16

    def analyse(self, a):
        # this is rudimentary, and can certainly be improved upon by someone
        # smarter than myself :)
        
        a = np.sqrt(
                np.square(a['x'].astype(np.float)) +
                np.square(a['y'].astype(np.float))
            ).clip(0, 255).astype(np.uint8)

        # If there're more than 10 vectors with a magnitude greater
        # than 60, then say we've detected motion
        if (a > 60).sum() > 10:
            self.camera_app.log('Detected motion!')
            self.img_number += 1
            self.camera.capture(
                    output='{}/motion{}.jpg'.format(self.basepath, 
                                                    self.img_number),
                    resize=(2592,1944)
                )

            self.camera_app.utils.notify_supahnoob(message='Motion detected!')

I won't explain this app much, as a lot of the basics are covered in the simple version above. The complex app notably pipes the output of the video to /dev/null, which is just a fancy way of throwing it away. This could certainly be sent to a rolling buffer if you wanted to develop a "cache" of sorts. At that point, when motion is detected you could save that cache instantly as a video, instead of taking a snapshot. There are plenty of applications here and they will vary based on your use cases, but I hope this gives you a jumping-off point!
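The rolling-buffer idea can be sketched with the stdlib's collections.deque: give it a maxlen and it silently drops the oldest items, so at any moment it holds only the last N "frames". (For real video, picamera ships a purpose-built circular stream, PiCameraCircularIO, worth a look in its docs.)

```python
from collections import deque

# Keep only the most recent 5 "frames"; older ones fall off the front.
buffer = deque(maxlen=5)

for frame_number in range(12):
    buffer.append('frame{}'.format(frame_number))

# Motion detected! The buffer holds only the moments leading up to it.
print(list(buffer))
```

This is the whole trick behind "save the 10 seconds before the motion happened": you don't record everything, you just never let the buffer grow.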

The DetectMotion class is a really, really simple way of doing motion detection, and it's vulnerable to a lot of false positives. I take no credit for this code! It was copied verbatim from the documentation. :slight_smile: I encourage you to Google other methods and implement them on your own! The only required part of the class is that you write your own def analyse (yes, it must be exactly "analyse", with an s!)
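Stripped of NumPy, the decision rule inside analyse() is just "count the vectors whose magnitude clears a threshold, and call it motion if enough of them do." A plain-Python sketch of that rule, with made-up vectors:

```python
from math import hypot

def looks_like_motion(vectors, magnitude=60, min_vectors=10):
    """Same rule as analyse(): enough sufficiently large motion vectors."""
    big = sum(1 for x, y in vectors if hypot(x, y) > magnitude)
    return big > min_vectors

still = [(1, 2)] * 100            # tiny vectors: a static scene
moving = still + [(50, 60)] * 20  # 20 large vectors mixed in

print(looks_like_motion(still))   # False: nothing clears the threshold
print(looks_like_motion(moving))  # True: 20 big vectors > 10
```

Tuning the two thresholds is where you fight false positives: raise `magnitude` to ignore small jitters, raise `min_vectors` to ignore small objects.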

Another notable part of the complex app is that you’ll see we’re actually passing our full Camera app into DetectMotion. This allows us to preserve our utility functions and loggers. It’s a very important aspect to understand!
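This "hand the helper a reference back to the app" pattern is plain Python: the helper just stores the app object, and can then call the app's logger or notifier whenever it needs to. A minimal sketch with stand-in classes:

```python
class App:
    """Stand-in for the Camera app: owns logging and notification."""
    def __init__(self):
        self.messages = []

    def log(self, message):
        self.messages.append(message)

class Helper:
    """Stand-in for DetectMotion: keeps a reference back to the app."""
    def __init__(self, app):
        self.app = app  # preserve access to the app's utilities

    def on_motion(self):
        self.app.log('Detected motion!')  # call back into the app

app = App()
helper = Helper(app)
helper.on_motion()
print(app.messages)
```

Since Python passes objects by reference, Helper isn't copying the app; it's talking to the very same instance AppDaemon manages.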

Not that you need an example output of the capture, but hey, motion2.jpg is below! :wink:

If you open this image in a new tab directly, you’ll see that this is actually a very massive resolution photo! REALLY slick if you’re trying to identify a perp. :cop:



Feedback is important! This series will only be as successful as you all make it to be! Let me know your thoughts and what you all would like to see, and I’ll consider all my options.

Happy Automating!

  • SN

This is good stuff. I don't have HA set up on a Pi and also don't have a camera to use to walk through it all myself, but it was a great read nonetheless. Keep up the great work!