Introduction
I have been using Unifi Protect for some time now, and I love the cameras and the ability to have continuous video recording without any subscriptions. However, Unifi has been slow to add detailed controls over motion detection; there is a basic sensitivity slider with no ability to troubleshoot what it's doing or why. Adding multiple zones works okay to reduce the noise of such a limited system, but overall it's lacking.
Unifi Protect also lacks the ability to integrate nicely with external systems; MQTT? Webhooks? Nope, just email or push notifications.
These problems will always exist when triggering alerts on raw motion: shadows from trees blowing in a good gust, bugs going bonkers for infrared, heavy rain and snow, and naturally the neighbourhood animals like cats, birds, and the occasional dog on the loose. So this is what I have come up with, and I hope it's useful for you.
Lastly, a huge shout out and thank you to all the developers of the platforms, components, and examples I used to pull this all together; I’ve done my best to mention everyone throughout the write-up.
Disclaimer
I hate to point out the obvious, but neither I nor any component authors are responsible for cloud consumption and related billing, privacy, the safety and protection of your property or persons, or anything else whatsoever. The following is not a security system, and I take no responsibility and offer no indemnification for your protection, safety, or loss. You must always have these or any related requirements handled professionally. The following guide should ultimately be considered an amateur project that is unreliable and not suitable to meet your security standards and requirements. Without further ado, this is what I cobbled together:
Benefits
So far I have been able to achieve the following improvements to camera automation:
- Count the number of people detected and return this with the primary detection (e.g. 1 Person Detected)
- Distinguish between People, Emergency Services (police, firefighters, paramedics, etc.), Bugs, and Animals. Rekognition returns a very robust set of detections, so you will be able to customize to your needs.
- Enable/disable alerts for select cameras. Sometimes I want to hang out on the patio or mow the lawn and silence select cameras while leaving the others detecting.
- Include a snapshot in push notifications. Tying this all together, I can get a fast glimpse from the detecting camera to see if I care. I need to know if people are approaching, not another neighbourhood cat out for a rip. Compared to Unifi’s app, I can do this instantly vs. a 10+ second load time.
- Trigger Fido the fake guard dog to bark on all media_players if a person (not emergency services) is detected while we are away, the house is unoccupied, and guest mode is off.
- Triggering Rekognition based on tunable motion detection in Kerberos.io lets you manage your costs. Halfway through July my bill is approximately $0.25 CAD. Compared to $100 CAD for a Coral, I am still ahead on ROI for quite some time (only at the cost of personal privacy with Amazon :)).
Design Specific Choices
The following design choices for this solution have mostly come from personal preference and level of effort.
First was selecting Kerberos.io to perform motion detection on a camera stream. There are plenty of comparisons online if you want to deep-dive the alternatives, but I selected Kerberos.io since it was easy to spin up in Docker, and the GUI can be used to set up motion detection and output webhooks in about 2 minutes. It runs pretty well on the super old Intel E5200 machine I run home-assistant on, but ultimately I hope to eliminate this dependency when Unifi Protect improves.
Secondly, I am really excited to see where blakeblackshear’s Frigate gets to, but it has a level of configuration complexity I wasn’t willing to dive into (or probably capable of), and a dependency on specialized Google Coral hardware. Maybe in the future, but for now offloading that complexity to AWS Rekognition was a natural choice, with caution towards cost management (set up billing alerts!).
What You Need
The following setup depends on many pre-requisites:
- Note: this tutorial is Linux/Debian based
- Docker host
- AWS account
- Home-Assistant
  - Configuration is also available on my github
- Node-Red
- ludeeus’ Home Assistant Community Store (HACS)
  - Please see installation and configuration on the github page.
  - Custom Component (via HACS): robmarkcole’s AWS Rekognition Component
- Mosquitto (or any MQTT broker)
- Pushover (or equivalent, just personal preference)
- Optional: CCOSTAN’s guard dog
Architecture
- Unifi Protect exposes RTSP through the controller
- Kerberos.io pulls the RTSP stream for motion detection, and calls a webhook on Node-Red
- Node-Red receives the event, triggers image processing in HASS, retrieves the results, performs enrichment and detection, then triggers alerts and Fido
- Home-Assistant receives the detection and displays the top recognized result
Configuring Kerberos.io (KIOS)
There are a few options to get started with KIOS, but ultimately I decided to clone the repo and use bin/dockeros.sh to bring my containers online. Unfortunately you need a 1:1 ratio of containers to cameras, but bin/dockeros.sh keeps this down to 2 instead of 4.
- Clone the repo and change to the environments directory:
git clone https://github.com/kerberos-io/docker.git kerberos-io
cd kerberos-io/bin/environments
- Setup your camera XML for configuration storage. I duplicated the examples from the repo, and renamed the folders according to my needs:
- Change back to the bin folder and use dockeros to create the containers based on your environments folders. The first port is the web GUI port; the second is the streaming port (not used for this).
cd kerberos-io/bin/
./dockeros.sh create front_door front_door 32781 32782
./dockeros.sh create back_door back_door 32783 32784
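If you have more than a couple of cameras, the create calls can be generated in a loop. This sketch just prints the commands as a dry run (the camera names and starting port are examples); remove the echo to actually run them:

```shell
# Print the dockeros.sh invocations for a list of cameras (dry run).
# Each camera gets a pair of ports: web GUI, then streaming.
port=32781
for cam in front_door back_door garage; do
  echo "./dockeros.sh create $cam $cam $port $((port + 1))"
  port=$((port + 2))
done
```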
- If everything went well, you should have KIOS instances for your cams:
- Head on over to your KIOS instances and configure as you like. My examples are below. For this I just used the low quality Unifi Protect RTSP URL and a resolution of 640x480, but you could use higher quality if desired.
IP Camera:
Redacted Motion Example:
Sensitivity (Tweak as you like):
Output (I used a webhook here since I preferred the throttle feature, but mqtt would work too):
Repeat for all your cams and you should be done!
HASS Cameras & Rekognition
I won’t cover installation of the rekognition component here; I recommend HACS to accomplish this.
- Setup a basic still image camera, you can use Unifi’s snapshot feature for this:
camera:
  - platform: generic
    still_image_url: http://192.168.2.81/snap.jpeg
    name: 'Front Door'
  - platform: generic
    still_image_url: http://192.168.2.80/snap.jpeg
    name: 'Back Door'
- Configure the rekognition component, use your own region and keys as needed:
image_processing:
  - platform: amazon_rekognition
    aws_access_key_id: !secret aws_access_key_id
    aws_secret_access_key: !secret aws_secret_access_key
    region_name: us-west-2
    target: Person
    source:
      - entity_id: camera.front_door
      - entity_id: camera.back_door
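The !secret references resolve against your secrets.yaml; a minimal sketch (the values are placeholders, not real keys):

```yaml
# secrets.yaml (placeholder values; keep this file out of version control)
aws_access_key_id: AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```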
- Define an Input Select to manage alerts:
input_select:
  camera_alerts:
    name: Camera Alerts
    options:
      - Enabled
      - Front Door
      - Back Door
      - Disabled
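With that in place, switching alert modes from an automation or script is a single service call. A hypothetical example (the entity id follows from the camera_alerts config):

```yaml
# Example: kill all camera alerts before heading out to mow the lawn
service: input_select.select_option
data:
  entity_id: input_select.camera_alerts
  option: Disabled
```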
- Define MQTT sensors for detections:
sensor:
  - platform: mqtt
    name: "Front Door Detection"
    state_topic: "image_processing/front_door/detection"
    force_update: true
  - platform: mqtt
    name: "Back Door Detection"
    state_topic: "image_processing/back_door/detection"
    force_update: true
- Restart HASS
Node-Red Automation
We now have all of our foundational components, so let’s start tying this together with node-red. I am no expert by any means, so if you have optimizations or simplifications, I would love to hear them!
Let’s break the flow down to its core components. The following uses just the ‘back door’ camera, but you can see it is duplicated per camera where necessary. See the attached flow for the full example to import into your own node-red instance.
- The flow starts by receiving the webhook from KIOS and triggering Rekognition image_processing. A rate limit is included to manage the number of notifications, and once the results are returned we grab the enrichment data and set a camera image snapshot source. We only really care about the state data in node 4, where it drops the results into the default msg.payload and msg.data.
- Set the topic for flow handling, define the snapshot for the camera, and set the count of how many objects matching the target were returned.
- Check the state of the input_select and do some basic regex to determine whether the process should continue. If the state isn’t Enabled or the camera’s name, we assume disabled.
- This is where I feel the flow gets a little messy/sprawling: connect each camera flow to the detection flows. I’ve not shown every single flow, but they are effectively the same. There is probably a JS function that could replace this, but I’m not there yet.
- Optional: Snag the attributes, this is an array of the detections. Pick the top attribute and send it to the camera’s MQTT topic for display in lovelace.
The determination function for node 8:
// The results arrive keyed by detection label; grab the first
// (top) key and make it the payload for the MQTT topic.
var value = Object.keys(msg.payload)[0];
msg.payload = value;
return msg;
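The enabled/disabled gate earlier in the flow boils down to one comparison. A sketch of the check as plain logic (the function name is my own illustration, not a node in the flow):

```javascript
// Should the flow continue for this camera, given the current
// state of input_select.camera_alerts?
function shouldAlert(state, cameraName) {
    // Continue only when alerts are globally Enabled or this
    // specific camera is selected; anything else means disabled.
    return state === 'Enabled' || state === cameraName;
}
```

Selecting ‘Front Door’, for example, keeps the front door alerting while every other camera stays quiet.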
- Next is choosing your detections. It would be great to optimize this into one node, but I haven’t figured that out yet. Rekognition can return different descriptions, so I have included what I have discovered so far; e.g. Person and Human are effectively the same, so I have used a small rate limit to de-duplicate. A detection confidence rating of 90 has been fairly effective for me so far.
Note: Google some images of your home town police, fire, and paramedics. Rekognition worked fine for my police and fire, but detected paramedics as ‘officers’. I just didn’t want fake Fido barking at them.
- Based on the detections you chose, prepare the final triggers. The following fires fake Fido and sends a Pushover push.
The message structure function for node 19:
// Compose the alert text, e.g. "2 Person Detected"
msg.payload = msg.count + " " + msg.detection + " Detected";
return msg;
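The chain of detection switch nodes amounts to label matching behind a confidence cutoff. A single-function sketch of the idea (the label lists and category names are illustrative; only the Person/Human duplication, the 90 cutoff, and the paramedics-as-officers quirk come from my own testing):

```javascript
// Roughly what the detection switch nodes decide, collapsed into one
// function: return a category for a Rekognition label, or null to ignore it.
function classify(label, confidence) {
    if (confidence < 90) return null; // below my confidence cutoff
    var people = ['Person', 'Human'];                           // effectively duplicates
    var emergency = ['Police', 'Officer', 'Fireman', 'Paramedic'];
    var animals = ['Cat', 'Dog', 'Bird'];
    if (emergency.indexOf(label) !== -1) return 'emergency';    // no fake Fido for these
    if (people.indexOf(label) !== -1) return 'person';
    if (animals.indexOf(label) !== -1) return 'animal';
    return null;
}
```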
Final Results
A picture elements card for lovelace showing the top detection:
elements:
  - type: image
    camera_image: camera.back_door
    camera_view: live
    style:
      height: 100%
      left: 50%
      top: 50%
      width: 100%
  - type: state-label
    entity: sensor.back_door_detection
    style:
      background-color: 'rgba(255, 255, 255, 1)'
      bottom: 0px
      font-weight: bold
      right: 0%
      transform: initial
image: /local/camerablank.png
type: picture-elements
Redacted example of detection:
A redacted pushover notification detecting an Amazon Delivery:
Close Out
If you have any other questions or need to see the full config, check out my github. You can find the full node-red flow here for importing: