A few months ago I decided to replace my cloud-connected Blink cameras with a self-hosted solution. The Blink cameras work by recording clips for motion events only. I wanted to achieve the same thing, just without the reliance on the cloud (or on battery power).
I went shopping for PoE cameras and ordered a few Reolink RLC-410-5MP units. These are well priced and seemed to be fairly well regarded. I then set out looking at NVR software options. The main contenders I could see in this space were Blue Iris, ZoneMinder, and Shinobi. For what I wanted to achieve, though, these all seemed like overkill.
In the meantime the cameras arrived, and I quickly discovered that these Reolink cameras have their own in-built motion detection and can upload clips to an FTP server when it triggers. This made me think: do I even need to run NVR software?
The only thing I could see missing was a way to view the recorded clips (i.e. an equivalent to the Blink app). This was easily solved, though, as I already had a Plex server up and running. Two minutes to add a new “CCTV” library and the problem was solved. I’d already replicated what the Blink system offered me and improved on it, as Plex works everywhere (not just on my phone).
Unfortunately, I quickly discovered that the in-built motion detection can produce a lot of false positives. I don’t mind the disk usage, but the Plex library was filling up with videos of swaying trees, which made it difficult to spot actual events. I believe some Reolink models can detect people and vehicles, but not the ones I’d bought.
So I added DeepStack (an open-source object-detection server) to the mix, and to integrate it I wrote a small Python application. The final workflow is as follows:
- Cameras detect motion using the inbuilt motion detection.
- Cameras upload the motion clips to an FTP server.
- The Python application uses a file watcher to detect new files uploaded to the FTP server and waits for them to be closed (for the file to be completely written).
- The Python application then loops through every 15th frame of the video (roughly one frame per half second of footage) and asks DeepStack whether it detects a person in that frame.
- If a person is detected, it stops looping through the frames and moves the file to an “Accepted” folder. This is the folder that is hooked up to the Plex library. It also saves a still image of the frame containing the detection.
- If the end of the video is reached and no person was detected, it moves the file to a “Rejected” folder instead. I could just delete these but, just in case, I keep the rejected videos around for a while.
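The “wait for the file to be closed” step can be sketched as a size-stability poll, using only the standard library. This is my assumption of the mechanism (on Linux the app could instead subscribe to inotify’s CLOSE_WRITE event, e.g. via the watchdog library); the poll interval and timeout are made-up values:

```python
import os
import time

def wait_until_written(path, poll_interval=1.0, stable_polls=3, timeout=300):
    """Block until `path` stops growing, i.e. the camera has finished
    uploading it over FTP. The size being unchanged for `stable_polls`
    consecutive polls is treated as "file closed". Returns True on
    success, False on timeout."""
    deadline = time.monotonic() + timeout
    last_size, stable = -1, 0
    while time.monotonic() < deadline:
        try:
            size = os.path.getsize(path)
        except OSError:  # not visible yet, or removed mid-upload
            size = -1
        if size >= 0 and size == last_size:
            stable += 1
            if stable >= stable_polls:
                return True
        else:
            stable = 0
        last_size = size
        time.sleep(poll_interval)
    return False
```

A file watcher (e.g. watchdog’s `on_created` handler) would call this before handing the clip to the detection step.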
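The frame-sampling and accept/reject steps might look roughly like this. The DeepStack URL, confidence threshold, and folder names are my assumptions; decoding the frames (e.g. with OpenCV’s VideoCapture) and saving the still are left as injected callables so the sketch stays dependency-light:

```python
import shutil

import requests  # pip install requests

# Assumed DeepStack address; the detection endpoint accepts a multipart
# "image" upload and returns JSON with a list of predictions.
DEEPSTACK_URL = "http://localhost:5000/v1/vision/detection"

def deepstack_sees_person(jpeg_bytes, min_confidence=0.6):
    """Return True if DeepStack reports a 'person' above min_confidence."""
    resp = requests.post(DEEPSTACK_URL, files={"image": jpeg_bytes}, timeout=10)
    resp.raise_for_status()
    return any(p["label"] == "person" and p["confidence"] >= min_confidence
               for p in resp.json().get("predictions", []))

def triage_clip(video_path, frames, has_person, accepted_dir, rejected_dir,
                every_nth=15, save_still=None):
    """Sample every `every_nth` frame; on the first positive detection,
    optionally save the still and move the clip to the Accepted folder,
    otherwise move it to Rejected. `frames` is any iterable of decoded
    frames (e.g. from OpenCV's VideoCapture) and `has_person` a callable
    such as a DeepStack query on the JPEG-encoded frame."""
    for i, frame in enumerate(frames):
        if i % every_nth == 0 and has_person(frame):
            if save_still is not None:
                save_still(frame)  # e.g. cv2.imwrite of the detection frame
            shutil.move(str(video_path), str(accepted_dir))
            return True
    shutil.move(str(video_path), str(rejected_dir))
    return False
```

Stopping at the first detection keeps the DeepStack load low: most accepted clips only need one or two frames checked.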
All of the above runs as a series of Docker containers. I also have cron jobs set up to delete videos once they reach a set retention age.
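The cleanup could be sketched along these lines; the folder layout and retention windows (30 and 7 days) are my assumptions, not from the original setup:

```shell
#!/bin/sh
# prune_cctv: delete clips older than a retention window.
prune_cctv() {
    root="$1"
    # Accepted clips are visible in Plex, so keep them longer...
    find "$root/Accepted" -type f -mtime +30 -delete
    # ...while rejected clips only stick around for a week.
    find "$root/Rejected" -type f -mtime +7 -delete
}

# The script would call e.g.:
#   prune_cctv /srv/cctv
# with a crontab entry running it nightly at 03:00:
#   0 3 * * * /usr/local/bin/prune-cctv.sh
```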
These cameras are also integrated into Home Assistant in two ways. Firstly, I use the ONVIF integration to provide live feeds in Home Assistant. This is very useful for notifications, such as doorbell or alarm alerts, as they can include the live feed in the notification itself.
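A notification automation along those lines might be sketched as follows. The entity and notify service names are hypothetical; the companion app attaches a snapshot of the camera via the camera proxy path:

```yaml
# Hypothetical entity and service names.
automation:
  - alias: "Doorbell pressed"
    trigger:
      - platform: state
        entity_id: binary_sensor.doorbell
        to: "on"
    action:
      - service: notify.mobile_app_my_phone
        data:
          message: "Someone is at the front door"
          data:
            # Attach a frame from the ONVIF camera's feed.
            image: /api/camera_proxy/camera.front_door
```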
The second way I integrate all this with Home Assistant is by allowing detections to trigger my Home Assistant alarm. To achieve this I’m using the Folder Watcher integration, targeted at the directory in which my Python app saves the detection still images. When a new image appears, it fires an automation that triggers the alarm.
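Under my assumptions about the paths and entity names involved, the Folder Watcher configuration and automation could look roughly like this (the integration fires a `folder_watcher` event for each file change it sees):

```yaml
# Watch the folder the Python app writes detection stills into.
# The path, alarm entity, and armed state are all assumptions.
folder_watcher:
  - folder: /media/cctv/detections

automation:
  - alias: "Person detected while armed"
    trigger:
      - platform: event
        event_type: folder_watcher
        event_data:
          event_type: created  # a new detection still was written
    condition:
      - condition: state
        entity_id: alarm_control_panel.home_alarm
        state: armed_away
    action:
      - service: alarm_control_panel.alarm_trigger
        target:
          entity_id: alarm_control_panel.home_alarm
```

The state condition means a swaying-tree false positive that slips through DeepStack can only set the alarm off while the house is actually armed.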
I can’t imagine anyone else will want to set all of this up in the same way but perhaps part of what I’ve done will be useful to someone else. If so, my Home Assistant config is also on GitHub for reference.