Shinobi or Zoneminder?

Just want to share what I did to continue using ZM. I built my own image, plagiarizing most of dlandon's repo. I use a TPU for object detection and a GPU for OpenCV/face detection, and it works fine.
However, I then read about mlapi (from the same developer as zmeventserver), which basically takes the analysis out of ZM; in short, it's just much faster. I also created a Docker image for it and have been using it successfully for 2 months.
You're welcome to try them; however, I must clarify that I'm not a developer at all and I've put these images together by reading/copying and lots of trial and error, so I'm not in a position to offer any significant support.
You'll find them here: Docker Hub

mlapi
zm136
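For anyone wanting to try something similar, both a Coral USB TPU and an NVIDIA GPU can be passed into a container. A minimal docker-compose sketch, assuming the NVIDIA container runtime is installed; the image name, ports, and paths below are placeholders, not the actual images from this post:

```yaml
# Hypothetical sketch only - substitute your own image and paths.
services:
  zoneminder:
    image: your-dockerhub-user/zm136:latest
    ports:
      - "8080:80"                        # ZoneMinder web UI
    devices:
      - /dev/bus/usb:/dev/bus/usb        # pass the USB Coral TPU through
    runtime: nvidia                      # expose the NVIDIA GPU for OpenCV
    volumes:
      - ./zm-data:/var/cache/zoneminder  # event storage
    restart: unless-stopped
```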

I’ve been a long time user of Zoneminder, even writing a series of blog posts about integrating it with Home Assistant and using zmEventNotification for person detection.

I also bought a coral.ai TPU to experiment with and have since switched to Frigate: blakeblackshear/frigate: NVR with realtime local object detection for IP cameras (github.com)

I was a bit reluctant to leave Zoneminder behind, as it's worked quite well for me for a long time, but I like Frigate's cleaner way of handling the main stream and substream from a camera: you do detection on the low-res stream and record the high-res one. Zoneminder can do it with linked monitors, but your interface is then cluttered with both, whereas Frigate treats them as a single camera with two inputs.
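For reference, this is what the "single camera, two inputs" approach looks like in Frigate's config, using its `roles` mechanism. A sketch only: the camera name and RTSP URLs are placeholders.

```yaml
cameras:
  front_door:                 # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.10:554/sub    # low-res substream
          roles:
            - detect          # object detection runs on this stream
        - path: rtsp://user:pass@192.168.1.10:554/main   # high-res main stream
          roles:
            - record          # full-resolution recording
```

Both streams belong to one camera entry, so the UI shows a single camera rather than two linked monitors.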

It’s working really well for me and integrates smoothly into Home Assistant. Definitely worth a look.


I also used ZM for like 10 years so switching was a difficult choice. When I saw just how easy it was to stand up a Frigate docker, point it at all my cameras and start detecting things with ML and using that to trigger recordings and notifications – that was it.

@seanb-uk, don't you want that the other way around? If you perform detection on a low-res stream, how are you going to accurately determine if it's a cat or a dog? I have mine flipped: high-res detection, low-res clips.

I’m only really interested in person detection, and that seems to work fine on the 640x360 substreams of my cameras. Apparently, from the docs, you should “choose a camera resolution where the smallest object you want to detect barely fits inside a 300x300px square” because the machine learning model is trained on 300x300px images.

That would suggest I’d need a slightly higher resolution (given that the objects themselves only take up a small portion of the 640x360 view), but I haven’t found it to be a problem.

In general, I think it's better to use the low-res stream if you can, as it uses fewer resources. I also have those streams set to a low frame rate: 5fps.

I wouldn’t use the low res for clips though - I want those to be as clear as possible, so I use the high res stream at a higher frame rate for detailed, smooth video. Since that’s just being passed through, and not analysed at all, the CPU use is minimal.
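The setup described above (640x360 substream at 5fps for detection, with the high-res stream passed through untouched for recording) maps to Frigate's config roughly like this. A sketch, assuming placeholder camera names and URLs:

```yaml
cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.10:554/sub    # 640x360 substream
          roles:
            - detect
        - path: rtsp://user:pass@192.168.1.10:554/main   # full-res main stream
          roles:
            - record        # copied through, not analysed, so CPU use stays low
    detect:
      width: 640            # match the substream resolution
      height: 360
      fps: 5                # low detection frame rate saves CPU
```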