Shinobi or Zoneminder?

The article is interesting because it makes it 100% clear that ZM needs much better hardware than an RPi

Thanks for the note. Sadly, I moved everything to Synology to consolidate and reduce the noise/heat.

Xeoma looks interesting as well, may as well give that a try too :slight_smile:

Way to drag up an old post, but…

I’ve tested MotionEye on and off for a while on an RPi and an Intel NUC with a J5005 CPU, but with a Synology NAS as my primary CCTV capture device - it just works so well. I’ve also tried ZoneMinder in the past; I found it very difficult to get working in Docker, so never stuck with it. (I would love an AIO Docker solution if anyone has one.)

I wanted to ditch the NAS, an old DS115j, and move to a single solution for HA, CCTV, and Plex for my media content, so I decided to have a play around with an old Dell SFF Optiplex 990 running Ubuntu Server 18.04.3 LTS, with an i5-2400, 8GB RAM, and a 2TB WD Purple drive I pulled out of the NAS.

I installed MotionEye using Docker and got everything set up. CPU usage sat constantly in the region of 60-70% with 4 cameras on continuous record. I find the motion detection in MotionEye unreliable, and it drops frames constantly - one second I’m in the video, the next second, gone. Power usage sat constantly at 80 watts, which in Australia becomes expensive for an always-on system; I know this is somewhat due to the old CPU architecture. File sizes using Foscam FI9853EPs were in the region of 800MB per 10-minute video on medium/high capture settings.

I stumbled across Shinobi, and decided to give it a go.

Installed using Docker and it’s running very well. I’m very impressed with it so far. The learning curve for setup is much steeper than MotionEye’s, but once I got the hang of it, I found it easy to configure.

CPU usage is around 5%, RAM usage is around 15% (of 8GB), and power usage is sitting at 30-35 watts - much less taxing on the system than MotionEye. File sizes for a 15-minute video on roughly the same quality settings as MotionEye are around 240MB - significantly smaller despite a 50% longer record time.
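To put numbers on that: 800MB per 10 minutes works out to roughly 80MB per minute of footage for MotionEye, while 240MB per 15 minutes is about 16MB per minute for Shinobi - around a fifth of the storage for the same length of recording.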

Since Frenck is no longer developing the community container for this, I have just added it to HA via an iframe.
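
For anyone wanting to do the same, the iframe is just a panel_iframe entry in configuration.yaml - something like this sketch (the host IP is a placeholder; 8081 matches the host port in my docker config below):

panel_iframe:
  shinobi:
    title: Shinobi
    icon: mdi:cctv
    url: http://192.168.1.10:8081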

I came searching on the forums for info about Shinobi as I couldn’t believe I hadn’t heard of it before, and found this thread, so I thought I would share my experience with it so far. Perhaps the development of the software has improved, and it is something that more HA users should look into.


Thanks for your contribution @kanga_who. I never went very far with this, but I am grateful for the input - you may have reignited my interest. For one thing, I am more comfortable with Docker now.

This is the docker config I used. Two things to note: I changed the host port to 8081 as I already have Unifi on 8080, and the videos volume points at my mounted 2TB HDD.

sudo docker run -d \
  --name='Shinobi' \
  -e 'APP_BRANCH=dev' \
  -e TIMEZONE="Australia/Brisbane" \
  -p '8081:8080/tcp' \
  -v "/dev/shm/shinobiStreams":'/dev/shm/streams':'rw' \
  -v "$HOME/shinobiConfig":'/config':'rw' \
  -v "$HOME/shinobiCustomAutoLoad":'/customAutoLoad':'rw' \
  -v "$HOME/shinobiDatabase":'/var/lib/mysql':'rw' \
  -v /media/hdd1/shinobi:'/opt/shinobi/videos':'rw' \
  'shinobisystems/shinobi:latest-ubuntu'

I’ve been using ZM for years too, but I used it natively installed on Ubuntu for most of that time. I can confirm that it is extremely stable; you just need some CPU power to crunch the images. About two years ago I started getting into Docker, and I eventually realized there were several Docker containers for ZM, so around then I transitioned my natively installed version to a container.

That was the best decision, because upgrading ZoneMinder has always been a difficult chore for me. Docker takes much of the pain out of it because it won’t affect the system at large. About 2 months ago I found dlandon/zoneminder, a ZoneMinder Docker image with built-in ML support for object detection (THIS HAS MQTT!!).

I used to have an HA automation that would watch the number of captures, and if it incremented, it would notify my phone with something like “Front Door camera captured motion events 2=>3”. This wasn’t very reliable, but it mostly worked, and I used it like that for about a year.
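
That automation was essentially a state trigger on the ZM event-count sensor. A rough sketch of the idea (the sensor and notifier names are placeholders for whatever your ZoneMinder integration exposes):

automation:
  - alias: "Front door capture count increased"
    trigger:
      - platform: state
        entity_id: sensor.front_door_events   # event-count sensor (placeholder name)
    condition:
      - condition: template
        value_template: >
          {{ trigger.from_state is not none and
             trigger.to_state.state | int(0) > trigger.from_state.state | int(0) }}
    action:
      - service: notify.mobile_app_phone   # placeholder notifier
        data:
          message: >
            Front Door camera captured motion events
            {{ trigger.from_state.state }}=>{{ trigger.to_state.state }}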

I get a lot of spider webs and bugs that love to fly in front of the cameras, triggering them throughout the day and night. Because the dlandon ML integration includes MQTT, you can set it up to only publish an MQTT event on a successful object/face/license-plate detection; otherwise it still records to ZM but doesn’t create an MQTT message. I used to get notifications for all of the bugs and webs, but now with object detection and MQTT I only get notified about the stuff I care about. The alerting is much quicker and more reliable.

The thing I like most is that the capture image for the ML analysis gets saved with the detection zones drawn on it. This last weekend I was able to revamp the HA ZM notifications and include the image in my notification! So now, when the camera is triggered, the ML fires up and runs analysis on both the highest-score alarm frame and the snapshot frame; if it detects something, it sends an HTML5 notification with the details of the detection and the processed image. For example…

On my computer:
[screenshot: desktop notification with the detection details and processed image]

On my phone:
[screenshot: phone notification]

I’ve noticed that the notification arrives within 30-60 seconds of the capture starting, which is a big improvement over the notification methodology I was using before. Anyway, I know ZM isn’t perfect and the built-in detection feels very dated, but with this ML integration the zone setup does a pretty good job of capturing the stuff you care about.
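
If anyone wants to wire up something similar on the HA side, it’s roughly the sketch below. Treat the topic layout and payload keys as assumptions - zmeventnotification publishes a JSON message per monitor over MQTT, but watch your broker to confirm the exact shape before copying this - and fid=objdetect requests the frame with the detection boxes drawn on it:

automation:
  - alias: "ZM object detection notification"
    trigger:
      - platform: mqtt
        topic: zoneminder/4   # monitor ID of the camera (placeholder)
    action:
      - service: notify.html5   # or whatever notify service you use
        data:
          title: "Front door"
          # payload keys (name, eventid) are assumptions - inspect a real message first
          message: "{{ trigger.payload_json.name }}"
          data:
            # zmhost is a placeholder; append auth parameters if your portal requires login
            image: "http://zmhost/zm/index.php?view=image&eid={{ trigger.payload_json.eventid }}&fid=objdetect"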

FWIW: my ZM server hosts 6 cameras, all Reolink (RLC-410/RLC-511/RLC-520), and the server sits at a constant load of between 3 and 4 doing detection all day long. All are throttled to 10fps at 1280x720 or 1280x960, depending on the camera model. The captures are very high quality, and with these settings I could record days (maybe weeks) of continuous footage with the storage behind it if needed.


Wow, I have tried all the different apps out there: MotionEye, Shinobi, ZoneMinder, etc.

Agent DVR is the shit. Soo many options: https://www.ispyconnect.com/download.aspx
Official integration with Home Assistant: https://www.ispyconnect.com/download.aspx

Running it locally on a Windows PC.
Accurate motion detection, MQTT, an Alarm Panel in Home Assistant, and all cameras are published automatically to Home Assistant.
No more lagging cameras using RTSP via Home Assistant, etc.
On top of that, I’m using Node-RED to send an image snapshot to Sighthound when motion is detected, to check whether there is a person.
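
The HA end of that is basically a snapshot service call on motion. A rough sketch with made-up entity names (Sighthound itself is called from Node-RED in my flow, though HA also ships a sighthound image_processing integration if you want to keep it native):

automation:
  - alias: "Snapshot on motion for person detection"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_motion   # placeholder motion sensor
        to: "on"
    action:
      - service: camera.snapshot
        data:
          entity_id: camera.front   # placeholder camera
          # paths outside /config must be listed in allowlist_external_dirs
          filename: /config/www/motion/front_latest.jpg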

My setup is very cool; hopefully I’ll have time to write a blog post sometime soon.


I think your second link is wrong.

Cool! I am busy trying the Docker option. I used to use iSpy many years ago, but found that when you started adding more than 5 cameras it would start to wig out a bit… Can you comment on camera numbers and performance at all?

It snuck in in 0.110.

I tried Agent DVR on an Intel i7 (3.2GHz), 16GB system and it ran fairly well with my 13 cameras. Granted, I was only consuming the substreams (640x480 @ 15FPS) instead of the 5MP streams.

I am still working on setting it up as an NVR, but other things have taken precedence.

What was the CPU running at? Were these H.264/H.265 cameras? Was it a dedicated box?

CPU is typically right around 25-35% with substreams. It’s on a dedicated(ish) box, and the cameras are Reolink 410/520 PoE cameras wired into a 16-port PoE switch. The server also has PerconaDB and InfluxDB running on it.

This is it running right now:
[screenshot: CPU usage graph]


Nicely done! I have been struggling to get object detection to work in ZoneMinder. I’m using the dlandon Docker image too. Would you mind sharing your docker run config and your ZoneMinder config?

@patrikron, sure. The hardest part of dlandon’s stuff is dialing in zmeventnotification, because there are a lot of options. Also, having used it for months now, I’ve noticed that dlandon tends to update the thing periodically, and the image is designed to update itself on restart. Normally that wouldn’t be a problem, because I like stuff that keeps itself up to date; in this case, however, I’d like it to remain static until I’m ready to pull the trigger on an update, so my known-good working system keeps working. Unfortunately, since the update code is baked into the image, I have no control over it.

The last time I restarted the system, it pulled in an update I didn’t expect, and my cameras started to detect objects they never had before (like luggage, beds, etc.). I had explicitly removed those from the decision-making in my config files, but with the update the ini-file structure had changed enough that my old config no longer applied its values to the new expected inputs, so it fell back to the defaults for objects to detect, which are wide open. I had to go back in and re-tweak all the definitions so they aligned with the new expected inputs.

I’ve found that ZM takes a while to dial in, but once you get it properly configured, you’re good to go for a long time. We just need to get dlandon’s stuff on the same page. :slight_smile: But I ain’t complaining to him, ’cause he’s done great work here and I’m just leeching off it. :smiley:

Reference this link a lot: https://zmeventnotification.readthedocs.io/en/latest/guides/install.html#update-the-configuration-files - don’t worry about the “installation” stuff, but pay attention to the configuration instructions at the bottom, and READ ALL THE COMMENTS IN THE CONFIG FILES.

Here’s my zmeventnotification.ini; I removed all the comments (which you should read to tweak it properly). As you can see, I don’t use SSL on my ZM server because I don’t expose it to the internet, but I do have authentication enabled for my local intranet:

# Configuration file for zmeventnotification.pl 
[general]

secrets = /etc/zm/secrets.ini
base_data_path=/var/lib/zmeventnotification

use_escontrol_interface=no
escontrol_interface_file=/var/lib/zmeventnotification/misc/escontrol_interface.dat
escontrol_interface_password=!ESCONTROL_INTERFACE_PASSWORD

restart_interval = 0

[network]
port = 9001

[auth]
enable = yes
timeout = 20

[push]
use_api_push = no
api_push_script=/var/lib/zmeventnotification/bin/pushapi_pushover.py

[fcm]
enable = yes
use_fcmv1 = yes
replace_push_messages = no
token_file = {{base_data_path}}/push/tokens.txt
date_format = %I:%M %p, %d-%b

[mqtt]
enable = yes
server = !HASSIO_SERVER
username = !MQTT_USERNAME
password = !MQTT_PASSWORD
retain = no

[ssl]
enable = no
cert = !ES_CERT_FILE
key = !ES_KEY_FILE

[customize]
console_logs = yes
es_debug_level = 2
event_check_interval = 5
monitor_reload_interval = 300
read_alarm_cause = yes
tag_alarm_event_id = yes
use_custom_notification_sound = no
include_picture = yes
send_event_end_notification = no
picture_url = !ZMES_PICTURE_URL
picture_portal_username=!ZM_USER
picture_portal_password=!ZM_PASSWORD
use_hooks = yes

[hook]
event_start_hook = '{{base_data_path}}/bin/zm_event_start.sh'
event_end_hook = '{{base_data_path}}/bin/zm_event_end.sh'
event_start_notify_on_hook_success = all
event_start_notify_on_hook_fail = none
event_end_notify_on_hook_success = fcm,web,api
event_end_notify_on_hook_fail = none
event_end_notify_if_start_success = yes
use_hook_description = yes
keep_frame_match_type = yes
hook_pass_image_path = yes

Make sure you edit your secrets.ini file to point to the correct locations - each !TOKEN referenced in these files is resolved from secrets.ini. If you use SSL on your ZM, your config will look quite a bit different from mine. Pay attention to the use_hooks and console_logs options, and use the ZoneMinder log function to see what’s happening. That was invaluable to me, because it showed me the next thing that needed to be fixed in the config.

Here’s my objectconfig.ini file; my six cameras are listed at the bottom, along with the detection overrides I’ve defined for each. You can see in this file that I don’t have hardware ML enabled, but I have a plan to upgrade the graphics card in this box so I can crunch a bunch of video (for cameras and Plex), and I’m planning on getting an Nvidia card that has the ability to forward TPU data to a Docker container. I don’t use alpr or hog in my detection scheme.

# Configuration file for object detection

# NOTE: ALL parameters here can be overridden
# on a per monitor basis if you want. Just
# duplicate it inside the correct [monitor-<num>] section

[general]
version=1.1
secrets = /etc/zm/secrets.ini
cpu_max_processes=3
tpu_max_processes=1
gpu_max_processes=1
cpu_max_lock_wait=120
tpu_max_lock_wait=120
gpu_max_lock_wait=120
base_data_path=/var/lib/zmeventnotification
pyzm_overrides={'log_level_debug':2}
base_zm_conf_path=/etc/zm
portal=!ZM_PORTAL
user=!ZM_USER
password=!ZM_PASSWORD
api_portal=!ZM_API_PORTAL
allow_self_signed=yes
match_past_detections=yes
past_det_max_diff_area=5%
detection_sequence=object,face
detection_mode=all
frame_id=bestmatch
delete_after_analyze=yes
write_debug_image=no
write_image_to_zm=yes
show_percent=yes
poly_color=(255,255,255)
poly_thickness=2
ml_user=!ML_USER
ml_password=!ML_PASSWORD

[object]
object_detection_pattern=(person|car|motorbike|bus|truck|boat)
object_min_confidence=0.3
object_framework=opencv
object_processor=cpu # or gpu
object_config={{base_data_path}}/models/yolov4/yolov4.cfg
object_weights={{base_data_path}}/models/yolov4/yolov4.weights
object_labels={{base_data_path}}/models/yolov4/coco.names

[hog]
stride=(4,4)
padding=(8,8)
scale=1.05
mean_shift=-1

[face]
face_detection_framework=dlib
face_recognition_framework=dlib
known_images_path={{base_data_path}}/known_faces
save_unknown_faces=yes
save_unknown_faces_leeway_pixels=100
unknown_images_path={{base_data_path}}/unknown_faces
face_num_jitters=1
face_model=hog
face_upsample_times=1
face_recog_dist_threshold=0.6
face_train_model=hog

[alpr]
alpr_use_after_detection_only=yes
alpr_service=plate_recognizer
alpr_api_type=cloud
alpr_key=!PLATEREC_ALPR_KEY
platerec_stats=no
platerec_min_dscore=0.1
platerec_min_score=0.2

[animation]
create_animation=no
animation_types='gif'
fast_gif=no
animation_width=640
animation_retry_sleep=15
animation_max_tries=3

## Monitor specific settings
[monitor-1]
# back yard
object_detection_pattern=(person|cat|dog|bird)
detection_sequence=object
wait=3

[monitor-2]
# out the back
object_detection_pattern=(person|dog|bird|car|truck|motorbike|bus)
detection_sequence=object
wait=3

[monitor-3]
# garage
object_detection_pattern=(person)
wait=3

[monitor-4]
# front door
object_detection_pattern=(person|cat|dog|bird)
detection_sequence=face,object
face_model=cnn
wait=3

[monitor-5]
# east yard
object_detection_pattern=(person|cat|dog|bird)
wait=3

[monitor-6]
# west yard
object_detection_pattern=(person|cat|dog|bird)
wait=3

[ml]

I think that’s about all I had to configure. Good luck!

Keep in mind that a TPU is not a GPU. TPU stands for Tensor Processing Unit, which here refers to the Google Coral. I tried that with my Unraid + ZM-in-Docker setup and it worked OK-ish, but I wanted more power and the ability to run YOLO models. I couldn’t find a way to get YOLO models to run on the TPU; I tried various ways of converting and training, with no luck. The model that did work, MobileNet SSD, wasn’t giving me the accuracy I was used to with YOLO.

I got an eGPU with a GTX 1080 attached to it, and while the ML works perfectly, I’ve been struggling for the past few days to compile FFmpeg and the libav* packages with CUDA support so I can offload the FFmpeg processing to the GPU as well, but no luck. I’m kinda thinking about trying out Shinobi in the meanwhile, until I come up with other ideas on how to tackle FFmpeg.

Any problem in particular? (original post in Spanish)

I don’t speak that language, neither does the forum.

Is this still working out in your favor? I ditched ZoneMinder about 2 years ago, but I’m thinking I need to try it out again. Can you ‘set it and forget it’? Also, does the UI still nag the heck out of you if you’re not donating to them?

Yeah, it’s been working great for a long time - still working away in the background day after day. I almost never restart the container, so the update can’t occur. I just checked and it has been running for 4 months, lol. My install never nags me; I’m not sure what you’re talking about there.

I’m more of the tinkering type because I like stuff that works “just so”. If you don’t like tinkering, you may not like ZM. I tried to buy the TPU and the company took my money but didn’t send me the device; I think they’re all backordered, like graphics cards. :worried: