cool! I am busy trying the docker option. I used to use iSpy many years ago, but found that when you start adding >5 cameras it used to start to wig out a bit… Can you comment on camera numbers and performance at all?
it snuck in in .110
I tried AgentDVR on an Intel i7 (3.2GHz), 16GB system and it ran fairly well with my 13 cameras. Granted, I was only consuming the substreams (640x480@15FPS) instead of the 5MP streams.
I am still working on setting it up as a NVR, but other things have taken precedence.
What was the CPU running at? Were these H264/H265 cameras? Was it a dedicated box?
CPU is typically right around 25%-35% with substreams. It’s on a dedicated(ish) box and the cameras are Reolink 410/520 PoE cameras wired into a 16-port PoE switch. The server also runs PerconaDB and InfluxDB.
This is it running right now:
Nicely done! I have been struggling to get object detection to work in ZoneMinder. I’m using dlandon’s docker image too. Would you mind sharing your docker run config and your ZoneMinder config?
@patrikron, sure. The hardest part of dlandon’s stuff is dialing in zmeventnotification because there are a lot of options. Also, having used it for months now, I’ve noticed that dlandon updates it periodically and the image is designed to update itself on restart. Normally that wouldn’t be a problem, because I like stuff that keeps itself up-to-date; in this case, though, I’d like it to stay static until I’m ready to pull the trigger on an update, so my known-good working system keeps working. Unfortunately, since the update code is baked into the image, I have no control over it.
The last time I restarted the system, it pulled in an update I didn’t expect and my cameras started to detect objects they never had before (like luggage, beds, etc.). I had explicitly removed those from the decision-making in my config files, but with the update the ini-file structure had changed enough that my old config no longer applied its values to the new expected inputs, so it fell back to the defaults for objects to detect, which is wide open. I had to go back in and re-tweak all the definitions so they aligned with the new expected inputs.
I’ve found that ZM takes a while to dial in, but once you get it properly configured, you’re good to go for a long time. We just need to get dlandon’s stuff on the same page. But I ain’t complaining to him ’cause he’s done great work here and I’m just leeching off it.
Reference this link a lot: https://zmeventnotification.readthedocs.io/en/latest/guides/install.html#update-the-configuration-files. Don’t worry about the “installation” stuff, but pay attention to the configuration instructions at the bottom and READ ALL THE COMMENTS IN THE CONFIG FILES.
Here’s my zmeventnotification.ini; I removed all the comments (which you should read to tweak it properly). As you can see, I don’t use SSL on my ZM server because I don’t expose it to the internet, but I do have authentication enabled for my local intranet:
# Configuration file for zmeventnotification.pl
[general]
secrets = /etc/zm/secrets.ini
base_data_path=/var/lib/zmeventnotification
use_escontrol_interface=no
escontrol_interface_file=/var/lib/zmeventnotification/misc/escontrol_interface.dat
escontrol_interface_password=!ESCONTROL_INTERFACE_PASSWORD
restart_interval = 0
[network]
port = 9001
[auth]
enable = yes
timeout = 20
[push]
use_api_push = no
api_push_script=/var/lib/zmeventnotification/bin/pushapi_pushover.py
[fcm]
enable = yes
use_fcmv1 = yes
replace_push_messages = no
token_file = {{base_data_path}}/push/tokens.txt
date_format = %I:%M %p, %d-%b
[mqtt]
enable = yes
server = !HASSIO_SERVER
username = !MQTT_USERNAME
password = !MQTT_PASSWORD
retain = no
[ssl]
enable = no
cert = !ES_CERT_FILE
key = !ES_KEY_FILE
[customize]
console_logs = yes
es_debug_level = 2
event_check_interval = 5
monitor_reload_interval = 300
read_alarm_cause = yes
tag_alarm_event_id = yes
use_custom_notification_sound = no
include_picture = yes
send_event_end_notification = no
picture_url = !ZMES_PICTURE_URL
picture_portal_username=!ZM_USER
picture_portal_password=!ZM_PASSWORD
use_hooks = yes
[hook]
event_start_hook = '{{base_data_path}}/bin/zm_event_start.sh'
event_end_hook = '{{base_data_path}}/bin/zm_event_end.sh'
event_start_notify_on_hook_success = all
event_start_notify_on_hook_fail = none
event_end_notify_on_hook_success = fcm,web,api
event_end_notify_on_hook_fail = none
event_end_notify_if_start_success = yes
use_hook_description = yes
keep_frame_match_type = yes
hook_pass_image_path = yes
Make sure you edit your secrets.ini file to point to the correct locations. If you use SSL on your ZM, your config will look quite a bit different than mine. Pay attention to the use_hooks and console_logs options and use the ZoneMinder log function to see what’s happening. This was invaluable to me because I was able to see the next thing that needed to be fixed in the config that way.
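For reference, since the configs pull their !TOKEN values from it, here’s a minimal sketch of what a secrets.ini can look like. All the values below are placeholders I made up for illustration, not my real settings; match the token names to the ones your configs actually reference:

```ini
[secrets]
ZM_PORTAL=http://localhost/zm
ZM_API_PORTAL=http://localhost/zm/api
ZM_USER=admin
ZM_PASSWORD=changeme
ZMES_PICTURE_URL=http://localhost/zm/index.php?view=image&eid=EVENTID&fid=objdetect
HASSIO_SERVER=192.168.1.10
MQTT_USERNAME=mqtt_user
MQTT_PASSWORD=changeme
ML_USER=admin
ML_PASSWORD=changeme
```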
Here’s my objectconfig.ini file; my six cameras are listed at the bottom, along with the types of detection overrides I’ve defined. You can see in this file that I don’t have hardware ML enabled, but I have a plan to upgrade the graphics card in this box so I can crunch a bunch of video (for cameras and Plex), and I’m planning on getting an nvidia card that has the ability to forward TPU data to a docker. I use neither alpr nor hog in my detection scheme.
# Configuration file for object detection
# NOTE: ALL parameters here can be overridden
# on a per monitor basis if you want. Just
# duplicate it inside the correct [monitor-<num>] section
[general]
version=1.1
secrets = /etc/zm/secrets.ini
cpu_max_processes=3
tpu_max_processes=1
gpu_max_processes=1
cpu_max_lock_wait=120
tpu_max_lock_wait=120
gpu_max_lock_wait=120
base_data_path=/var/lib/zmeventnotification
pyzm_overrides={'log_level_debug':2}
base_zm_conf_path=/etc/zm
portal=!ZM_PORTAL
user=!ZM_USER
password=!ZM_PASSWORD
api_portal=!ZM_API_PORTAL
allow_self_signed=yes
match_past_detections=yes
past_det_max_diff_area=5%
detection_sequence=object,face
detection_mode=all
frame_id=bestmatch
delete_after_analyze=yes
write_debug_image=no
write_image_to_zm=yes
show_percent=yes
poly_color=(255,255,255)
poly_thickness=2
ml_user=!ML_USER
ml_password=!ML_PASSWORD
[object]
object_detection_pattern=(person|car|motorbike|bus|truck|boat)
object_min_confidence=0.3
object_framework=opencv
object_processor=cpu # or gpu
object_config={{base_data_path}}/models/yolov4/yolov4.cfg
object_weights={{base_data_path}}/models/yolov4/yolov4.weights
object_labels={{base_data_path}}/models/yolov4/coco.names
[hog]
stride=(4,4)
padding=(8,8)
scale=1.05
mean_shift=-1
[face]
face_detection_framework=dlib
face_recognition_framework=dlib
known_images_path={{base_data_path}}/known_faces
save_unknown_faces=yes
save_unknown_faces_leeway_pixels=100
unknown_images_path={{base_data_path}}/unknown_faces
face_num_jitters=1
face_model=hog
face_upsample_times=1
face_recog_dist_threshold=0.6
face_train_model=hog
[alpr]
alpr_use_after_detection_only=yes
alpr_service=plate_recognizer
alpr_api_type=cloud
alpr_key=!PLATEREC_ALPR_KEY
platerec_stats=no
platerec_min_dscore=0.1
platerec_min_score=0.2
[animation]
create_animation=no
animation_types='gif'
fast_gif=no
animation_width=640
animation_retry_sleep=15
animation_max_tries=3
## Monitor specific settings
[monitor-1]
# back yard
object_detection_pattern=(person|cat|dog|bird)
detection_sequence=object
wait=3
[monitor-2]
# out the back
object_detection_pattern=(person|dog|bird|car|truck|motorbike|bus)
detection_sequence=object
wait=3
[monitor-3]
# garage
object_detection_pattern=(person)
wait=3
[monitor-4]
# front door
object_detection_pattern=(person|cat|dog|bird)
detection_sequence=face,object
face_model=cnn
wait=3
[monitor-5]
# east yard
object_detection_pattern=(person|cat|dog|bird)
wait=3
[monitor-6]
# west yard
object_detection_pattern=(person|cat|dog|bird)
wait=3
[ml]
I think that’s about all I had to configure. Good luck!
Keep in mind that TPU is not GPU. TPU is Tensor Processing Unit, which refers to the Google Coral. I tried that with my Unraid + ZM in docker setup; it worked ok-ish, but I wanted more power plus the ability to run YOLO models. I couldn’t find a way to get YOLO models to run on the TPU; I tried various ways of converting and training, no luck. The model that did work, MobileNet SSD, wasn’t giving me the accuracy I was used to with YOLO.
I got an eGPU with a GTX 1080 attached to it, and while the ML works perfectly, I’ve been struggling for the past few days to compile FFmpeg + libav* packages with CUDA support so I can offload the FFmpeg processing to the GPU as well, but no luck. I’m thinking about trying out Shinobi in the meanwhile, until I come up with other ideas on how to tackle FFmpeg.
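On the FFmpeg front, in case it saves someone a few days: the build generally needs NVIDIA’s nv-codec-headers (ffnvcodec) installed first, then a configure line roughly like the one below. The CUDA paths are assumptions for a default install, so treat this as a sketch rather than a recipe:

```shell
# Sketch of a CUDA-enabled FFmpeg build; assumes nv-codec-headers
# (ffnvcodec) is installed and CUDA lives in /usr/local/cuda.
./configure \
  --enable-nonfree \
  --enable-cuda-nvcc \
  --enable-nvenc \
  --enable-cuvid \
  --extra-cflags=-I/usr/local/cuda/include \
  --extra-ldflags=-L/usr/local/cuda/lib64
make -j"$(nproc)"
```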
algun problema en concreto? (Spanish: “any specific problem?”)
I don’t speak that language, neither does the forum.
Is this still working out in your favor? I ditched ZoneMinder about 2 years ago, but I’m thinking I need to try it out again. Can you ‘set it and forget it’? Also, does the UI still nag the heck out of you if you’re not donating to them?
Yeah, it’s been working great for a long time… still working away in the background day after day. I almost never restart the container, so the update can’t occur. I just checked and it has been running for 4 months, lol. My install never nags me; I’m not sure what you’re talking about there.
I’m more of the tinkering type because I like stuff that works “just so”. If you don’t like tinkering, you may not like ZM. I tried to buy the TPU and the company took my money but didn’t send me the device. I think they’re all backordered like graphics cards.
When logging into ZoneMinder to view the streams, it puts a little “pay me” popup somewhere on the screen.
I don’t mind tinkering, I only removed it because it was hogging resources and it increased the temperature of my NUC by 30ºF. The resource hogging was tensorflow in HA, so I’m wondering if the object detection in this container is better.
Weird, I’ve never seen that. Maybe it’s disabled in the version I’m using.
ZM can be a resource hog if you don’t knock down the camera feed quality/rate and/or limit the number of feeds you’re crunching. My server is serious overkill for what I’m doing with it but it works very well.
It’s an E5-2620 (6-core) on a Supermicro X9SRA, and I’ve got 128GB of ECC RAM, so all my cameras have plenty of shared memory to use for analysis. It’s usually at 40% CPU just crunching the ZM data, which is a load of about 4. I also run Plex from it, as it doubles as my media server. The headroom gets eaten up by transcoding when Plex kicks in. This is why I need both a TPU and a GPU for this beast.
I’m not sure I’d use a NUC to handle more than a couple of cameras… being in such a small case means temps are bound to climb.
What’s the rough cost for a server like that?
After about 5 years I ditched ZM due to the recent changes in ZM. My dedicated ProLiant dual-core Xeon was simply maxing out… I now run Agent DVR… much happier.
Well, when I bought them in 2012 it was about $750 for both the CPU and the motherboard (I just looked up my purchase info from Newegg). The memory was a gift from a good friend. I put about 10TB of spinning disks in it for media and whatnot (for MythTV, if you remember that software; I used it from 2004).
It was all installed in a very nice HTPC case out in the living room under the TV. I recently relocated it to my basement since Plex allows you to stream video so easily. With the disks and case it is probably ~$1,500 investment. At this point, I feel like I’ve got my money’s worth.
I consider myself an original computer nerd so I do most of this type of stuff for fun…
Docker installation? I moved from Docker to the official installation method on Debian; the temperature decreased by about 15°C, and the fan noise is gone.
I also reduced the CPU use with ZMTrigger: https://wiki.zoneminder.com/ZMTrigger. A motion sensor triggers a Telnet switch in HA, which triggers recording and object detection in ZoneMinder, so ZoneMinder doesn’t do motion detection all the time.
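For anyone wiring up the same trick: per the ZM wiki, zmtrigger.pl listens on a plain TCP socket (port 6802 by default) and accepts pipe-delimited messages of the form id|action|score|cause|text|showtext. Here’s a minimal sketch in Python; the host name and monitor id are assumptions for illustration:

```python
import socket

def build_trigger(monitor_id, action, score, cause, text="", showtext=""):
    # ZMTrigger message format: id|action|score|cause|text|showtext
    # e.g. action "on+20" forces an alarm on the monitor for 20 seconds
    return f"{monitor_id}|{action}|{score}|{cause}|{text}|{showtext}"

def send_trigger(host, message, port=6802):
    # Fire the message over ZMTrigger's TCP socket
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((message + "\n").encode())

msg = build_trigger(1, "on+20", 255, "External Motion")
print(msg)  # 1|on+20|255|External Motion||
# send_trigger("zm-host.local", msg)  # uncomment with your ZM server's host
```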
Interesting, yes. I was using the docker install. I’ll have to peek at the official installation. I tried to avoid doing anything on the OS but maybe I’ll have to go that route.
Well, dlandon broke his last update in January 2021; I tried to update to it this weekend and it broke my ZM installation. He’s not releasing any more updates (it’s deprecated) and it looks like he’s trying to monetize his work going forward, which is fair. His additions to ZM were pretty amazing, enabling object detection with TPU/GPU support.
Because this blows a hole in my security system I’m taking this opportunity to try moving to a new platform instead of trying to stand up another new instance of ZM. I’ve spent the morning evaluating Shinobi. It looks pretty powerful, I especially like the flexible detection schemes but it seems that the developer is very interested in subscription-based software. He built in a nag screen that pops up every week if you don’t pay him regularly, so I’m going to avoid it.
I’m now evaluating motioneye and the support for HA is better than the other two but the motion detection scheme seems pretty weak (or maybe I’m just not used to it yet). It does have a nice underlying bonus when compared to ZM and Shinobi.
ZM is a collection of Perl scripts and wrappers that all execute at the same time doing their various tasks. Shinobi is similar; it has a process for each camera. motionEye, on the other hand, has a single executable that threads out all the processing, so I see one process that consumes around 310% CPU on my server. That’s way nicer than having 8-15 processes that each consume 20%-40% CPU. By the way, in terms of CPU usage they’re all surprisingly comparable for 7 cameras.
I now have a Google Coral TPU and 30-series GPU on the way so I can enable tensorflow and NVENC. I’m very excited to see how those additions impact the CPU load for both Plex and my camera security suite.