And just for posterity, this is the URL for the event video: https://url/zm/index.php?username=blah&password=blah&action=login&view=view_video&eid=eventID#&fid=snapshot&width=600
I noticed the ini files I was using were from a previous version. I've made modifications to the defaults and re-created the ini files; hopefully this will help.
Ah yeah, I ended up starting again at 1.34 as I couldn't get everything working in 1.32.
FYI - the Eventserver has been updated to allow customization of the JSON as per this Github:
Ya, I got it working; it seems some of the models didn't download since I was just upgrading from the old docker image. Got it all sorted. I'll wait for the container to get updated with the new JSON string stuff.
Nice to see what you guys are doing. I have integrated ZoneMinder alerts with HA via AppDaemon. Working nicely; I am using it to send alert text messages with an image from the object detection provided by ZM-ES. Take a look at this discussion thread:
I am now working to enable GPU acceleration via OpenCV + YOLOv3. I am running on unraid using the dlandon/zoneminder docker container.
Anxious to hear if anyone is running with CUDA acceleration.
FWIW
I don’t use docker but the detection supports CUDA. I am using CUDA on my system. When I find time I’ll update the code to leverage the huge performance improvements in OpenCV 4.1.2
Please note there was an issue with my tokens.txt file when using ZMninja iOS app.
The tokens file only showed one monitor and, per the logs, alerts weren't being sent.
:1:0:ios:enabled
I edited this file to show:
1,2,3,4,5,6:0,0,0,0,0,0:ios:enabled
This enabled the remaining monitors to send alerts (in my case monitors 1,2,3,4,5 & 6).
HTH
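A hypothetical helper for this kind of edit, assuming the line layout is <monitor-list>:<interval-list>:<platform>:<state> as in the corrected line above (the real tokens.txt may carry additional leading fields, such as a device token, so treat this as a sketch):

```python
def enable_monitors(line, monitors):
    """Rewrite the monitor and interval fields of a tokens.txt-style line.

    Assumes the last four colon-separated fields are
    monitor-list : interval-list : platform : state.
    """
    fields = line.split(":")
    fields[-4] = ",".join(str(m) for m in monitors)   # monitor list
    fields[-3] = ",".join("0" for _ in monitors)      # one interval per monitor
    return ":".join(fields)

print(enable_monitors(":1:0:ios:enabled", [1, 2, 3, 4, 5, 6]))
# → :1,2,3,4,5,6:0,0,0,0,0,0:ios:enabled
```

Working on the last four fields keeps any leading fields (like the token itself) untouched.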
I've just released 5.4.7 of the ES, which leverages OpenCV 4.1.2's CUDA DNN backend support. While your performance improvements will vary depending on what CPU and GPU you have, here are some of my performance observations on my low(-ish)-end GPU:
For those looking to leverage CUDA with docker images, it looks pretty straightforward, at least theoretically. Like I said, I don't use docker.
All that being said, I am not sure if dlandon will directly add CUDA support to his docker image. If I recall, his primary audience is the unraid community, and I'm not sure that supports a GPU. You should, however, be able to modify his docker compose yourself as another option, and then run either local detection (packaged in his image) or remote detection (mlapi).
Thank you for the response… looking forward to the new release. I am responding to your edits…
Here are a few notes:
- UNRAID supports NVIDIA GPUs via the 'UNRAID Nvidia plugin'. There are several tutorials on how to leverage this plugin for the Plex docker (hardware decoding of video).
- dropped into the docker container for dlandon/zoneminder to check the python modules.
opencv-contrib-python (4.2.0.32)
opencv-python (4.2.0.32)
I find this a bit confusing since I have read it should be one or the other (not both).
- also did a quick check by running cv2.getBuildInformation() and see the following as unavailable:
cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev
Isn't OpenCV the critical Python module for providing GPU acceleration via NVIDIA CUDA?
I guess I will start getting up to speed on building docker images …
FWIW
- UNRAID supports NVIDIA GPUs via the 'UNRAID Nvidia plugin'. There are several tutorials on how to leverage this plugin for the Plex docker (hardware decoding of video).
Good to know. I’ve asked dlandon on slack if he plans to add support, but unless he has a GPU himself, it would be hard to test. I’ll leave it to him to decide. You may also want to ask him directly.
- dropped into the docker container for dlandon/zoneminder to check the python modules.
opencv-contrib-python (4.2.0.32)
opencv-python (4.2.0.32)
I find this a bit confusing since I have read it should be one or the other (not both).
Correct. Only opencv-contrib-python is required. This was actually an error on my side: my old instructions suggested both, which I have since changed. Also note that if you go the pip route, OpenCV will not have GPU support, so you really do need to build from source. See this.
- also did a quick check by running cv2.getBuildInformation() and see the following as unavailable:
cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev
Correct. The odd part is that even if you install opencv-contrib-python, it seems to leave out certain packages. This affects software like zmMagik but not what you are using. However, this entire point is moot with pip install, because GPU support is not built in anyway. Therefore, go with source compilation.
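To verify whether a given cv2 build actually includes CUDA, you can inspect the text returned by cv2.getBuildInformation(). A stdlib-only sketch using a sample of that output; with a real install you would pass cv2.getBuildInformation() instead of the sample string (the "NVIDIA CUDA:" label is what recent OpenCV builds print, but double-check it on your version):

```python
# Sample of what a pip wheel's build information typically reports.
SAMPLE_BUILD_INFO = """
  NVIDIA CUDA:                   NO
  cuDNN:                         NO
"""

def has_cuda(build_info):
    """Return True if the build-information text reports CUDA as enabled."""
    for line in build_info.splitlines():
        if "NVIDIA CUDA:" in line:
            return "YES" in line
    return False

print(has_cuda(SAMPLE_BUILD_INFO))  # → False
```

With a source-built OpenCV that enabled CUDA, the same line would read something like "NVIDIA CUDA: YES (ver ...)".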
Coming to docker: I’ve let dlandon know of these issues on slack. The advantage of the pip route is it is quite fast and doesn’t need the various dev packages for source build, but the negative is you won’t get GPU support this way. Building opencv from source takes a very long time (a function of your system specs though). I’m not familiar enough with docker to know if a source build can be catered to as an additional layer/host link/whatever, but maybe some of you who are very familiar with docker can work with dlandon directly and figure out the best path.
Yep, I read the pyimagesearch article you provide in your docs. Makes sense now that you have to build from scratch given the vast set of Nvidia cards + cuda versions. Wondering if adding an option to dlandon/zoneminder to pull from an alternate pypi server would simplify this? I am in the process now of creating an internal pypi server and building opencv python package. This internal pypi server can redirect for any packages not available locally.
I think you can do a pip install --no-binary <package> to force pip to compile against the currently installed libraries, no?
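For reference, a sketch of that command (untested here): --no-binary makes pip build the source distribution instead of installing a prebuilt wheel, but the package's own build scripts still choose the CMake flags.

```shell
# Force a source build of opencv-contrib-python instead of the wheel.
# Note: the sdist's bundled build configuration does not enable CUDA,
# so this alone will not give you GPU support.
pip install --no-binary opencv-contrib-python opencv-contrib-python
```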
Good idea; however, this Python package is so complex to configure at build time that they only support building it without CUDA.
Seems I may be blocked. I managed to work through building OpenCV for Python without the CUDA DNN library. However, when I try to build for my Quadro M4000:
CMake Error at modules/dnn/CMakeLists.txt:99 (message):
CUDA backend for DNN module requires CC 5.3 or higher. Please remove
unsupported architectures from CUDA_ARCH_BIN option.
This error is because OpenCV's CUDA DNN backend does not support older, less capable GPUs: the Quadro M4000 is compute capability 5.2, below the required 5.3.
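An illustrative check of that requirement; the compute-capability values below are NVIDIA's published figures for these cards, but verify your own card against NVIDIA's CUDA GPUs table:

```python
# OpenCV's CUDA DNN backend requires compute capability (CC) >= 5.3.
DNN_MIN_CC = 5.3

# A few published CC values (illustrative, not exhaustive).
GPU_CC = {
    "Quadro M4000": 5.2,   # Maxwell
    "Jetson Nano": 5.3,    # Maxwell (TX1 core)
    "GTX 1080": 6.1,       # Pascal
    "Tesla T4": 7.5,       # Turing
}

def supports_cuda_dnn(gpu):
    """Return True if the named GPU meets the CUDA DNN backend minimum."""
    return GPU_CC[gpu] >= DNN_MIN_CC

print(supports_cuda_dnn("Quadro M4000"))  # → False, hence the CMake error
```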
I made some progress building a docker image that builds the needed .so file for cv2 module.
FWIW
I have everything set up; notifications and MQTT work, just no object detection. I tried to run the script ./zm_detect.py and have also installed opencv-contrib-python. How do I enable object detection?
If I run
sudo bash zm_event_start.sh
usage: zm_detect.py [-h] -c CONFIG -e EVENTID [-p EVENTPATH] [-m MONITORID]
[-v] [-f FILE]
zm_detect.py: error: argument -m/--monitorid: expected one argument
and
python3 zm_detect.py -e 360
usage: zm_detect.py [-h] -c CONFIG -e EVENTID [-p EVENTPATH] [-m MONITORID]
[-v] [-f FILE]
zm_detect.py: error: the following arguments are required: -c/--config
and in the logs it seems object detection is starting??
FORK:vhod (1), eid:422 Invoking hook on event start:/mnt/Zoneminder/hook/zm_event_start.sh 422 1 "vhod" "Forced Web" "/var/cache/zoneminder/events/1/2020-02-17/422"
I don't find any hook_helpers folder:
/mnt/Zoneminder/hook$ ./zm_detect.py -m 1 -e 425 -c /etc/zm
Traceback (most recent call last):
File "./zm_detect.py", line 135, in <module>
log.init(process_name='zmesdetect_'+'m'+args['monitorid'])
File "/usr/local/lib/python3.7/dist-packages/zmes_hook_helpers/log.py", line 35, in init
g.logger = wrapperLogger(name=process_name, dump_console=dump_console)
File "/usr/local/lib/python3.7/dist-packages/zmes_hook_helpers/log.py", line 10, in __init__
zmlog.init(name=name)
File "/usr/local/lib/python3.7/dist-packages/pyzm/ZMLog.py", line 131, in init
with open(f) as s:
NotADirectoryError: [Errno 20] Not a directory: '/etc/zm/zm.conf'
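For what it's worth, Errno 20 (ENOTDIR) means a path component that should be a directory is actually a regular file; in this traceback, /etc/zm itself is likely a file rather than a directory in your setup. A minimal stdlib reproduction:

```python
import os
import tempfile

# Errno 20 (ENOTDIR): a component of the path that must be a directory
# is a regular file. Here "zm" stands in for /etc/zm, created as a FILE.
with tempfile.TemporaryDirectory() as d:
    fake_etc_zm = os.path.join(d, "zm")
    open(fake_etc_zm, "w").close()          # a file, not a directory
    err = None
    try:
        open(os.path.join(fake_etc_zm, "zm.conf"))
    except NotADirectoryError as exc:
        err = exc.errno
    print(err)  # → 20
```

Also note the usage output earlier shows -c/--config is required; it should point at the detection config file (objectconfig.ini is a common name, but that is an assumption, so check your install) rather than the /etc/zm directory.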
I'm trying to follow your guide, but I can't find the keys folder with the ServerName file and the .key and .crt files. Where are they?
Did you map the folders as listed in the guide?
If so, inside the config folder is the keys folder.
Forgive me, but I'm a total Linux noob. I've followed your guide step by step, so I've run docker with your commands. I suppose that the folder mapping is the "-v" option, isn't it?
Yep, it's a way to keep a persistent folder on your host that is surfaced into the docker container. Useful for accessing those files and changing them (you may already be doing this with Home Assistant if you use docker).
the command is:
-v <local_path>:<docker_path>
So assuming your user is john, make sure you are in your home directory:
cd ~
Create the docker folder if it doesn't already exist:
mkdir docker
Create the Zoneminder folder in the docker folder:
cd docker
mkdir Zoneminder
so now you have a folder /home/john/docker/Zoneminder
this would be mapped in the docker command:
-v "/home/john/docker/Zoneminder":"/config":rw
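Putting the mapping together, a hypothetical docker run sketch; the image name comes from earlier in this thread, but any other volumes, ports, and environment variables depend on the guide you are following:

```shell
# Only the /config mapping is taken from the steps above; everything
# else your guide specifies (ports, timezone, etc.) must be added too.
docker run -d --name zoneminder \
  -v "/home/john/docker/Zoneminder":"/config":rw \
  dlandon/zoneminder
```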