ZoneMinder NVR 1.34 Docker by dlandon includes object detection

How can I help you to help me? I’m running Home Assistant in Docker on my NUC, the same as my ZoneMinder. In the zmNinja desktop app I see events; should I see YOLO info there too? You talk about the ZM portal “for testing” - what do I have to put there for “final use”?

If you are getting alarms/videos in your zmNinja desktop app, then you should be getting them in your iOS app - if you have the app settings configured correctly.
There should be a token in the tokens.txt file. If not, something isn't correct in your app.
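To check quickly (this path assumes the default base_data_path; adjust if you changed it):

cat /var/lib/zmeventnotification/push/tokens.txt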

You should start with internal IPs within your LAN and get that working before you move on to external WAN networking.

I don’t mind receiving alarms in my mobile app; I just want to receive them in Node-RED via MQTT.
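For what it's worth, the event server can publish straight to a broker - roughly this in zmeventnotification.ini (option names from memory, so double-check the comments in your own file). Events then arrive on topics under zoneminder/&lt;monitor-id&gt;, which is what the flow later in this thread subscribes to:

[mqtt]
enable_mqtt = yes
mqtt_server = 192.168.1.202
mqtt_username = !MQTT_USER
mqtt_password = !MQTT_PASSWORD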

If I try to start the hook script manually I get:
./zm_event_start.sh: riga 24: /var/lib/zmeventnotification/bin/zm_detect.py: File o directory non esistente

That means file or directory not found

Did you follow all the instructions? Have you looked for the file yourself?
Did you change anything from the default, like this:
base_data_path=/var/lib/zmeventnotification
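A quick way to check from the host (the container name here is hypothetical - use whatever you named yours):

docker exec -it zoneminder ls -l /var/lib/zmeventnotification/bin/zm_detect.py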

It's tempting to configure everything at once. Go back to the default .ini files and just change the basics. Then enable things setting by setting until you break it - then you know what is wrong.

I tried this method, but I get an error in the logs.

Can't open memory map file /dev/shm/zm.mmap.1, probably not enough space free: Permission denied

Edit: I had quotes around my environment variables, which are not needed when using docker-compose. Working now.
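For anyone else who hits this, a minimal sketch of the relevant compose section (values are just examples; SHMEM is, I believe, the dlandon image's knob for sizing /dev/shm):

environment:
  - SHMEM=50%          # not SHMEM="50%" - the quotes end up inside the value
  - TZ=Europe/Rome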

Has anyone tried upgrading to ZoneMinder 1.35?

I wouldn’t yet, unless there is a docker for it?
I’d wait until dlandon has made the necessary changes for things like the DB etc.
I’m pretty sure the docker updates itself on reboot too?
What’s making you want to upgrade? Anything specific?

It would be nice to see whether the animations work in it.

However, on 1.34 I still have not gotten facial recognition to work. Plate and object detection work great.

Did you run the train script on the faces you uploaded?
I had a few issues initially, but then used my phone to take close-up, passport-style pictures at various angles.

Then you have to specify to use face in objectdetection.ini (globally or for each monitor).
That’s from memory, but I think those are the steps.
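Roughly like this (paths assume the defaults and the script name is from memory - check /var/lib/zmeventnotification/bin on your install). One sub-directory per person under known_faces, then run the train script as the web-server user:

/var/lib/zmeventnotification/known_faces/Alice/front.jpg
/var/lib/zmeventnotification/known_faces/Alice/left-profile.jpg
/var/lib/zmeventnotification/known_faces/Bob/front.jpg

sudo -u www-data /var/lib/zmeventnotification/bin/zm_train_faces.py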

Honestly, I have not found any really good documentation on it. I haven’t done anything with it because I am unsure where to start.

The docs aren’t great, to be honest.
Try this:

The rest I just used from the default objectdetection.ini and messed around until it worked. I also only had one camera that was in a good enough spot for seeing faces; the rest were angled down too much.

Sweet, right now I have my face in there, so we will see when I get home. I am wondering, though: if it doesn’t know the face, will it put it in the unknown folder or not? Because it hasn’t been, but I also didn’t have faces.dat before either.

Anyone seeing any issue with my config file here? I ask because it is not using CNN; it is using HOG.

Also, how do I get the training script to use CNN and not HOG?

# Configuration file for object detection

# NOTE: ALL parameters here can be overridden
# on a per monitor basis if you want. Just
# duplicate it inside the correct [monitor-<num>] section

[general]

# Please don't change this. It is used by the config upgrade script
version=1.0

# This is an optional file
# If specified, you can specify tokens with secret values in that file
# and only refer to the tokens in your main config file
secrets = /etc/zm/secrets.ini


# You can now limit the # of detection processes
# per target processor. If not specified, default is 1
# Other detection processes will wait to acquire lock

cpu_max_processes=3
tpu_max_processes=1
gpu_max_processes=1

# NEW: Time to wait in seconds per processor to be free, before
# erroring out. Default is 120 (2 mins)
cpu_max_lock_wait=120
tpu_max_lock_wait=120
gpu_max_lock_wait=120

# base data path for various files the ES+OD needs
# we support config variable substitution as well
base_data_path=/var/lib/zmeventnotification

# It seems certain systems don't follow regular
# ZM conventions on install paths. This may cause 
# problems with pyzm that the hooks use to do logging
# Look at https://pyzm.readthedocs.io/en/latest/source/pyzm.html#pyzm.ZMLog.init for parameters. Default is "{}"
# You can also use this to control logging irrespective of ZM log settings
#pyzm_overrides = {'conf_path':'/etc/zm'}
pyzm_overrides={'log_level_debug':2}

# base path where ZM config files reside
# this is needed by pyzm especially if your paths are different
# default is /etc/zm
base_zm_conf_path=/etc/zm

# portal/user/password are needed if you plan on using ZM's legacy
# auth mechanism to get images
portal=!ZM_PORTAL
user=!ZM_USER
password=!ZM_PASSWORD

# api portal is needed if you plan to use tokens to get images
# requires ZM 1.33 or above
api_portal=!ZM_API_PORTAL

allow_self_signed=yes

# If specified, will limit detected object size to this amount of
# the total image size passed. Can help avoid weird detections
# You can specify as % or px. Default is px
# Remember the image is resized to 416x416 internally, so it's better
# to keep it in %
#max_detection_size=95%


# if yes, the last detection will be stored per monitor,
# and new detections whose bounding boxes and labels match
# will be discarded. This may be helpful
# in getting rid of static objects that get detected
# due to some motion.
match_past_detections=yes

# The max difference in area between the objects if match_past_detections is on
# can also be specified in px like 300px. Default is 5%. Basically, bounding boxes of the same
# object can differ ever so slightly between detections. Contributor @neillbell put in this PR
# to calculate the difference in areas and based on his tests, 5% worked well. YMMV. Change it if needed.
past_det_max_diff_area=5%
models=face,yolo
# sequence of models to run for detection
detection_sequence=face,object,alpr
# if all, then we will loop through all models
# if first then the first success will break out
detection_mode=all

# If you need basic auth to access ZM 
basic_user=!ZM_USER
basic_password=!ZM_PASSWORD


# global settings for 
# bestmatch, alarm, snapshot OR a specific frame ID
frame_id=bestmatch

# Typically best match means it will first try alarm 
# and then snapshot. If you want it the reverse way, 
# make the order s,a. Don't get imaginative here -
# s,a is the only thing it understands. Everything else
# means alarm then snapshot.
#bestmatch_order = s,a

# this is to resize the image before analysis is done
resize=1200
# set to yes, if you want to remove images after analysis
# setting to yes is recommended to avoid filling up space
# keep to no while debugging/inspecting masks
# Note this does NOT delete debug images later
delete_after_analyze=yes

# If yes, will write an image called <filename>-bbox.jpg as well
# which contains the bounding boxes. This has NO relation to 
# write_image_to_zm 
# Typically, if you enable delete_after_analyze you may
# also want to set  write_debug_image to no. 
write_debug_image=no

# if yes, will write an image with bounding boxes
# this needs to be yes to be able to write a bounding box
# image to ZoneMinder that is visible from its console
write_image_to_zm=yes

# Adds percentage to detections
# hog/face shows 100% always
show_percent=yes

# color to be used to draw the polygons you specified
poly_color=(255,255,255)
# Make this 0 if you don't want to see polygons
poly_thickness=1

# If yes, will import zones automatically from monitors
#import_zm_zones=no

# If yes, will match object detections only in areas
# that ZM recorded motion. Note that the ES will only know
# the initial zones motion was triggered in before an alarm 
# was raised. If ZM adds more zones later in the course of the event,
# the ES will NOT know

#only_triggered_zm_zones=no


[remote]

# You can now run the machine learning code on a different server
# This frees up your ZM server for other things
# To do this, you need to set up https://github.com/pliablepixels/mlapi
# on your desired server and configure it with a user. See its instructions
# once set up, you can choose to do object/face recognition via that
# external server

# URL that will be used
#ml_gateway=http://192.168.1.21:5000/api/v1

# If you enable ml_gateway, and it is down
# you can set ml_fallback_local to yes
# if you want to instantiate local object detection
# on gateway failure. Default is no
#ml_fallback_local=yes

# API/password for remote gateway
#ml_user=!ML_USER
#ml_password=!ML_PASSWORD
#ml_fallback_local=no


[object]

# this is the global detection pattern used for all monitors.
# choose any set of classes from here https://github.com/pjreddie/darknet/blob/4a03d405982aa1e1e911eac42b0ffce29cc8c8ef/data/coco.names
# for everything, make it .*
object_detection_pattern=(person|car|motorbike|bus|truck|boat|bicycle|cat|dog|bird)
#object_detection_pattern=.*

object_min_confidence=0.3


# For Google Coral Edge TPU
#object_framework=coral_edgetpu
#object_processor=tpu
#object_weights={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
#object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names



# For YoloV3 full
object_framework=opencv
object_processor=cpu # or gpu
object_config={{base_data_path}}/models/yolov3/yolov3.cfg
object_weights={{base_data_path}}/models/yolov3/yolov3.weights
object_labels={{base_data_path}}/models/yolov3/coco.names

# FOR YoloV4. 
#object_framework=opencv
#object_processor=cpu
#object_config={{base_data_path}}/models/yolov4/yolov4.cfg
#object_weights={{base_data_path}}/models/yolov4/yolov4.weights
#object_labels={{base_data_path}}/models/yolov4/coco.names


# For tiny Yolo V3
#object_framework=opencv
#object_processor=cpu #or gpu
#object_config={{base_data_path}}/models/tinyyolov3/yolov3-tiny.cfg
#object_weights={{base_data_path}}/models/tinyyolov3/yolov3-tiny.weights
#object_labels={{base_data_path}}/models/tinyyolov3/coco.names

# For tiny Yolo V4
#object_framework=opencv
#object_processor=cpu # or gpu
#object_config={{base_data_path}}/models/tinyyolov4/yolov4-tiny.cfg
#object_weights={{base_data_path}}/models/tinyyolov4/yolov4-tiny.weights
#object_labels={{base_data_path}}/models/tinyyolov4/coco.names


# config params for HOG
[cnn]
stride=(4,4)
padding=(8,8)
scale=1.05
mean_shift=-1

[face]

# this is the global detection pattern used for all monitors.
face_detection_pattern=(Name1|Name2|Name3)


# As of today, only dlib can be used
# Coral TPU only supports face detection
# Maybe in future, we can do different frameworks
# for detection and recognition
face_detection_framework=dlib
face_recognition_framework=dlib


# this directory will be where you store known images on a per-directory basis
known_images_path={{base_data_path}}/known_faces

# if yes, then unknown faces will be stored and you can analyze them later
# and move to known_faces and retrain
save_unknown_faces=yes

# How many pixels to extend beyond the face for a better perspective
save_unknown_faces_leeway_pixels=50

# this directory is where zm_detect will store faces it could not identify
# (if save_unknown_faces is yes). You can then inspect this folder later, 
# and copy unknown faces to the right places in known_faces and retrain
unknown_images_path={{base_data_path}}/unknown_faces


# read https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems
# read https://github.com/ageitgey/face_recognition#automatically-find-all-the-faces-in-an-image
# and play around

# quick overview: 
# num_jitters is how many times to distort images 
# upsample_times is how many times to upsample input images (for small faces, for example)
# model can be hog or cnn. cnn may be more accurate, but I haven't found it to be 

face_num_jitters=1
face_model=cnn
face_upsample_times=1

# This is the maximum distance of the face under test to the closest matched
# face cluster. The larger this distance, the larger the chances of misclassification.
#
face_recog_dist_threshold=0.6
# When we are first training the face recognition model with known faces,
# by default we use hog because we assume you will supply well lit, front facing faces
# However, if you are planning to train with profile photos or hard to see faces, you
# may want to change this to cnn. Note that this increases training time, but training only
# happens once, unless you retrain again by removing the training model
face_train_model=cnn
#if a face doesn't match known names, we will detect it as 'unknown face'
# you can change that to something that suits your personality better ;-)
unknown_face_name=invader

[alpr]


# this is the global detection pattern used for all monitors.
#alpr_detection_pattern=(licenseplate1|licenseplate2|licenseplate3)

# keep this as yes. 'no' mode is not supported today
alpr_use_after_detection_only=yes


# plate_recognizer, open_alpr, open_alpr_cmdline
alpr_service=plate_recognizer

# Many of the ALPR providers offer both a cloud version
# and local SDK version. Sometimes local SDK format differs from
# the cloud instance. Set this to local or cloud. Default cloud
alpr_api_type=cloud

# If you want to host a local SDK https://app.platerecognizer.com/sdk/
#alpr_url=https://localhost:8080/alpr
# Plate Recognizer: replace with your API key
alpr_key=!PLATEREC_ALPR_KEY
# if yes, then it will log usage statistics of the ALPR service
platerec_stats=no
# If you want to specify regions. See http://docs.platerecognizer.com/#regions-supported
platerec_regions=['us','cn','kr']
# minimal confidence for actually detecting a plate
platerec_min_dscore=0.1
# minimal confidence for the translated text
platerec_min_score=0.2


# ----| If you are using openALPR web service |-----
#alpr_service=open_alpr
#alpr_key=!OPENALPR_ALPR_KEY

# For an explanation of params, see http://doc.openalpr.com/api/?api=cloudapi
#openalpr_recognize_vehicle=1
#openalpr_country=us
#openalpr_state=ca
# openalpr returns percents, but we convert to between 0 and 1
#openalpr_min_confidence=0.3


# ----| If you are using openALPR command line |-----

# Before you do any of this, make sure you have openALPR
# compiled and working properly as per http://doc.openalpr.com/compiling.html
# the alpr binary needs to be operational and capable of detecting plates

# Note this is not really very accurate unless you
# have a camera directly with a good view of the plates
# the cloud based API service is far more accurate

#openalpr_cmdline_binary=alpr

# Do an alpr -help to see options, plug them in here
# like say '-j -p ca -c US' etc.
# keep the -j because it's JSON

# Note that alpr_pattern is honored
# For the rest, just stuff them in the cmd line options

#openalpr_cmdline_params=-j
#openalpr_cmdline_min_confidence=0.3





# This section gives you an option to get brief animations 
# of the event, delivered as part of the push notification to mobile devices
# Animations are created only if an object is detected

[animation]
# Seems like GIF/MP4 animations only
# work in iOS. Too bad.

# NOTE: Animation ONLY works with ZM 1.35 master as of Mar 16, 2020
# You also require zmNinja 1.3.91 or above
# If you are not running that version, animation will not work
# Animation frames will be created, but they won't be pushed to your device

# If yes, object detection will attempt to create 
# a short GIF file around the object detection frame
# that can be sent via push notifications for instant playback
# Note this requires additional software support. Default:no
create_animation=no

# Format of animation burst
# valid options are "mp4", "gif", "mp4,gif"
# Note that gifs will be of a shorter duration
# as they take up much more disk space than mp4
# Note that if you use mp4, the thumbnail that shows 
# with push notifications may look transparent. My guess
# is this is related to how the video is being formed
# in ZM as it is a partial video when we process it

# Note that if you use mp4, you need to change the picture_url
# in zmeventnotification.ini to objdetect_mp4. When you use objdetect,
# a GIF file is checked and if not, the image is returned. MP4 is not
# returned, as they are not playable inside an HTML img tag

animation_types='gif'

# if animation_types is gif then we can generate a fast preview gif
# every second frame is skipped and the frame rate doubled
# to give a quick preview. Default (no)
fast_gif=no

# default width of animation image. Be cautious when you increase this
# most mobile platforms give a very brief amount of time (in seconds) 
# to download the image.
# Given your ZM instance will be serving the image, it will be slow anyway
# Making the total animation size bigger resulted in the notification not 
# getting an image at all (timed out)
animation_width=640

# animation_retry_sleep refers to how long to wait before trying to grab
# frame information if it failed. animation_max_tries defines how many times it 
# will try and retrieve frames before it gives up
animation_retry_sleep=15
animation_max_tries=3

## --- MONITOR SPECIFIC CHANGES FROM HERE ON 
## --- Every param above can be repeated inside
## --- monitor specific sections and they will override
## --- settings above

## Monitor specific settings
#
# - Format:  [monitor-<mid>]

# Examples:
# Let's assume your monitor ID is 999
[Monitor-1]
# my driveway
match_past_detections=yes
wait=5
object_detection_pattern=(person|car|motorbike|bus|truck|boat|bicycle|cat|dog|face)
detection_sequence=object,alpr
resize=no


#[Monitor-2]
# my kitchen
#match_past_detections=no
#wait=5
#object_detection_pattern=(person|car|motorbike|bus|truck|boat|bicycle|cat|dog)
#resize=no
#detection_sequence=object,alpr

#[monitor-3]
# my door
#match_past_detections=no
#wait=5
#object_detection_pattern=(person|car|motorbike|bus|truck|boat|bicycle|cat|dog|face)
#resize=no
#detection_sequence=object,alpr

[monitor-3]
#doorbell
detect_pattern=(person)
# try face, if it works, don't do yolo
detection_mode=first
models=face,yolo
frame_id=bestmatch
resize=600
face_model=cnn
wait=5
object_detection_pattern=(person|car|motorbike|bus|truck|boat|bicycle|cat|dog|face)

[monitor-2]
#doorbell
detect_pattern=(person)
# try face, if it works, don't do yolo
detection_mode=first
models=face,yolo
frame_id=bestmatch
resize=600
face_model=cnn
wait=5
object_detection_pattern=(person|face|car|motorbike|bus|truck|boat|bicycle|cat|dog|chair|diningtable|oven)
match_past_detections=yes

I can't see anything off - here are my settings:

[hog]
stride=(4,4)
padding=(8,8)
scale=1.05
mean_shift=-1
#doorpi
detect_pattern=(person)
#detect_pattern=.*
# try face, if it works, don't do yolo
detection_mode=first
models=face,yolo
frame_id=bestmatch
# try diff. sizes. In my case, 600 was enough
resize=600
# My doorbell camera needs more accurate face detection
# cnn did a much better job than HOG, but its _much_ slower
face_model=cnn
face_train_model=cnn
face_recog_dist_threshold=0.6
match_past_detections=no
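One more thing from memory - the training script follows the [face] keys in objectdetection.ini, and the saved encodings are cached, so after switching models you have to delete the model file and retrain (the faces.dat location below is my assumption; check where your install writes it):

face_model=cnn
face_train_model=cnn

rm /var/lib/zmeventnotification/known_faces/faces.dat
sudo -u www-data /var/lib/zmeventnotification/bin/zm_train_faces.py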

I also see you have detect_pattern=(face) in one of the monitors - I'm now past the edge of my tech knowledge - maybe check on the ZoneMinder forum. The team usually responds fairly quickly (a day or 2).

Thought I’d share my Node-RED flow.

ZoneMinder has a great feature to ignore previously detected items (and you can set the tolerance) - on all the other platforms I’ve tried, I had to do some calculations to see if the object count had gone up by n+1.
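For reference, that behaviour comes from these two settings in objectdetection.ini (both shown in the config earlier in the thread):

match_past_detections=yes
past_det_max_diff_area=5%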

Now I get an image and an alert: “1 Person and 2 Cars Detected at the Front”. Cool :)

Here is the flow:

[{"id":"d04919c2.045388","type":"comment","z":"bedbde5d.cc286","name":"Camera Motion","info":"","x":233.01303482055664,"y":483.0104236602783,"wires":[]},{"id":"343364d4.d1539c","type":"mqtt in","z":"bedbde5d.cc286","name":"","topic":"zoneminder/#","qos":"2","datatype":"auto","broker":"ea79b2b4.0fbd2","x":221.4274139404297,"y":577.5219955444336,"wires":[["6b20271f.493048"]]},{"id":"6b20271f.493048","type":"json","z":"bedbde5d.cc286","name":"","property":"payload","action":"","pretty":false,"x":419.4274139404297,"y":578.5220317840576,"wires":[["75d09088.cae7b"]]},{"id":"e7301570.5fc238","type":"template","z":"bedbde5d.cc286","name":"iOS","field":"payload","fieldType":"msg","format":"json","syntax":"mustache","template":"{ \"domain\": \"notify\",\n  \"service\": \"ios_device\",\n  \"data\": {\n    \"title\": \"Motion at the {{payload.monitor}}\",\n    \"message\": \"{{detection}}\",\n    \"data\": {\n        \"attachment\": \n               {\n           \"url\": \"https://sub.domain.com/zm/index.php?username=USER&password=PASS!&action=login&view=image&eid={{payload.eventid}}&fid=objdetect&width=600\",\n           \"content-type\": \"jpeg\",\n           \"hide-thumbnail\": false\n               }\n          }\n  }  \n}","output":"str","x":1179.7610549926758,"y":415.85533332824707,"wires":[["9d74f023.2b94"]]},{"id":"9d74f023.2b94","type":"api-call-service","z":"bedbde5d.cc286","name":"Notify","server":"b89bdbfd.f73988","version":1,"service_domain":"","service":"","entityId":"","data":"","dataType":"json","mergecontext":"","output_location":"payload","output_location_type":"msg","mustacheAltTags":false,"x":1513.7608375549316,"y":416.85535621643066,"wires":[[]]},{"id":"e48eee9.0a2481","type":"switch","z":"bedbde5d.cc286","name":"","property":"payload.monitor","propertyType":"msg","rules":[{"t":"eq","v":"Living Room","vt":"str"},{"t":"eq","v":"Front","vt":"str"},{"t":"eq","v":"Down","vt":"str"},{"t":"eq","v":"Back","vt":"str"},{"t":"eq","v":"Side","vt":"str"},{"t":"eq","v":"Doorbell","vt":"str"},{"t":"eq","v":"Driveway","vt":"str"}],"checkall":"true","repair":false,"outputs":7,"x":1336.364845275879,"y":582.5792560577393,"wires":[["2fcf66f8.1a4fea"],["6cfb8e32.c003c"],["e185fa47.56d5b8"],["e3fffc11.cf3c3"],["87dd998a.4ac3f8"],["d0752e20.5d889"],["21e64a8c.665426"]]},{"id":"5a22ad92.e3d2d4","type":"summariser","z":"bedbde5d.cc286","name":"","input":"payload.detection","inputType":"msg","output":"count","label":"","topic":"","outputs":1,"rules":[{"field":"label","op":"group","sep":","}],"x":823.773551940918,"y":576.8657741546631,"wires":[["21454c.09310ab4"]]},{"id":"2fcf66f8.1a4fea","type":"ha-entity","z":"bedbde5d.cc286","name":"livingroom detected","server":"b89bdbfd.f73988","version":1,"debugenabled":false,"outputs":1,"entityType":"sensor","config":[{"property":"name","value":"livingroom_detection"},{"property":"device_class","value":""},{"property":"icon","value":""},{"property":"unit_of_measurement","value":""}],"state":"total","stateType":"msg","attributes":[{"property":"car","value":"car","valueType":"msg"},{"property":"dog","value":"dog","valueType":"msg"},{"property":"person","value":"person","valueType":"msg"}],"resend":true,"outputLocation":"","outputLocationType":"none","inputOverride":"allow","x":1550.7736892700195,"y":466.86573219299316,"wires":[[]]},{"id":"6cfb8e32.c003c","type":"ha-entity","z":"bedbde5d.cc286","name":"front 
detected","server":"b89bdbfd.f73988","version":1,"debugenabled":false,"outputs":1,"entityType":"sensor","config":[{"property":"name","value":"front_detection"},{"property":"device_class","value":""},{"property":"icon","value":""},{"property":"unit_of_measurement","value":""}],"state":"total","stateType":"msg","attributes":[{"property":"car","value":"car","valueType":"msg"},{"property":"dog","value":"dog","valueType":"msg"},{"property":"person","value":"person","valueType":"msg"}],"resend":true,"outputLocation":"","outputLocationType":"none","inputOverride":"allow","x":1538.7736892700195,"y":529.8657321929932,"wires":[[]]},{"id":"87dd998a.4ac3f8","type":"ha-entity","z":"bedbde5d.cc286","name":"side detected","server":"b89bdbfd.f73988","version":1,"debugenabled":false,"outputs":1,"entityType":"sensor","config":[{"property":"name","value":"side_detection"},{"property":"device_class","value":""},{"property":"icon","value":""},{"property":"unit_of_measurement","value":""}],"state":"total","stateType":"msg","attributes":[{"property":"person","value":"person","valueType":"msg"},{"property":"car","value":"car","valueType":"msg"},{"property":"dog","value":"dog","valueType":"msg"}],"resend":true,"outputLocation":"","outputLocationType":"none","inputOverride":"allow","x":1527.7737350463867,"y":698.8657665252686,"wires":[[]]},{"id":"e3fffc11.cf3c3","type":"ha-entity","z":"bedbde5d.cc286","name":"back detected","server":"b89bdbfd.f73988","version":1,"debugenabled":false,"outputs":1,"entityType":"sensor","config":[{"property":"name","value":"back_detection"},{"property":"device_class","value":""},{"property":"icon","value":""},{"property":"unit_of_measurement","value":""}],"state":"total","stateType":"msg","attributes":[{"property":"person","value":"person","valueType":"msg"},{"property":"car","value":"car","valueType":"msg"},{"property":"dog","value":"dog","valueType":"msg"}],"resend":true,"outputLocation":"","outputLocationType":"none","inputOverride":"allow","x":1534.7736892700195,"y":640.8657321929932,"wires":[[]]},{"id":"21e64a8c.665426","type":"ha-entity","z":"bedbde5d.cc286","name":"driveway detected","server":"b89bdbfd.f73988","version":1,"debugenabled":false,"outputs":1,"entityType":"sensor","config":[{"property":"name","value":"driveway_detection"},{"property":"device_class","value":""},{"property":"icon","value":""},{"property":"unit_of_measurement","value":""}],"state":"total","stateType":"msg","attributes":[{"property":"person","value":"person","valueType":"msg"},{"property":"car","value":"car","valueType":"msg"},{"property":"dog","value":"dog","valueType":"msg"}],"resend":true,"outputLocation":"","outputLocationType":"none","inputOverride":"allow","x":1537.7738494873047,"y":815.8658485412598,"wires":[[]]},{"id":"e185fa47.56d5b8","type":"ha-entity","z":"bedbde5d.cc286","name":"down detected","server":"b89bdbfd.f73988","version":1,"debugenabled":false,"outputs":1,"entityType":"sensor","config":[{"property":"name","value":"down_detection"},{"property":"device_class","value":""},{"property":"icon","value":""},{"property":"unit_of_measurement","value":""}],"state":"total","stateType":"msg","attributes":[{"property":"person","value":"person","valueType":"msg"},{"property":"car","value":"car","valueType":"msg"},{"property":"dog","value":"dog","valueType":"msg"}],"resend":true,"outputLocation":"","outputLocationType":"none","inputOverride":"allow","x":1542.773696899414,"y":583.8657360076904,"wires":[[]]},{"id":"d0752e20.5d889","type":"ha-entity","z":"bedbde5d.cc286","name":"doorbell 
detected","server":"b89bdbfd.f73988","version":1,"debugenabled":false,"outputs":1,"entityType":"sensor","config":[{"property":"name","value":"doorbell_detection"},{"property":"device_class","value":""},{"property":"icon","value":""},{"property":"unit_of_measurement","value":""}],"state":"total","stateType":"msg","attributes":[{"property":"person","value":"person","valueType":"msg"},{"property":"car","value":"car","valueType":"msg"},{"property":"dog","value":"dog","valueType":"msg"}],"resend":true,"outputLocation":"","outputLocationType":"none","inputOverride":"allow","x":1544.7737121582031,"y":756.865795135498,"wires":[[]]},{"id":"75d09088.cae7b","type":"change","z":"bedbde5d.cc286","name":"rename Cameras","rules":[{"t":"change","p":"payload.monitor","pt":"msg","from":"1","fromt":"str","to":"Living Room","tot":"str"},{"t":"change","p":"payload.monitor","pt":"msg","from":"2","fromt":"str","to":"Down","tot":"str"},{"t":"change","p":"payload.monitor","pt":"msg","from":"3","fromt":"str","to":"Back","tot":"str"},{"t":"change","p":"payload.monitor","pt":"msg","from":"4","fromt":"str","to":"Side","tot":"str"},{"t":"change","p":"payload.monitor","pt":"msg","from":"5","fromt":"str","to":"Doorbell","tot":"str"},{"t":"change","p":"payload.monitor","pt":"msg","from":"6","fromt":"str","to":"Front","tot":"str"},{"t":"change","p":"payload.monitor","pt":"msg","from":"7","fromt":"str","to":"Driveway","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":610.3673629760742,"y":578.5584392547607,"wires":[["5a22ad92.e3d2d4"]]},{"id":"21454c.09310ab4","type":"function","z":"bedbde5d.cc286","name":"","func":"if (msg.count.person) {\n    \nif (msg.count.person === 1 ) {\n    msg.person = \"1 Person detected\"\n                     }\nif (msg.count.person >= 2 ) {\n    msg.person = msg.count.person + \" People detected\"\n                     }\n}\n\nif (msg.count.car) {\n    \nif (msg.count.car == 1 ) {\n    msg.car = \"1 Car detected\"\n                     }\nif (msg.count.car >= 2 ) {\n    msg.car = msg.count.car + \" Cars detected\"\n                     }\n}\n\nif ((msg.count.person) && (msg.count.car)) {\n    msg.detection = msg.person + \" and \" + msg.car                     \n    msg.total = msg.count.person + msg.count.car\n    \n} \n\nif ((msg.count.person) && (!msg.count.car)) {\n    msg.detection = msg.person \n    msg.total = msg.count.person\n} \n\nif ((!msg.count.person) && (msg.count.car)) {\n    msg.detection = msg.car   \n    msg.total = msg.count.car\n} \nreturn msg;\n\n","outputs":1,"noerr":0,"initialize":"","finalize":"","x":1009.7736511230469,"y":576.8657341003418,"wires":[["e48eee9.0a2481","e7301570.5fc238"]]},{"id":"ea79b2b4.0fbd2","type":"mqtt-broker","name":"","broker":"192.168.1.202","port":"1883","clientid":"","usetls":false,"compatmode":true,"keepalive":"60","cleansession":true,"birthTopic":"","birthQos":"0","birthPayload":"","closeTopic":"","closeQos":"0","closePayload":"","willTopic":"","willQos":"0","willPayload":""},{"id":"b89bdbfd.f73988","type":"server","name":"Home Assistant","legacy":false,"addon":false,"rejectUnauthorizedCerts":true,"ha_boolean":"y|yes|true|on|home|open","connectionDelay":true,"cacheJson":true}]

Hello,

Was anybody able to solve the problem with the reverse proxy? I am using Traefik v2 and having issues loading the page properly… even the streams always look for the /zm/ segment while Traefik handles it in the background…

traefik:

http:

  routers:
    zoneminder.example.com:
      priority: 1
      entryPoints:
        - websecure
      rule: Host(`cam.example.com`)
      service: zoneminder.example.com
      middlewares:
        - zoneminder.example.com+addPrefix
        #- zoneminder.example.com+replacePathRegex

  services:
    zoneminder.example.com:
      loadBalancer:
        passHostHeader: true
        servers:
          - url: 'http://192.168.255.12:50191'

  middlewares:
    zoneminder.example.com+addPrefix:
      addPrefix:
        prefix: /zm
    zoneminder.example.com+replacePathRegex:
      replacePathRegex:
        regex: "/zm/index(.+)"
        replacement: "/index$1"

Appreciated, Michal
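Not tested against your exact setup, but one way around the /zm/ issue is to stop rewriting and just route the prefix through unchanged - the links and stream URLs ZM generates under /zm/ then resolve naturally, and no addPrefix middleware is needed:

http:

  routers:
    zoneminder.example.com:
      entryPoints:
        - websecure
      rule: Host(`cam.example.com`) && PathPrefix(`/zm`)
      service: zoneminder.example.com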