Local realtime person detection for RTSP cameras

Hi, this project was recommended to me by @robmarkcole; however, I'm having problems running the Docker image. I keep getting this error:

Traceback (most recent call last):
 File "detect_objects.py", line 25, in <module>
   with open('/config/config.yml') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/config/config.yml'

New to Docker, so some help would be appreciated. Thank you.

That error means your config file was not found. Can you post your docker command or compose file?

Thanks for replying.
This is the last command I used:

sudo docker run --rm --privileged --shm-size=1024m -v gpus=all -v /opt/frigate:/config:ro -v /etc/localtime:/etc/localtime:ro -p 5000:5000 blakeblackshear/frigate:0.5.1-rc2

I'm assuming you mean my Dockerfile, which is:

FROM ubuntu:18.04
LABEL maintainer "[email protected]"

ENV DEBIAN_FRONTEND=noninteractive
# Install packages for apt repo
RUN apt -qq update && apt -qq install --no-install-recommends -y \
    software-properties-common \
    # apt-transport-https ca-certificates \
    build-essential \
    gnupg wget unzip tzdata \
    # libcap-dev \
    && add-apt-repository ppa:deadsnakes/ppa -y \
    && apt -qq install --no-install-recommends -y \
        python3.7 \
        python3.7-dev \
        python3-pip \
        ffmpeg \
        # VAAPI drivers for Intel hardware accel
        libva-drm2 libva2 i965-va-driver vainfo \
    && python3.7 -m pip install -U wheel setuptools \
    && python3.7 -m pip install -U \
        opencv-python-headless \
        # python-prctl \
        numpy \
        imutils \
        scipy \
    && python3.7 -m pip install -U \
        Flask \
        paho-mqtt \
        PyYAML \
        matplotlib \
        pyarrow \
    && echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" > /etc/apt/sources.list.d/coral-edgetpu.list \
    && wget -q -O - https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
    && apt -qq update \
    && echo "libedgetpu1-max libedgetpu/accepted-eula boolean true" | debconf-set-selections \
    && apt -qq install --no-install-recommends -y \
        libedgetpu1-max \
    ## Tensorflow lite (python 3.7 only)
    && wget -q https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_x86_64.whl \
    && python3.7 -m pip install tflite_runtime-2.1.0.post1-cp37-cp37m-linux_x86_64.whl \
    && rm tflite_runtime-2.1.0.post1-cp37-cp37m-linux_x86_64.whl \
    && rm -rf /var/lib/apt/lists/* \
    && (apt-get autoremove -y; apt-get autoclean -y)

# get model and labels
RUN wget -q https://github.com/google-coral/edgetpu/raw/master/test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite -O /edgetpu_model.tflite --trust-server-names
RUN wget -q https://dl.google.com/coral/canned_models/coco_labels.txt -O /labelmap.txt --trust-server-names
RUN wget -q https://github.com/google-coral/edgetpu/raw/master/test_data/mobilenet_ssd_v2_coco_quant_postprocess.tflite -O /cpu_model.tflite 


WORKDIR /opt/frigate/
ADD frigate frigate/
COPY detect_objects.py .
COPY benchmark.py .

CMD ["python3.7", "-u", "detect_objects.py"]

config.yml:

web_port: 5000

mqtt:
  host: mqtt.server.com
  topic_prefix: frigate
  # client_id: frigate # Optional -- set to override default client id of 'frigate' if running multiple instances
  # user: username # Optional
  #################
  ## Environment variables that begin with 'FRIGATE_' may be referenced in {}.
  ##   password: '{FRIGATE_MQTT_PASSWORD}'
  #################
  # password: password # Optional

#################
# Default ffmpeg args. Optional and can be overwritten per camera.
# Should work with most RTSP cameras that send h264 video
# Built from the properties below with:
# "ffmpeg" + global_args + input_args + "-i" + input + output_args
#################
# ffmpeg:
#   global_args:
#     - -hide_banner
#     - -loglevel
#     - panic
#   hwaccel_args: []
#   input_args:
#     - -avoid_negative_ts
#     - make_zero
#     - -fflags
#     - nobuffer
#     - -flags
#     - low_delay
#     - -strict
#     - experimental
#     - -fflags
#     - +genpts+discardcorrupt
#     - -vsync
#     - drop
#     - -rtsp_transport
#     - tcp
#     - -stimeout
#     - '5000000'
#     - -use_wallclock_as_timestamps
#     - '1'
#   output_args:
#     - -f
#     - rawvideo
#     - -pix_fmt
#     - rgb24
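The assembly rule described above ("ffmpeg" + global_args + input_args + "-i" + input + output_args) can be sketched in Python. This is an illustration, not Frigate's actual code; the helper name is made up, and slotting hwaccel_args between the global and input args is my assumption based on the config layout:

```python
def build_ffmpeg_cmd(global_args, hwaccel_args, input_args, input_url, output_args):
    """Assemble an ffmpeg argument list from the config sections above.
    Hypothetical helper for illustration -- not Frigate's internal function."""
    return (["ffmpeg"] + global_args + hwaccel_args + input_args
            + ["-i", input_url] + output_args)

# Using the default values listed above:
cmd = build_ffmpeg_cmd(
    ["-hide_banner", "-loglevel", "panic"],
    [],
    ["-rtsp_transport", "tcp"],
    "rtsp://example/stream",
    ["-f", "rawvideo", "-pix_fmt", "rgb24"],
)
print(cmd[0], cmd[-1])  # ffmpeg rgb24
```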

####################
# Global object configuration. Applies to all cameras
# unless overridden at the camera levels.
# Keys must be valid labels. By default, the model uses coco (https://dl.google.com/coral/canned_models/coco_labels.txt).
# All labels from the model are reported over MQTT. These values are used to filter out false positives.
# min_area (optional): minimum width*height of the bounding box for the detected person
# max_area (optional): maximum width*height of the bounding box for the detected person
# threshold (optional): The minimum decimal percentage (50% hit = 0.5) for the confidence from tensorflow
####################
objects:
  track:
    - person
    - car
    - truck
  filters:
    person:
      min_area: 5000
      max_area: 100000
      threshold: 0.5
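Conceptually, the three filter keys combine like the sketch below (an illustration of the documented behavior, not Frigate's actual code; defaults match the values above):

```python
def passes_filters(width, height, score,
                   min_area=5000, max_area=100000, threshold=0.5):
    """Keep a detection only if its bounding-box area lies within
    [min_area, max_area] and its confidence meets the threshold."""
    area = width * height
    return min_area <= area <= max_area and score >= threshold

# A 100x100 person box at 80% confidence is kept:
print(passes_filters(100, 100, 0.8))  # True
# A tiny 20x20 box is rejected as a likely false positive:
print(passes_filters(20, 20, 0.8))    # False
```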

cameras:
  back:
    ffmpeg:
      ################
      # Source passed to ffmpeg after the -i parameter. Supports anything compatible with OpenCV and FFmpeg.
      # Environment variables that begin with 'FRIGATE_' may be referenced in {}
      ################
      input: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
      #################
      # These values will override default values for just this camera
      #################
      # global_args: []
      # hwaccel_args: []
      # input_args: []
      # output_args: []
    
    ################
    ## Optionally specify the resolution of the video feed. Frigate will try to auto detect if not specified
    ################
    # height: 1280
    # width: 720

    ################
    ## Optional mask. Must be the same aspect ratio as your video feed.
    ## 
    ## The mask works by looking at the bottom center of the bounding box for the detected
    ## person in the image. If that pixel in the mask is a black pixel, it ignores it as a
    ## false positive. In my mask, the grass and driveway visible from my backdoor camera 
    ## are white. The garage doors, sky, and trees (anywhere it would be impossible for a 
    ## person to stand) are black.
    ## 
    ## Masked areas are also ignored for motion detection.
    ################
    # mask: back-mask.bmp
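The bottom-center lookup described above can be sketched like this (a simplified illustration of the documented rule, not Frigate's actual implementation; the function name and 0/255 pixel encoding are my assumptions):

```python
def masked_out(mask, box):
    """mask: 2D list of pixel values (0 = black = ignore that area).
    box: (x, y, w, h) bounding box of a detected person.
    Checks the pixel at the bottom center of the box, as described above."""
    x, y, w, h = box
    bottom = min(y + h, len(mask) - 1)        # clamp to image bounds
    center = min(x + w // 2, len(mask[0]) - 1)
    return mask[bottom][center] == 0          # black pixel -> false positive

# 4x4 mask: top half black (sky/trees), bottom half white (driveway)
mask = [[0] * 4, [0] * 4, [255] * 4, [255] * 4]
print(masked_out(mask, (0, 0, 2, 1)))  # True  (bottom center lands on black)
print(masked_out(mask, (0, 1, 2, 2)))  # False (bottom center lands on white)
```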

    ################
    # Allows you to limit the framerate within frigate for cameras that do not support
    # custom framerates. A value of 1 tells frigate to look at every frame, 2 every 2nd frame, 
    # 3 every 3rd frame, etc.
    ################
    take_frame: 1
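The take_frame decimation above amounts to keeping every Nth frame; roughly (illustrative sketch, hypothetical function name):

```python
def frames_to_process(frame_indices, take_frame=1):
    """Keep every take_frame-th frame: 1 = every frame, 2 = every 2nd, etc."""
    return [i for i in frame_indices if i % take_frame == 0]

print(frames_to_process(range(6), take_frame=2))  # [0, 2, 4]
```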

    ################
    # Configuration for the snapshots in the debug view and mqtt
    ################
    snapshots:
      show_timestamp: True

    ################
    # Camera level object config. This config is merged with the global config above.
    ################
    objects:
      track:
        - person
      filters:
        person:
          min_area: 5000
          max_area: 100000
          threshold: 0.5

I’ve tried both ways with no success.

Is config.yml at /opt/frigate/config.yml? Make sure the filename is all lowercase. Also, I recommend using the stable tag, not 0.5.1-rc2.
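For what it's worth, assuming the config lives at /opt/frigate/config.yml on the host, a cleaned-up run command would look roughly like this (a sketch, not the only valid form; note that "-v gpus=all" is not a valid mount and GPU access would be the separate "--gpus all" flag, which this version of Frigate doesn't need anyway):

```shell
# Host file /opt/frigate/config.yml is seen inside the
# container as /config/config.yml, which is what it opens.
docker run --rm --privileged --shm-size=1024m \
  -v /opt/frigate:/config:ro \
  -v /etc/localtime:/etc/localtime:ro \
  -p 5000:5000 \
  blakeblackshear/frigate:stable
```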

Thanks, I had to manually move the config file to /opt/frigate/config.yml, but now I have this error:

os@os:/opt$ sudo sudo docker run --rm --privileged --shm-size=1024m -v gpus=all -v/opt/frigate:/config:ro -v /etc/localtime:/etc/localtime:ro -p 5001:5001 blakeblackshear/frigate:stable
Traceback (most recent call last):
  File "detect_objects.py", line 361, in <module>
    main()
  File "detect_objects.py", line 164, in main
    client.connect(MQTT_HOST, MQTT_PORT, 60)
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 937, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 1071, in reconnect
    sock = self._create_socket_connection()
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 3522, in _create_socket_connection
    return socket.create_connection(addr, source_address=source, timeout=self._keepalive)
  File "/usr/lib/python3.7/socket.py", line 707, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/lib/python3.7/socket.py", line 752, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known


You need to set up a real MQTT broker and update your config. The mqtt.server.com host in your config is just a placeholder, which is why name resolution fails.
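For example, with a broker such as Mosquitto running on your LAN, the mqtt section would point at its address (the IP below is a placeholder, not a value from this thread):

```yaml
mqtt:
  host: 192.168.1.50   # placeholder -- replace with your broker's IP or hostname
  topic_prefix: frigate
```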

Question: best.jpg works very nicely, but due to the high resolution it's a bit heavy to send, so it takes a long time to upload. I tried adding parameters to the URL, but that doesn't work. Is a lower resolution still available? If not, I'll try to resize before sending, but that takes time as well :slight_smile:

There isn't a lower resolution available. If you open an issue on GitHub, I can look at adding it. It should be simple enough.
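Until then, resizing client-side after fetching best.jpg is the workaround. As a rough pure-Python illustration of naive decimation (a real resize with cv2.resize or PIL's Image.thumbnail would give far better quality; this just keeps every Nth pixel):

```python
def downscale(pixels, factor):
    """Naive downscale of a 2D pixel grid by keeping every factor-th
    row and column. Fast and dependency-free, but prone to aliasing."""
    return [row[::factor] for row in pixels[::factor]]

# An 8x8 "frame" of (row, col) tuples shrinks to 2x2 at factor 4:
frame = [[(r, c) for c in range(8)] for r in range(8)]
small = downscale(frame, 4)
print(len(small), len(small[0]))  # 2 2
```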


Hey @blakeblackshear, I have a camera on my driveway that I’d like to detect cars and people on. I’d like person detection to happen across the entire frame, but I’d like car detection to be masked (to avoid where my car is parked). Right now I have two separate camera entries in my frigate config for the driveway camera, one only for persons and the other only for cars, with a mask. This is working fine for me, but how difficult would it be to implement a per-object mask? I can open a Github issue for tracking if you have this in mind for the future. Or is there another way you would recommend achieving what I’m looking for?

Thanks!

Several others have a similar use case. It’s already on my list.


Just got this set up yesterday now that my Coral TPU came in. It seems to be working well on 0.5.1. I had a couple of times where one cam seemed to lock up in Frigate; however, that one is connected to a wireless media bridge because I don't have full Ethernet to that side of the house, so there is a little more latency. I just upped the timeout to see if that corrects the issue.

So far I'm impressed by the (lack of) CPU consumption. I'm processing two 720p feeds and one 2K 180-degree feed.

Does Frigate use the Coral TPU by default? I don't see anything in the config examples for Coral. I'm assuming so from the example Docker config passing the USB bus through, and I'm not seeing my CPU spike too much (5-10% more than before Frigate was running).
Running in Docker on an i5 with 27GB RAM and the Coral connected via USB. I bumped shm_size up to 2GB, as I saw I was using around 700MB before I made the adjustments and I have plenty to spare.

And if it's already leveraging the Coral, does it make sense to enable Quick Sync acceleration as well? Or does Frigate only do one or the other?

It does use the Coral TPU by default. If it is not detected, you will see this message in your Docker logs:

No EdgeTPU detected. Falling back to CPU.

I'm running on an i7 and I enabled Quick Sync; it helps offload the motion detection.


The Coral only runs the object detection AI models. QuickSync reduces the CPU usage to decode the h264 video. If you can use both, that is ideal.


I see. After allocating more memory, I noticed the system is more stable and taking advantage of it. I also noticed the CPU spike with a lot of activity out front here at the end of the day. I'll enable Quick Sync and grab some snapshots of before and after.

I think I would need to pass /dev/dri/renderD128 through Docker, right? Like below?

devices:
  - /dev/dri/renderD128:/dev/dri/renderD128

That’s what works for me. https://github.com/blakeblackshear/frigate/blob/b345571a6387c5b6b1bc34504792539710691805/docs/DEVICES.md#hardware-acceleration

I suspect my masks are being ignored. How can I check whether they are really used?

Make a solid black (must be pure black) image the same size as your camera resolution and use it as the mask. If the mask is being applied, everything should then be ignored.

I am not familiar with hardware acceleration / Quick Sync. Everything works without it enabled. When I tried it on one of my two cameras, it didn't work: the process for that camera kept restarting. Perhaps I didn't use the proper syntax or arguments. Can anyone help? I believe my CPU supports Quick Sync.

model name : Intel® Core™ i7-4600U CPU @ 2.10GHz

  garage:
    ffmpeg:
      input: rtsp://xxx:[email protected]:554//h264Preview_01_sub

      hwaccel_args:
        - -hwaccel
        - vaapi
        - -hwaccel_device
        - /dev/dri/renderD128
        - -hwaccel_output_format
        - yuv420p

    take_frame: 1
    ################

This is undoubtedly the best and most complete recognition project that I know of. You can see there are many hours of work behind it. Thank you very much for your dedication and knowledge; for me it has become essential.


Hear, hear! These days it's almost like an appliance. With a bit better false-positive rejection or a better model, plus the ability to have multiple masks and objects per camera, I'll be able to just install it and leave it.
