Local realtime person detection for RTSP cameras

You can’t. Still a single mask per camera.


I figured out my mask issue. It was caused by simultaneous object detection: if one of the objects is within the masked detection zone but the other is not, both will still be reported.


Bummer, so I can’t define multiple camera entries against the same actual camera stream with different objects and masks?

Wanted to give an update. The dev image has been working great with my Reolink cameras for the last 10 days. No hang ups. Thanks!


If you have enough resources to analyze a camera multiple times, that will work as a workaround until I add the functionality.
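For reference, the workaround described would look something like this in the config: two camera entries pointing at the same RTSP stream, each with its own object list and mask. Camera names and mask file names here are illustrative, not from the thread:

```yaml
cameras:
  # Both entries point at the same physical camera. Each is analyzed
  # separately, so each can have its own mask and tracked objects —
  # at the cost of processing the stream twice.
  front_door_people:
    ffmpeg:
      input: rtsp://camera.url.goes.here
    objects:
      track:
        - person
    mask: front-people-mask.bmp

  front_door_cars:
    ffmpeg:
      input: rtsp://camera.url.goes.here
    objects:
      track:
        - car
    mask: front-cars-mask.bmp
```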


OK, so a lot of this is because I am a bit of a newb, and frankly don’t know what I am doing. Please be kind…

I managed to install the Docker container with no major issues, and it appears to be running when I check Portainer. I am running Docker on Ubuntu 20.04 desktop. I have added the config file and edited the host, username and password, as well as updated the camera stream to my Reolink cam. Other than that, I haven’t done much. I am not able to even view the logs, let alone a stream. Is there something rudimentary I may have missed? If I look at the container logs I am getting the errors below.


Traceback (most recent call last):
  File "detect_objects.py", line 252, in <module>
    main()
  File "detect_objects.py", line 117, in main
    client.connect(MQTT_HOST, MQTT_PORT, 60)
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 937, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 1071, in reconnect
    sock = self._create_socket_connection()
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 3522, in _create_socket_connection
    return socket.create_connection(addr, source_address=source, timeout=self._keepalive)
  File "/usr/lib/python3.7/socket.py", line 728, in create_connection
    raise err
  File "/usr/lib/python3.7/socket.py", line 716, in create_connection
    sock.connect(sa)
socket.timeout: timed out

I’m guessing, by the looks of this, that you don’t have your MQTT host set correctly?

So here’s what I edited it to. Password for now is weak, I will change it once I get this working:

web_port: 5001

mqtt:
  host: 192.168.2.98
  topic_prefix: frigate
  # client_id: frigate # Optional -- set to override default client id of 'frigate' if running multiple instances
  user: mqtt
  #################
  ## Environment variables that begin with 'FRIGATE_' may be referenced in {}.
  ##   password: '{FRIGATE_MQTT_PASSWORD}'
  #################
  password: mqtt

Can you download something like MQTT Explorer and test a connection from your local machine, just to be sure your username/password and host are correct?

Yeah I did that already and it connects no issue.

Sorry man, I’m out of ideas. Hopefully somebody else will chime in.

Yeah, by all accounts you are probably right, but for the life of me I am unable to see the difference between the connection in MQTT Explorer and what I put in my config file. Anyway, will look again in the week.

Your error says the connection timed out, so it’s a networking/firewall issue, not an authentication issue.
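A quick way to tell the two apart is to check raw TCP reachability first. A small sketch (the host is the one from the config above; 1883 is the default MQTT port):

```python
import socket

def port_reachable(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout.

    A timeout or refusal here points at networking or a firewall; if this
    succeeds but the MQTT client still fails, suspect authentication instead.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and socket.timeout
        return False

# Example (broker address from the config above):
# port_reachable("192.168.2.98", 1883)
```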

So I disabled my firewall, and now I get the following, and I still cannot access the debug logs or camera:


On connect called
/arrow/cpp/src/plasma/store.cc:1226: Allowing the Plasma store to use up to 0.4GB of memory.
/arrow/cpp/src/plasma/store.cc:1253: Starting object store with directory /dev/shm and huge page support disabled
/arrow/cpp/src/plasma/store.cc:1270: System memory request exceeds memory available in /dev/shm. The request is for 400000000 bytes, and the amount available is 60397977 bytes. You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you may need to pass an argument with the flag '--shm-size' to 'docker run'.
Traceback (most recent call last):
  File "detect_objects.py", line 252, in <module>
    main()
  File "detect_objects.py", line 147, in main
    'fps': mp.Value('d', float(config['fps'])),
KeyError: 'fps'
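Separately, the Plasma store warning in that log (the 400MB request against a ~60MB /dev/shm) is what the log itself suggests fixing with a larger shared-memory allocation at container start. A sketch of the flag it mentions (the size is illustrative; pick one that fits your memory budget):

```shell
# Give the container enough /dev/shm for the Plasma store's 400MB request.
docker run --shm-size=1g blakeblackshear/frigate:stable
```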

You have an error in your config on the fps parameter.
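In the working configs shared later in this thread, fps sits at the camera level. A minimal sketch (camera name and value are examples):

```yaml
cameras:
  front_door:
    ffmpeg:
      input: rtsp://camera.url.goes.here
    fps: 5
```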

You are a godsend, thanks! Got it working; as you suggested, it was just an error in the config file.

Wow, just stumbled upon this today, so happy it is working! I wanted to share the config I got working. I’m running a k3s Kubernetes cluster and got Frigate running on it with no issues. I’m using CPU processing as I do not have a Coral (yet).

So in case anyone else wants to do this, here is the simple setup:

# --------------------------------------------------------------------------------
# Namespace
---
apiVersion: v1
kind: Namespace
metadata:
    name: frigate

# --------------------------------------------------------------------------------
# Config
---
apiVersion: v1
kind: ConfigMap
metadata:
    name: frigate-config
    namespace: frigate
data:
    config.yml: |
      web_port: 5000

      mqtt:
        host: hassio.lab.local
        topic_prefix: frigate
        # client_id: frigate # Optional -- set to override default client id of 'frigate' if running multiple instances
        user: user
        #################
        ## Environment variables that begin with 'FRIGATE_' may be referenced in {}.
        ##   password: '{FRIGATE_MQTT_PASSWORD}'
        #################
        password: password

      #################
      # Default ffmpeg args. Optional and can be overwritten per camera.
      # Should work with most RTSP cameras that send h264 video
      # Built from the properties below with:
      # "ffmpeg" + global_args + input_args + "-i" + input + output_args
      #################
      # ffmpeg:
      #   global_args:
      #     - -hide_banner
      #     - -loglevel
      #     - panic
      #   hwaccel_args: []
      #   input_args:
      #     - -avoid_negative_ts
      #     - make_zero
      #     - -fflags
      #     - nobuffer
      #     - -flags
      #     - low_delay
      #     - -strict
      #     - experimental
      #     - -fflags
      #     - +genpts+discardcorrupt
      #     - -vsync
      #     - drop
      #     - -rtsp_transport
      #     - tcp
      #     - -stimeout
      #     - '5000000'
      #     - -use_wallclock_as_timestamps
      #     - '1'
      #   output_args:
      #     - -f
      #     - rawvideo
      #     - -pix_fmt
      #     - rgb24

      ####################
      # Global object configuration. Applies to all cameras
      # unless overridden at the camera levels.
      # Keys must be valid labels. By default, the model uses coco (https://dl.google.com/coral/canned_models/coco_labels.txt).
      # All labels from the model are reported over MQTT. These values are used to filter out false positives.
      # min_area (optional): minimum width*height of the bounding box for the detected person
      # max_area (optional): maximum width*height of the bounding box for the detected person
      # threshold (optional): The minimum decimal percentage (50% hit = 0.5) for the confidence from tensorflow
      ####################
      objects:
        track:
          - person
        filters:
          person:
            min_area: 5000
            max_area: 100000
            threshold: 0.5

      cameras:
        front_door:
          ffmpeg:
            ################
            # Source passed to ffmpeg after the -i parameter. Supports anything compatible with OpenCV and FFmpeg.
            # Environment variables that begin with 'FRIGATE_' may be referenced in {}
            ################
            input: rtsp://camera.url.goes.here
            #################
            # These values will override default values for just this camera
            #################
            # global_args: []
            # hwaccel_args: []
            # input_args: []
            # output_args: []
          
          ################
          ## Optionally specify the resolution of the video feed. Frigate will try to auto detect if not specified
          ################
          # height: 1024
          # width: 576

          ################
          ## Optional mask. Must be the same aspect ratio as your video feed.
          ## 
          ## The mask works by looking at the bottom center of the bounding box for the detected
          ## person in the image. If that pixel in the mask is a black pixel, it ignores it as a
          ## false positive. In my mask, the grass and driveway visible from my backdoor camera 
          ## are white. The garage doors, sky, and trees (anywhere it would be impossible for a 
          ## person to stand) are black.
          ## 
          ## Masked areas are also ignored for motion detection.
          ################
          # mask: back-mask.bmp

          ################
          # Allows you to limit the framerate within frigate for cameras that do not support
          # custom framerates. A value of 1 tells frigate to look at every frame, 2 every 2nd frame, 
          # 3 every 3rd frame, etc.
          ################
          take_frame: 1

          ################
          # The expected framerate for the camera. Frigate will try and ensure it maintains this framerate
          # by dropping frames as necessary. Setting this lower than the actual framerate will allow frigate
          # to process every frame at the expense of realtime processing.
          ################
          fps: 5

          ################
          # Configuration for the snapshots in the debug view and mqtt
          ################
          snapshots:
            show_timestamp: False

          # ################
          # # Camera level object config. This config is merged with the global config above.
          # ################
          # objects:
          #   track:
          #     - person
          #   filters:
          #     person:
          #       min_area: 5000
          #       max_area: 100000
          #       threshold: 0.5

# --------------------------------------------------------------------------------
# Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frigate-deployment
  namespace: frigate
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frigate
  template:
    metadata:
      labels:
        app: frigate
    spec:
      containers:
      - name: frigate
        image: blakeblackshear/frigate:stable
        env:
        - name: TZ
          value: America/Toronto
        volumeMounts:
          - name: frigate-config
            mountPath: /config/config.yml
            subPath: config.yml
      volumes:
        - name: frigate-config
          configMap:
              name: frigate-config

# --------------------------------------------------------------------------------
# Service
---
apiVersion: v1
kind: Service
metadata:
  name: frigate-service
  namespace: frigate
spec:
  ports:
  - name: http
    targetPort: 5000
    port: 5000
  selector:
    app: frigate

# --------------------------------------------------------------------------------
# Ingress
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frigate-ingress
  namespace: frigate
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http
spec:
  rules:
  - host: frigate.lab.local
    http:
      paths:
      - path: /
        backend:
          serviceName: frigate-service
          servicePort: http
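As an aside, the mask behavior described in the config comments above (only the bottom center of the bounding box is checked against the mask image) can be sketched as follows. The function name and the plain-list mask are illustrative stand-ins for the BMP file Frigate actually loads:

```python
def masked_out(mask, box):
    """Return True if a detection should be ignored as a false positive.

    mask: 2D list of pixel values (0 = black = ignore, 255 = white = keep),
          same size/aspect ratio as the video frame.
    box:  (x_min, y_min, x_max, y_max) bounding box of the detected object.

    As the config comments describe, only the bottom-center point of the
    box is sampled: a black mask pixel there discards the detection.
    """
    x_min, y_min, x_max, y_max = box
    bottom_center_x = (x_min + x_max) // 2
    return mask[y_max][bottom_center_x] == 0
```

This is why painting impossible standing locations (sky, rooftops, trees) black suppresses false positives without hiding the rest of the frame.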

Did you manage to get your iOS notifications working?

Hi,
Really like what you have done, but I am having a few issues I am trying to work out.
The first one is that authorization on my Amcrest doorbell cam fails most of the time. Frigate keeps retrying and seems to eventually get through. The RTSP URL works fine every time in VLC. I’m running the stable Docker version. The config looks like:

input: rtsp://admin:[email protected]:554

Any thoughts on ways to make it more reliable?

Hi. First of all, thanks @blakeblackshear for the great work. I ran through this thread, but it is so huge that I think my problem was mentioned and I missed it.

I made a Proxmox machine for Frigate to avoid running it on my HA host. The issue I ran into is 100% CPU usage (on an old Xeon E5450). I know it’s an old CPU, but I don’t understand why it uses “only” 100% when I gave 4 cores to this VM. I set 2 fps, and my inference_speed is about 600 (is that ok or not?). Everything is great when no object is detected: the camera view is “live”. But when I go outside, CPU hits 100%, the camera view drops to something like 0.1 fps, and only one thread is used. I tested with stress and there all 400% of CPU was used. Is this a problem with the old CPU, with virtualisation, or is there something I can improve?