Local realtime person detection for RTSP cameras

Thanks! I didn’t know that needed to be activated.

It didn’t help with the cctv-card (HACS), but it’s more of a card issue. To fix this, the size of the preview image would need to be larger… You can see the problem in the attached screenshot.


(Each black box has a camera view underneath; you can click on the left side to show a big version on the right side. That right side should be the stream.)

Could someone explain why the stream has more than 10 seconds of delay with the stream component enabled, even though it’s fluent? Without it I have no delay, but it stutters. Is there any way to get rid of the huge delay? It makes it impossible to use Frigate as a security cam with such a delay :pleading_face:

It’s complicated.

I see, I see… it’s strange to have such varying behaviour. Without the stream component might be the best solution, as it is instant for me. Disadvantage: my streams in fullscreen are just ugly, but sufficient for my purpose. I will revert my setup to WITHOUT the stream component activated and live with a stream with less fps :frowning: I hope one day someone will find a solution for this, as Frigate offers a nice clean stream…

Some people are working on solutions:

https://community.home-assistant.io/t/realtime-camera-streaming-without-any-delay-rtsp2webrtc/

Seems like this is another approach.

1 Like

:woozy_face:

Where are you from? Do you have an Intel NUC? My order was fulfilled within one week :slight_smile:

Netherlands. Yes, I have a NUC i5 with Proxmox.

From my understanding, the objects were trained on 300x300 images.
The sub-stream of my camera is set to 640x480.

What would I need to do to ensure the camera is working with the recommended 300x300 image size? Would setting a 300x300 zone or object mask be the right way of doing this?

Got my Coral PCIe today and installed it on my Win10 machine, but when I change the config to the Coral detector it doesn’t work; Docker keeps dying.

It only works when I use my CPU…

Are there any logs from Frigate we can see?
Have you changed the “detectors:”?
This is my setup…

detectors:
  coral:
    type: edgetpu
    device: 'usb:0'

I changed it to:

detectors:
  coral_pci:
    type: edgetpu
    device: pci

This is what docker shows:

```
During handling of the above exception, another exception occurred:
frigate |
frigate | Traceback (most recent call last):
frigate |   File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
frigate |     self.run()
frigate |   File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
frigate |     self._target(*self._args, **self._kwargs)
frigate |   File "/opt/frigate/frigate/edgetpu.py", line 124, in run_detector
frigate |     object_detector = LocalObjectDetector(tf_device=tf_device, num_threads=num_threads)
frigate |   File "/opt/frigate/frigate/edgetpu.py", line 63, in __init__
frigate |     edge_tpu_delegate = load_delegate('libedgetpu.so.1.0', device_config)
frigate |   File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 154, in load_delegate
frigate |     raise ValueError('Failed to load delegate from {}\n{}'.format(
frigate | ValueError: Failed to load delegate from libedgetpu.so.1.0
```
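For what it’s worth, on a Linux host a `Failed to load delegate` error with a PCIe Coral often means the gasket/apex kernel driver isn’t installed on the host or the device node isn’t mapped into the container. A hedged docker-compose sketch (the service name, image tag, and device path are assumptions, not taken from this thread):

```yaml
# Assumes the host has the gasket/apex driver loaded, so /dev/apex_0 exists.
services:
  frigate:
    image: blakeblackshear/frigate:stable-amd64   # example tag, adjust to your install
    devices:
      - /dev/apex_0:/dev/apex_0   # PCIe Coral device node
```

On Docker for Windows that device node doesn’t exist inside the VM at all, which would match the symptom described here.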


This is not a person :slight_smile: How can I tell Frigate that?

Hi everyone,
I am really going crazy with the Frigate config.
I have an Intel NUC 10i5 with 8 GB RAM and Proxmox,
and 2 Dahua ONVIF PoE cameras that each have two streams.

Can someone share their config, please?
Thank you!!

You don’t need to do anything; it will simply resize it to the right size. But the ideal scenario is that the smallest thing you plan to look for is already around 300x300 on your camera, so the detector gets all the resolution it needs to do a good job. Not to say it won’t work with less, it just works better with more. 1080p is the sweet spot for me.
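Concretely, for the 640x480 sub-stream question above, nothing special is needed beyond telling Frigate the sub-stream’s real resolution; it resizes detection regions to the model’s 300x300 input on its own. A minimal sketch (the camera name and URL are placeholders):

```yaml
cameras:
  front_door:                 # hypothetical camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@camera-ip:554/substream   # placeholder URL
          roles:
            - detect
    # Match the sub-stream's actual resolution; Frigate handles
    # resizing to the model input internally.
    width: 640
    height: 480
```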

It will be very hard to get this working on Windows, as Docker on Windows runs this in a VM, and the Coral doesn’t like VMs. I tried with ESXi and gave up.

Change the threshold to 0.77 (77%).

Thank you, kind sir, lol, I didn’t realise.
But obviously I thought I could tell the TensorFlow model that, man, this is not a person.
I hope robbers won’t camouflage themselves as a baby stroller to get past my PENTAGON security cameras with my 0.77 threshold :smiley:
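For reference, that threshold lives under the camera’s object filters in the config, the same structure shown in the full config later in this thread. A sketch assuming a camera named `CAM0`:

```yaml
cameras:
  CAM0:
    objects:
      filters:
        person:
          threshold: 0.77   # minimum computed score before an object counts as a person
```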

1 Like

Proxmox with 10 Dahua RTSP ONVIF cameras.
Most important thing: you need to pass the GPU through with PCI passthrough in Proxmox!
Then you will see lower CPU usage.
I have an i7 10th gen with 10 cameras, with GPU passthrough, and it’s fine, taking 2 cores at 100%.
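With the GPU passed through, the decode offload itself is enabled via ffmpeg hardware-acceleration args. A hedged sketch for Intel iGPUs via VAAPI (the render-node path is an assumption and may differ on your host):

```yaml
ffmpeg:
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128   # typical Intel render node, verify on your host
    - -hwaccel_output_format
    - yuv420p
```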

detectors:
  coral:
    type: cpu
cameras:
  # Required: name of the camera
  CAM0:
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      inputs:
        # Required: the path to the stream
        # NOTE: Environment variables that begin with 'FRIGATE_' may be referenced in {}
        - path: rtsp://topsecretlogin:topsecretpassword@mylocalip:554/cam/realmonitor?channel=1&subtype=0&authbasic=pleasedonthackme
          # Required: list of roles for this stream. valid values are: detect,record,clips,rtmp
          # NOTICE: In addition to assigning the record, clips, and rtmp roles,
          # they must also be enabled in the camera config.
          roles:
            - detect
            - clips


    # Required: width of the frame for the input with the detect role
    width: 1920
    # Required: height of the frame for the input with the detect role
    height: 1080
    # Optional: desired fps for your camera for the input with the detect role
    # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
    #       Frigate will attempt to autodetect if not specified.
    fps: 5



    # Optional: timeout for highest scoring image before allowing it
    # to be replaced by a newer image. (default: shown below)
    best_image_timeout: 60

    # Optional: zones for this camera


    # Optional: Camera level detect settings
    zones:
      zone_0:
        coordinates: 0,1080,1920,1080,1920,0,0,0
    detect:
      # Optional: enables detection for the camera (default: True)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: True
      # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
      max_disappeared: 25

    # Optional: save clips configuration
    clips:

      # Required: enables clips for the camera (default: shown below)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: False
      # Optional: Number of seconds before the event to include in the clips (default: shown below)
      pre_capture: 5
      # Optional: Number of seconds after the event to include in the clips (default: shown below)
      post_capture: 5
      # Optional: Objects to save clips for. (default: all tracked objects)
      objects:
        - person
      # Optional: Restrict clips to objects that entered any of the listed zones (default: no required zones)
      #required_zones: []
      # Optional: Camera override for retention settings (default: global values)
      retain:
        # Required: Default retention days (default: shown below)
        default: 1
        # Optional: Per object retention days
        #objects:
          #person: 15

    # Optional: 24/7 recording configuration
    record:
      # Optional: Enable recording (default: global setting)
      enabled: False
      # Optional: Number of days to retain (default: global setting)
      retain_days: 30

    # Optional: RTMP re-stream configuration
    rtmp:
      # Required: Enable the live stream (default: True)
      enabled: False

    # Optional: Configuration for the jpg snapshots written to the clips directory for each event
    snapshots:
      # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: False
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: False
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: True
      # Optional: crop the snapshot (default: shown below)
      crop: False
      # Optional: height to resize the snapshot to (default: original size)
      #height: 175
      # Optional: Restrict snapshots to objects that entered any of the listed zones (default: no required zones)
      #required_zones: []
      # Optional: Camera override for retention settings (default: global values)
      retain:
        # Required: Default retention days (default: shown below)
        default: 1
        # Optional: Per object retention days
        #objects:
          #person: 15

    # Optional: Configuration for the jpg snapshots published via MQTT
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: False
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: True
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: True
      # Optional: crop the snapshot (default: shown below)
      crop: True
      # Optional: height to resize the snapshot to (default: shown below)
      height: 270
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      #required_zones: []

    # Optional: Camera level object filters config.
    objects:
      track:
        - person
        - car
        - cat
      # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object. 
      # NOTE: This mask is COMBINED with the object type specific mask below
      # mask: 0,0,1000,0,1000,200,0,200
      filters:
        person:
          min_area: 1000
          max_area: 100000
          min_score: 0.5
          threshold: 0.7
          # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
          # Checks based on the bottom center of the bounding box of the object
          # mask: 0,0,1000,0,1000,200,0,200