Local realtime person detection for RTSP cameras

The new version is cool, but somehow I have the feeling that the previous one was more… bulletproof at catching events. Maybe it’s because, if I understood correctly, the new version has 24/7 recording always on, while on the old one it was off for me and I just recorded on events.
Anyway… I have this problem on just 1 of my 3 cams:

I can delete db and recorded clips and start over, not a problem, but I was wondering what is causing the problem…

Add the record role to each camera :slight_smile:

I really don’t know.
Nevertheless, it’s working fine now.
Write here if you have any further problems.

It doesn’t have anything to do with recording. The previous implementation of clips was recording 24/7 too. It just threw away segments when there weren’t any events.


I have the same problem. I can’t turn on clips and snapshots; I toggle them, but no results.

# Optional: Detectors configuration. Defaults to a single CPU detector
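# NOTE: The block below is an illustrative sketch for a USB Coral; if you only
#       run on CPU, you can omit this section entirely and keep the default.
detectors:
  # Required: name of the detector
  coral:
    # Required: type of the detector
    # Valid values are 'edgetpu' (requires a Coral device) and 'cpu'
    type: edgetpu
    # Optional: device name, e.g. 'usb' for a USB Coral
    device: usb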

# Optional: Database configuration
database:
  # The path to store the SQLite DB (default: shown below)
  path: /media/frigate/frigate.db

# Optional: model modifications
model:
  # Optional: path to the model (default: automatic based on detector)
  path: /edgetpu_model.tflite
  # Optional: path to the labelmap (default: shown below)
  labelmap_path: /labelmap.txt
  # Required: Object detection model input width (default: shown below)
  width: 320
  # Required: Object detection model input height (default: shown below)
  height: 320
  # Optional: Label name modifications. These are merged into the standard labelmap.
  labelmap:
    2: vehicle

# Optional: logger verbosity settings
logger:
  # Optional: Default log verbosity (default: shown below)
  default: info
  # Optional: Component specific logger overrides
  logs:
    frigate.event: debug

# Optional: set environment variables
environment_vars:
  EXAMPLE_VAR: value

# Optional: birdseye configuration
birdseye:
  # Optional: Enable birdseye view (default: shown below)
  enabled: True
  # Optional: Width of the output resolution (default: shown below)
  width: 1280
  # Optional: Height of the output resolution (default: shown below)
  height: 720
  # Optional: Encoding quality of the mpeg1 feed (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 4
  # Optional: Mode of the view. Available options are: objects, motion, and continuous
  #   objects - cameras are included if they have had a tracked object within the last 30 seconds
  #   motion - cameras are included if motion was detected in the last 30 seconds
  #   continuous - all cameras are included always
  mode: objects

# Optional: ffmpeg configuration
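# NOTE: Sketch of the commonly used global options; hwaccel_args depend entirely
#       on your hardware, so check the hardware acceleration docs before setting them.
ffmpeg:
  # Optional: global ffmpeg args (default: shown below)
  global_args: -hide_banner -loglevel warning
  # Optional: global hwaccel args (default: none)
  hwaccel_args: []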

# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
detect:
  # Optional: width of the frame for the input with the detect role (default: shown below)
  width: 1280
  # Optional: height of the frame for the input with the detect role (default: shown below)
  height: 720
  # Optional: desired fps for your camera for the input with the detect role (default: shown below)
  # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
  fps: 5
  # Optional: enables detection for the camera (default: True)
  # This value can be set via MQTT and will be updated at startup based on the retained value
  enabled: True
  # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 25

# Optional: Object configuration
# NOTE: Can be overridden at the camera level
objects:
  # Optional: list of objects to track from labelmap.txt (default: shown below)
  track:
    - person
    - cat
    - vehicle 
  # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
  # Checks based on the bottom center of the bounding box of the object.
  # NOTE: This mask is COMBINED with the object type specific mask below
  mask: 0,0,1000,0,1000,200,0,200
  # Optional: filters to reduce false positives for specific object types
  filters:
    person:
      # Optional: minimum width*height of the bounding box for the detected object (default: 0)
      min_area: 5000
      # Optional: maximum width*height of the bounding box for the detected object (default: 24000000)
      max_area: 100000
      # Optional: minimum score for the object to initiate tracking (default: shown below)
      min_score: 0.5
      # Optional: minimum decimal percentage for tracked object's computed score to be considered a true positive (default: shown below)
      threshold: 0.7
      # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object
      mask: 0,0,1000,0,1000,200,0,200

# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
motion:
  # Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
  # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
  # The value should be between 1 and 255.
  threshold: 25
  # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: ~0.17% of the motion frame area)
  # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will make motion detection more sensitive to smaller
  # moving objects.
  contour_area: 100
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging the motion delta across multiple frames (default: shown below)
  # Higher values mean the current frame impacts the delta a lot, and a single raindrop may register as motion.
  # Too low and a fast-moving person won't be detected as motion.
  delta_alpha: 0.2
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background (default: shown below)
  # Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster.
  # Low values will cause things like moving shadows to be detected as motion for longer.
  # https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/
  frame_alpha: 0.2
  # Optional: Height of the resized motion frame (default: 1/6th of the original frame height, but no less than 180)
  # This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense of higher CPU usage.
  # Lower values result in less CPU, but small changes may not register as motion.
  frame_height: 180
  # Optional: motion mask
  # NOTE: see docs for more detailed info on creating masks
  mask: 0,900,1080,900,1080,1920,0,1920

# Optional: Record configuration
# NOTE: Can be overridden at the camera level
record:
  # Optional: Enable recording (default: shown below)
  enabled: True
  # Optional: Number of days to retain recordings regardless of events (default: shown below)
  # NOTE: This should be set to 0 and retention should be defined in events section below
  #       if you only want to retain recordings of events.
  retain_days: 10
  # Optional: Event recording settings
  events:
    # Optional: Maximum length of time to retain video during long events. (default: shown below)
    # NOTE: If an object is being tracked for longer than this amount of time, the retained recordings
    #       will be the last x seconds of the event unless retain_days under record is > 0.
    max_seconds: 300
    # Optional: Number of seconds before the event to include (default: shown below)
    pre_capture: 5
    # Optional: Number of seconds after the event to include (default: shown below)
    post_capture: 5
    # Optional: Objects to save recordings for. (default: all tracked objects)
    objects:
      - person
      - dog
      - cat
      - car
    # Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones)
    required_zones: []
    # Optional: Retention settings for recordings of events
    retain:
      # Required: Default retention days (default: shown below)
      default: 10
      # Optional: Per object retention days
      objects:
        person: 15

# Optional: Configuration for the jpg snapshots written to the clips directory for each event
# NOTE: Can be overridden at the camera level
snapshots:
  # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
  # This value can be set via MQTT and will be updated at startup based on the retained value
  enabled: True
  # Optional: print a timestamp on the snapshots (default: shown below)
  timestamp: False
  # Optional: draw bounding box on the snapshots (default: shown below)
  bounding_box: False
  # Optional: crop the snapshot (default: shown below)
  crop: False
  # Optional: height to resize the snapshot to (default: original size)
  height: 175
  # Optional: Restrict snapshots to objects that entered any of the listed zones (default: no required zones)
  required_zones: []
  # Optional: Camera override for retention settings (default: global values)
  retain:
    # Required: Default retention days (default: shown below)
    default: 10
    # Optional: Per object retention days
    objects:
      person: 15

# Optional: RTMP configuration
# NOTE: Can be overridden at the camera level
rtmp:
  # Optional: Enable the RTMP stream (default: True)
  enabled: True

# Optional: Live stream configuration for WebUI
# NOTE: Can be overridden at the camera level
live:
  # Optional: Set the height of the live stream. (default: 720)
  # This must be less than or equal to the height of the detect stream. Lower resolutions
  # reduce bandwidth required for viewing the live stream. Width is computed to match known aspect ratio.
  height: 720
  # Optional: Set the encode quality of the live stream (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 5

# Optional: in-feed timestamp style configuration
# NOTE: Can be overridden at the camera level
timestamp_style:
  # Optional: Position of the timestamp (default: shown below)
  #           "tl" (top left), "tr" (top right), "bl" (bottom left), "br" (bottom right)
  position: "tl"
  # Optional: Format specifier conform to the Python package "datetime" (default: shown below)
  #           Additional Examples:
  #             german: "%d.%m.%Y %H:%M:%S"
  format: "%m/%d/%Y %H:%M:%S"
  # Optional: Color of font
  color:
    # All Required when color is specified (default: shown below)
    red: 255
    green: 255
    blue: 255
  # Optional: Line thickness of font (default: shown below)
  thickness: 2
  # Optional: Effect of lettering (default: shown below)
  #           None (No effect),
  #           "solid" (solid background in inverse color of font)
  #           "shadow" (shadow for font)
  effect: None

# Required
cameras:
  # Required: name of the camera
  back:
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      inputs:
        # Required: the path to the stream
        # NOTE: Environment variables that begin with 'FRIGATE_' may be referenced in {}
        - path: rtsp://admin:[email protected]:80/ch0_0.264
          # Required: list of roles for this stream. valid values are: detect,record,rtmp
          # NOTICE: In addition to assigning the record and rtmp roles,
          # they must also be enabled in the camera config.
          roles:
            - detect
            - rtmp
            - record
          # Optional: stream specific global args (default: inherit)
          # global_args:
          # Optional: stream specific hwaccel args (default: inherit)
          # hwaccel_args:
          # Optional: stream specific input args (default: inherit)
          # input_args:
      # Optional: camera specific global args (default: inherit)
      # global_args:
      # Optional: camera specific hwaccel args (default: inherit)
      # hwaccel_args:
      # Optional: camera specific input args (default: inherit)
      # input_args:
      # Optional: camera specific output args (default: inherit)
      # output_args:

    # Optional: timeout for highest scoring image before allowing it
    # to be replaced by a newer image. (default: shown below)
    best_image_timeout: 60

    # Optional: zones for this camera
    zones:
      # Required: name of the zone
      # NOTE: This must be different than any camera names, but can match with another zone on another
      #       camera.
      front_steps:
        # Required: List of x,y coordinates to define the polygon of the zone.
        # NOTE: Coordinates can be generated at https://www.image-map.net/
        coordinates: 545,1077,747,939,788,805
        # Optional: List of objects that can trigger this zone (default: all tracked objects)
        objects:
          - person
          - dog
          - cat
        # Optional: Zone level object filters.
        # NOTE: The global and camera filters are applied upstream.
        filters:
          person:
            min_area: 5000
            max_area: 100000
            threshold: 0.7

    # Optional: Configuration for the jpg snapshots published via MQTT
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: True
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: True
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: True
      # Optional: crop the snapshot (default: shown below)
      crop: True
      # Optional: height to resize the snapshot to (default: shown below)
      height: 270
      # Optional: jpeg encode quality (default: shown below)
      quality: 70
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      required_zones: []

Hi guys, just updated to v0.9.1 and migrated my Home Assistant from a Pi to a T530 thin client, and I’m getting this error with ffmpeg.
Nevermind, it was just a hardware acceleration config issue.

I’ve been meaning to check Frigate out for a while but hadn’t got around to it. I saw the recent Linus Tech Tips video (https://youtu.be/B635wcdr6-w) and decided to give it a go.

Wow! This is so impressive. I’ve been using a couple of other image processing integrations in HA, but Frigate deals nicely with a couple of shortcomings I felt they had, as good as they were.

One question though: is there any way of forcing detection to occur, or can it only be triggered by motion?

My favourite use case for this type of integration is to keep my patio lights on when I’m sat outside. I’d like to start a timer when the number of ‘person’ objects goes above zero; then, when the timer has expired, if the number of person objects has dropped to zero because of a lack of motion, I’d like to force detection. If it remains at zero, switch off the lights; otherwise, restart the timer.

Thanks in advance.

Is it no longer possible to record clips from a camera’s main stream while also recording 24/7 and detecting from its sub-stream after the latest update?
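
It should still work the same way: give the camera one input per stream and split the roles between them, along these lines (stream URLs are placeholders):

cameras:
  back:
    ffmpeg:
      inputs:
        # Main stream: full resolution, used for recordings
        - path: rtsp://user:[email protected]:554/main
          roles:
            - record
        # Sub-stream: lower resolution, used for detection
        - path: rtsp://user:[email protected]:554/sub
          roles:
            - detect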

Thanks. I was sure I was missing something; I just couldn’t figure out where I was missing it from.

Is there any config option to make Birdseye the default view rather than Cameras when opening up Frigate?

I am loving having Birdseye up in the corner of a secondary display when working at home. My eyes only get drawn to it when a tracked object is detected.

Motion detection is always happening, so I don’t think you need to force detection. You just need something that resets your timer every time motion IS detected. There is an easy way to do this; I unfortunately don’t have a tested example, but I know it can be done for all kinds of automations.
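
Something along these lines might work as a starting point (an untested sketch; binary_sensor.patio_person_occupancy, timer.patio_lights and light.patio are placeholders for whatever entities your Frigate integration and helpers actually create):

automation:
  # (Re)start the timer whenever a person is detected on the patio
  - alias: Patio person detected
    trigger:
      - platform: state
        entity_id: binary_sensor.patio_person_occupancy
        to: "on"
    action:
      - service: timer.start
        target:
          entity_id: timer.patio_lights
  # When the timer finishes and nobody is detected anymore, switch the lights off
  - alias: Patio timer finished
    trigger:
      - platform: event
        event_type: timer.finished
        event_data:
          entity_id: timer.patio_lights
    condition:
      - condition: state
        entity_id: binary_sensor.patio_person_occupancy
        state: "off"
    action:
      - service: light.turn_off
        target:
          entity_id: light.patio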

Go to it directly?

http://[ip]:5000/birdseye

Coming back in a future update

I have something similar, with perimeter person detection. Working perfectly.
If a person is detected, it sounds the alarm. I am doing it with Node-RED and MQTT.
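
If anyone wants the same without Node-RED, roughly the equivalent can be driven straight off the frigate/events MQTT topic from Home Assistant (a sketch; the camera name 'perimeter' and switch.alarm_siren are placeholders):

automation:
  - alias: Perimeter person alarm
    trigger:
      - platform: mqtt
        topic: frigate/events
    condition:
      - condition: template
        value_template: >
          {{ trigger.payload_json['after']['label'] == 'person'
             and trigger.payload_json['after']['camera'] == 'perimeter' }}
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.alarm_siren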

If you use local HTTP without HTTPS, then you can make a menu option with an iframe, and it will forward directly to
http://[ip]:5000/birdseye
as mentioned by @decoy654.
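
For reference, the menu entry is a panel_iframe block in configuration.yaml, something like this (the IP is a placeholder):

panel_iframe:
  birdseye:
    title: Birdseye
    icon: mdi:cctv
    url: http://192.168.1.10:5000/birdseye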

That is a good idea. I front mine with NGINX, but local HTTP will work for internal use.

Why do I say local HTTP? Because HTTPS didn’t work for the Frigate iframe. Lost a few hours figuring that out :smiley:

You can use the newest version of the Frigate Lovelace Card from HACS to view Birdseye and add it anywhere in your Lovelace dashboards.

Hi, thanks for the reply. I understand that motion detection is always happening, but actual motion isn’t necessarily always happening in the area the camera is looking at. I should have asked if there is a way to force object detection.

If I don’t move for a few seconds, the number of objects detected will return to zero even though I’m still there; if this happens at the end of the timing cycle, the lights will go out. If I could force object detection just before my lights are about to switch off, that would solve the problem.

I guess for now an improvement to my automation would be to start the timer when the number of detected objects returns to zero rather than when it goes above zero.

Okay, good to hear.