Local realtime person detection for RTSP cameras

Thanks. I will look into it.

I’m in this camp as well. Prior to Blake’s cobbling this incredible platform together, NVR / object detection for me involved:

  • BlueIris for recording + motion detection + mjpeg streaming
  • Rob’s Rekognition component for object detection
  • NodeRed for BI’s detection alerts, calling Rekognition, and subsequent interpretation/response

I think the thing holding me back from deleting that BI VM is Frigate’s 60s recordings limitation. I’m guessing 60s is partly to limit cache size, and typically when you want to go back and look at something, you have some idea of approximately when the target event took place. But it seems difficult to navigate large spans of time across so many individual files. Cobbling the .mp4s together outside of Frigate and presenting them elsewhere is an option I might trend towards, but for me at least it would be nice if the Frigate-based recordings structure (and its subsequent presentation via HA) more closely followed out-of-the-box NVR software.

The recordings are in 60s segments to prevent data loss. If/when ffmpeg loses connection to your camera, the process exits and the resulting file is often corrupted. Using 60s segments limits the loss to at most a minute of footage. There are a few possible solutions here. There may be a way to ensure ffmpeg exits cleanly without losing the rest of the file, but I have never bothered to look. I could also look at appending these 60s segments to a single file or merging them on some interval. Perhaps the best solution would be to create the mp4 dynamically on the fly and chain the segments together. That way it's just a continuous video stream and you can seek wherever you want. I may end up building a custom panel for Home Assistant (not using ingress) that has a much richer interface for viewing historical footage.

To be honest though, in the several years I have been using ffmpeg to save 24/7 video as 60s segments, I can count on one hand the number of times I have gone back and reviewed the footage. I feel like that time is better spent improving the realtime analysis and other features.
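The "merge them on some interval" idea above can be sketched with ffmpeg's concat demuxer, which remuxes the 60s files into one mp4 without re-encoding. This is only an illustrative sketch, not Frigate code; the directory layout and file naming are assumptions:

```python
import subprocess
from pathlib import Path


def write_concat_list(segment_dir: str, list_path: str) -> list[str]:
    """Collect the 60s .mp4 segments in filename (timestamp) order and
    write an ffmpeg concat-demuxer list file: one "file '<path>'" per line."""
    segments = sorted(Path(segment_dir).glob("*.mp4"))
    lines = [f"file '{seg.as_posix()}'" for seg in segments]
    Path(list_path).write_text("\n".join(lines) + "\n")
    return lines


def merge_segments(list_path: str, output_path: str) -> None:
    """Remux the listed segments into one file; -c copy avoids re-encoding,
    so merging an hour of footage takes seconds."""
    subprocess.run(
        ["ffmpeg", "-f", "concat", "-safe", "0",
         "-i", list_path, "-c", "copy", output_path],
        check=True,
    )
```

Frigate's actual segment paths and names may differ; the point is only that concat-demuxer merging is cheap because it copies packets rather than decoding them.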

Just wanted to thank Blake for his amazing work on this. Prior to using Frigate, I was recording from 3 cameras on motion detection, which was next to useless due to them being triggered by clouds/ moving trees etc. Now every clip is relevant and so much more useful! Really appreciate the hard work you have put in.

I’m trying to enable HW acceleration for my 10th Gen processor, but I don't see any drop in CPU usage. Also, intel_gpu_top shows nothing.
I have the same config:

ffmpeg:
  hwaccel_args:
    - -hwaccel
    - qsv
    - -qsv_device
    - /dev/dri/renderD128

@davidvf, which OS do you have?

Good catch, I didn’t know about the intel_gpu_top command. But indeed, for me it does not show anything either.
I am running:

# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.10 (Groovy Gorilla)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.10"
VERSION_ID="20.10"

EDIT:
It looks like if I change the command to the one below, I start seeing movement in intel_gpu_top:

ffmpeg -hwaccel qsv -qsv_device /dev/dri/renderD128 -c:v h264_qsv -i rtsp://192.168.0.5:7447/16gU54OAVReFohX3 -c:v h264_qsv hwaccel-test.mp4

I will try to get that in my frigate config.yml tonight…

So basically I have added the following to the input_args and the output_args:
-c:v h264_qsv

Hi…
Using rc-3 and it works great, but… I seem to have an issue with masks.
I’ve made a mask that covers areas where I don’t want any kind of detection.


However, when I check the events, I see detections all the time…

This is my config file:

 mot_parkering:
    ffmpeg:
      inputs:
        - path: >-
            rtmp://192.168.1.110/bcs/channel0_main.bcs?channel=0&stream=0&user=XXX
          roles:
            - clips
            - rtmp
        - path: >-
            rtmp://192.168.1.110/bcs/channel0_sub.bcs?channel=0&stream=0&user=XXX
          roles:
            - detect
    width: 640
    height: 480
    fps: 5
    detect:
      enabled: true
      max_disappeared: 25
    clips:
      enabled: true
    motion:
      mask:
        - '640,0,640,260,545,215,466,186,511,97,353,59,102,119,0,158,0,0'

Have I missed something?

You also have to define the mask under the filters for each object type. See the post here.

There are two types of masks available:

  • Motion masks: Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the video feed with Motion Boxes enabled to see what may be regularly detected as motion. For example, you want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. Over-masking will make it more difficult for objects to be tracked. To see this effect, create a mask, and then watch the video feed with Motion Boxes enabled again.

  • Object filter masks: Object filter masks are used to filter out false positives for a given object type. These should be used to filter any areas where it is not possible for an object of that type to be. The bottom center of the detected object’s bounding box is evaluated against the mask. If it is in a masked area, it is assumed to be a false positive. For example, you may want to mask out rooftops, walls, the sky, and treetops for people. For cars, masking locations other than the street or your driveway will tell Frigate that anything in your yard is a false positive.

These are different because the technical implementation of these masks is completely different. One blacks out the image for motion detection and the other is used to evaluate whether or not a given point is within the polygon.
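To illustrate the second evaluation, here is a rough sketch of the object-filter-mask check. This is not Frigate's actual code, just a standard ray-casting point-in-polygon test applied to the bottom center of a (made-up) bounding box:

```python
def point_in_polygon(x: float, y: float,
                     polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: count how many polygon edges a horizontal ray
    from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def is_false_positive(bbox: tuple[int, int, int, int],
                      mask_polygon: list[tuple[float, float]]) -> bool:
    """bbox is (x_min, y_min, x_max, y_max); the bottom center of the
    box is the point evaluated against the mask polygon."""
    x_min, y_min, x_max, y_max = bbox
    bottom_center = ((x_min + x_max) / 2, y_max)
    return point_in_polygon(*bottom_center, mask_polygon)
```

The bottom center is a sensible anchor because for people and cars it roughly corresponds to where the object touches the ground, which is what you are really masking.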

Boom! https://blakeblackshear.github.io/frigate/

Thanks @paulstronaut

@ebendl I also have these small live streams in HA, as the HA stream component is not an alternative for me. It needs ~5 sec to start the live stream in Lovelace, which is not acceptable.

I found a workaround for this. You need to edit custom_components/frigate/camera.py, line 51:

from

self._latest_url = urllib.parse.urljoin(self._host, f"/api/{self._name}/latest.jpg?h=277")

to:

self._latest_url = urllib.parse.urljoin(self._host, f"/api/{self._name}/latest.jpg")

Then you get the original RTMP resolution in HA, the same as if you were viewing the stream from Frigate.

Does it work for you?
I tried this; it works when run manually. But with Frigate, if I add -c:v h264_qsv to the input, I see the following errors:

ffmpeg.garage.detect           ERROR   : Input #0, rtsp, from 'rtsp://192.168.0.129:554/cam/ch0_0.h264':
ffmpeg.garage.detect           ERROR   :   Metadata:
ffmpeg.garage.detect           ERROR   :     title           : Session streamed by "ISD RTSP Server"
ffmpeg.garage.detect           ERROR   :   Duration: N/A, start: 1611327783.865822, bitrate: N/A
ffmpeg.garage.detect           ERROR   :     Stream #0:0: Video: h264, yuvj420p(pc, bt709, progressive), 2560x1440 [SAR 1:1 DAR 16:9], 10 fps, 9.67 tbr, 90k tbn, 20 tbc
ffmpeg.garage.detect           ERROR   :     Stream #0:1: Audio: aac, 44100 Hz, mono, fltp
ffmpeg.garage.detect           ERROR   : Stream mapping:
ffmpeg.garage.detect           ERROR   :   Stream #0:0 -> #0:0 (h264 (h264_qsv) -> rawvideo (native))
ffmpeg.garage.detect           ERROR   : Press [q] to stop, [?] for help
ffmpeg.garage.detect           ERROR   : Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'
ffmpeg.garage.detect           ERROR   : Error reinitializing filters!
ffmpeg.garage.detect           ERROR   : Failed to inject frame into filter network: Function not implemented
ffmpeg.garage.detect           ERROR   : Error while processing the decoded data for stream #0:0
ffmpeg.garage.detect           ERROR   : Conversion failed!

Then I added -c:v h264_qsv to the output:

ffmpeg.garage.detect           ERROR   : Input #0, rtsp, from 'rtsp://192.168.0.129:554/cam/ch0_0.h264':
ffmpeg.garage.detect           ERROR   :   Metadata:
ffmpeg.garage.detect           ERROR   :     title           : Session streamed by "ISD RTSP Server"
ffmpeg.garage.detect           ERROR   :   Duration: N/A, start: 1611328556.271633, bitrate: N/A
ffmpeg.garage.detect           ERROR   :     Stream #0:0: Video: h264, yuvj420p(pc, bt709, progressive), 2560x1440 [SAR 1:1 DAR 16:9], 10 fps, 10 tbr, 90k tbn, 20 tbc
ffmpeg.garage.detect           ERROR   :     Stream #0:1: Audio: aac, 44100 Hz, mono, fltp
ffmpeg.garage.detect           ERROR   : Stream mapping:
ffmpeg.garage.detect           ERROR   :   Stream #0:0 -> #0:0 (h264 (h264_qsv) -> h264 (h264_qsv))
ffmpeg.garage.detect           ERROR   : Press [q] to stop, [?] for help
ffmpeg.garage.detect           ERROR   : Incompatible pixel format 'yuv420p' for codec 'h264_qsv', auto-selecting format 'nv12'
ffmpeg.garage.detect           ERROR   : Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'
ffmpeg.garage.detect           ERROR   : Error reinitializing filters!
ffmpeg.garage.detect           ERROR   : Failed to inject frame into filter network: Function not implemented
ffmpeg.garage.detect           ERROR   : Error while processing the decoded data for stream #0:0
ffmpeg.garage.detect           ERROR   : Conversion failed!

It looks like there are some problems due to the pixel format change. We probably need to add -vf hwdownload,format=nv12 or something like this (see https://trac.ffmpeg.org/wiki/Hardware/QuickSync, Full Examples).
I also found a similar issue for the HEVC (H.265) codec, but I'm not sure how to select the proper parameters for my case: https://trac.ffmpeg.org/ticket/7691

My current ffmpeg config

ffmpeg:
  global_args: -hide_banner -loglevel info
  hwaccel_args: -hwaccel qsv -qsv_device /dev/dri/renderD128
  input_args: -c:v h264_qsv -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1
  output_args: 
    detect: -c:v h264_qsv -f rawvideo -pix_fmt yuv420p
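For reference, here is a sketch of what the full decode pipeline with the hwdownload idea might look like, assembled as a command line in the style of the QuickSync wiki's examples. The device path, stream URL, and resolution are placeholders, and whether format=nv12 alone is sufficient depends on your driver:

```python
def qsv_detect_cmd(input_url: str, width: int, height: int) -> list[str]:
    """Assemble an ffmpeg command that decodes with h264_qsv, pulls
    frames back from GPU memory (hwdownload) as nv12, scales them, and
    emits the yuv420p rawvideo that Frigate's detect role expects
    (swscale handles the final nv12 -> yuv420p conversion)."""
    return [
        "ffmpeg",
        "-hwaccel", "qsv", "-qsv_device", "/dev/dri/renderD128",
        "-c:v", "h264_qsv",
        "-i", input_url,
        "-vf", f"hwdownload,format=nv12,scale={width}:{height}",
        "-f", "rawvideo", "-pix_fmt", "yuv420p",
        "pipe:",
    ]
```

In Frigate terms, the -vf portion would go into the detect output_args; the key point is that hardware-decoded frames must be downloaded and converted to a software pixel format before the rawvideo output.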

Hi @ekos2001

I have a different issue; have a look at the GitHub issue I have opened.

My issue is probably different because in my case I have cut off the detect portion.
I am currently focusing on just creating a clip.

Awesome! I’m glad the docs worked. I’ll PM you about getting search set up, if you haven’t looked into it already

@blakeblackshear
Can you please share the correct hwaccel arguments for an AtomicPi? I can’t find out what generation it is.
Is there also a way to verify that the correct driver is in use and that the GPU is actually used? I think it should work with intel_gpu_top?

Right now, I am using the following arguments, but I have had mixed luck… The AtomicPi sometimes just hangs. Not sure if it is related, but I want to make sure my Frigate setup is correct.

hwaccel_args:
    - -hwaccel
    - qsv
    - -hwaccel_device
    - /dev/dri/renderD128

Thanks

Is it possible to run the Docker container on another machine and still use the HASS integration?

Yes, that’s how I am doing it.

Run the Docker container wherever you want, and use HACS to integrate Frigate into HASS.

I used to run it locally on my NUC, but with no Coral it used about 80% of my CPU.
I’ve installed Frigate on my old i7 laptop, but the HASS integration is not able to find Frigate anymore. Does this mean that I have to create all of the sensors and cameras manually?

Edit: typo…

You’ll need to re-configure the integration in HA to point to the IP of your other machine

Alright… Now let’s find where… :slight_smile:
All help is appreciated.