Local realtime person detection for RTSP cameras

Is there any way to use camera streams directly from Home Assistant? My cameras are on a separate network with a computer running Blue Iris, preventing me from configuring Frigate to access their RTSP streams directly. However, I am able to access the Blue Iris streams as a Home Assistant camera component.

I have tried using a path of:

http://[USER NAME]:[PASSWORD]@[BLUE IRIS IP ADDRESS:PORT]/h264/[CAMERA SHORT NAME]/temp.m3u8

This path works for streaming in Home Assistant, but does not work in Frigate. Am I out of luck so long as Home Assistant and the cameras are on a separate network?

Is it possible to use the cameras' built-in motion detection instead of Frigate's?
I'm trying to figure out how to minimize CPU usage…

Would be great if we could trigger motion through MQTT…

No, I've never seen an option like that.

It should work with the right ffmpeg parameters. The default parameters are expecting RTSP feeds.
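For reference, a sketch of what such an override might look like, assuming a Blue Iris HLS path and the 0.8-era Frigate config schema. The camera name, address, and the exact argument list are placeholders to adapt; the key idea is replacing the RTSP-oriented default `input_args` for an HTTP/HLS feed:

```yaml
cameras:
  blueiris_cam:            # hypothetical camera name
    ffmpeg:
      # Drop flags that only apply to RTSP inputs
      # (e.g. -rtsp_transport tcp, -stimeout) from the defaults.
      input_args:
        - -avoid_negative_ts
        - make_zero
        - -fflags
        - +genpts+discardcorrupt
        - -use_wallclock_as_timestamps
        - '1'
      inputs:
        - path: http://USER:PASSWORD@BLUE_IRIS_IP:PORT/h264/CAM/temp.m3u8
          roles:
            - detect
```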

Hi guys! I tried searching the thread for a similar issue but wasn't able to find one.
I have installed the custom integration (I run Home Assistant in Docker); however, the Media Browser that is set up by Frigate does not show any picture or video.

(If I go through the filesystem, I can see there are files there).
I don't find any errors in the log or in the console.

Anyone faced this issue?

Some Chinese DVR systems implement this alarm server on default port 15002.
Here is an implementation to get MQTT binary sensors working in HA.

I wrote a simple Python program to connect to ONVIF and receive the motion events from the camera. This is working well with my cameras. But I found that even if I use this to enable/disable "detect" in Frigate, it does not lower the CPU usage. So some more extensive changes are needed in Frigate to make something like this work.
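A minimal sketch of the event-parsing side of such a program, assuming a PullPoint-style ONVIF notification whose motion rule reports an `IsMotion` simple item (a common shape for CellMotionDetector events; the camera connection itself, e.g. via python-onvif-zeep, is omitted):

```python
import xml.etree.ElementTree as ET

# Assumption: the camera's notification payload carries a SimpleItem
# named "IsMotion" in the ONVIF schema namespace, as CellMotionDetector
# events typically do. Real payloads vary by vendor.
TT = "http://www.onvif.org/ver10/schema"

def motion_state(notification_xml: str):
    """Return True/False for the IsMotion flag in a notification, or None."""
    root = ET.fromstring(notification_xml)
    for item in root.iter(f"{{{TT}}}SimpleItem"):
        if item.get("Name") == "IsMotion":
            return item.get("Value") == "true"
    return None

# Example payload shaped like a CellMotionDetector event
sample = """<wsnt:NotificationMessage
    xmlns:wsnt="http://docs.oasis-open.org/wsn/b-2"
    xmlns:tt="http://www.onvif.org/ver10/schema">
  <wsnt:Message>
    <tt:Message>
      <tt:Data>
        <tt:SimpleItem Name="IsMotion" Value="true"/>
      </tt:Data>
    </tt:Message>
  </wsnt:Message>
</wsnt:NotificationMessage>"""

print(motion_state(sample))  # True
```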

Thanks for the info, but the intention was to make Frigate skip its own motion detection and let the cameras "signal" to Frigate that motion is on…

In other words, skip internal motion detection (hence save some CPU) and rely on the cameras' built-in motion detection. When motion is detected it's easy to start object detection by sending this to Frigate via MQTT…
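As a sketch of that MQTT side: later Frigate releases expose a per-camera toggle at `frigate/<camera>/detect/set` with `ON`/`OFF` payloads (verify the topic against your version). The helper below just builds the topic/payload pair; publishing it with a real client such as paho-mqtt is left as a comment:

```python
# Assumption: Frigate's per-camera detect toggle topic is
# frigate/<camera>/detect/set with payload ON/OFF, as in newer releases.
def detect_toggle(camera: str, enable: bool):
    """Return the (topic, payload) pair for toggling Frigate detection."""
    return (f"frigate/{camera}/detect/set", "ON" if enable else "OFF")

# With a real broker you would publish it, e.g. with paho-mqtt:
#   client.publish(*detect_toggle("driveway", True))

print(detect_toggle("driveway", True))  # ('frigate/driveway/detect/set', 'ON')
```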

Interesting info… So in other words, the CPU is taken up by camera streams?

I guess even when detect is not running, the ffmpeg process is still running. Also it seems like the motion detection is running as well. These processes are what take the CPU.

The biggest gain would be from configuring ffmpeg to not decode the video for motion detection. But this would need to be turned back on quickly to process the video for objects once motion is detected.

As far as I know, the ffmpeg stream is decoded in real time whether object detection is happening or not. It has to be because how else could Frigate look for motion without first decoding the incoming stream??

If detect and recording are turned off, why does it need ffmpeg running?

I cannot seem to get rotation working. I have attempted different solutions throughout the thread and nothing works.

Here is an example of what should work, but when I do the same, I get green boxes and errors in the log like:

frigate    | ffmpeg.BackGD.detect           ERROR   : Error opening input files: Invalid argument
frigate    | frigate.video                  INFO    : BackGD: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate    | frigate.video                  INFO    : BackGD: ffmpeg process is not running. exiting capture thread...
frigate    | ffmpeg.BackGD.detect           ERROR   : Option vf (set video filters) cannot be applied to input url rtsp://admin:[email protected]:554/user=admin_password=mxLXdLLw_channel=1_stream=0.sdp?real_stream -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.
frigate    | ffmpeg.BackGD.detect           ERROR   : Error parsing options for input file rtsp://admin:[email protected]:554/user=admin_password=mxLXdLLw_channel=1_stream=0.sdp?real_stream.
frigate    | ffmpeg.BackGD.detect           ERROR   : Error opening input files: Invalid argument
frigate    | frigate.video                  INFO    : BackGD: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate    | frigate.video                  INFO    : BackGD: ffmpeg process is not running. exiting capture thread...
(the same three-error cycle repeats)

or

frigate | Error parsing config: expected a dictionary for dictionary value @ data['cameras']['BackGD']['ffmpeg']['output_args']

This is my config excerpt:

  BackGD:
    ffmpeg:
      output_args:
        - -vf
        - hflip
      inputs:
        - path: rtsp://admin:[email protected]:554/user=admin_password=mxLXdLLw_channel=1_stream=0.sdp?real_stream
          roles:
            - detect
            - clips
            - rtmp
    width: 1920
    height: 1080
    fps: 5
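For what it's worth, the "expected a dictionary" error suggests the schema wants `output_args` keyed by role rather than given as a flat list. A sketch of that shape, assuming the 0.8.x defaults for the detect role; the `-vf` placement under a role is the part being illustrated, not a verified rotation recipe:

```yaml
  BackGD:
    ffmpeg:
      output_args:
        # per-role mapping, not a flat list; keep Frigate's default
        # detect args and append the filter
        detect: -f rawvideo -pix_fmt yuv420p -vf hflip
      inputs:
        - path: rtsp://admin:[email protected]:554/user=admin_password=mxLXdLLw_channel=1_stream=0.sdp?real_stream
          roles:
            - detect
            - clips
            - rtmp
    width: 1920
    height: 1080
    fps: 5
```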

If those are turned off, then why run Frigate at all?

I'm not sure you are understanding my posts; this is the second (third?) time you have replied with a negative comment.

The idea would be to turn on detection when the camera detects motion, but otherwise Frigate will just be waiting for a camera to trigger the object detection. If Frigate were not running at all there would be no way to start the object detection.

Now I imagine that ffmpeg could be left running but just buffering the video for a potential clip. ffmpeg would not need to be continuously decoding the stream.

What you are asking for doesn't currently exist, yet you keep asking if it does.

Put in a feature request on GH.

This is one of the ways the DOODS and Deepstack integrations/add-ons work.
I have used both: the integration for my camera detects motion and then calls the image processing service, which runs detection on the camera's current frame.
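A rough sketch of that pattern as a Home Assistant automation (entity ids are hypothetical; the shape is: motion sensor triggers, then `image_processing.scan` runs on the current frame):

```yaml
automation:
  - alias: "Scan driveway on camera motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.driveway_motion          # hypothetical sensor
        to: "on"
    action:
      - service: image_processing.scan
        data:
          entity_id: image_processing.deepstack_driveway  # hypothetical entity
```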


I have spent the day tweaking the input_args to no avail. Is there an explainer somewhere that I could use to fine-tune my efforts? The documentation doesn't go into detail about the input_args.

I seem to have a "dead zone" with my person detection. If anyone has some tips that could help, I would appreciate it!
Camera: [IPC-T5442TM-AS-LED]

Here is a video

Here is the config for the camera

##################################
  Driveway:
    ffmpeg:
      inputs:
        - path: >-
            rtsp://USER:PASSWORD@IPADDRESS/cam/realmonitor?channel=1&subtype=1
          roles:
            - rtmp
        - path: >-
            rtsp://USER:PASSWORD@IPADDRESS:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - clips
            - detect            
#Driveway zone
    zones:
      CarsandYard:
        coordinates: 2401,1520,2372,1391,2272,858,2120,791,1991,732,1277,753,691,717,234,782,0,835,0,1520
    clips:
      enabled: true
      pre_capture: 10
      post_capture: 10
      objects:
        - person
        - car
    objects:
      track:
        - person
        - car
      filters:
        car:
          min_area: 5000
          max_area: 100000
          min_score: 0.35
          threshold: 0.74        
          mask: 
              - 802,623,609,920,555,1101,635,1249,851,1310,1119,1296,1296,837,1348,658,1303,463,988,444
              - 1594,463,1820,470,1844,513,1905,609,1999,1112,2006,1413,1834,1520,1331,1493,1228,1380,1425,466
              - 1738,0,1766,158,1543,165,1418,167,1413,0
              - 837,346,1402,282,1343,190,503,235,461,369
              - 2316,0,2648,278,2305,268,2039,38
              - 2563,58,2565,125,1784,118,1759,39
        person:
          min_area: 5000
          max_area: 100000
          min_score: 0.25
          threshold: 0.72     
          mask:
              - 908,362,927,501,1051,440,1051,350
#Driveway Detect Resolution
    width: 2688
    height: 1520
    fps: 7
#Driveway advanced motion settings
    motion:
       threshold: 16
       contour_area: 90
       delta_alpha: 0.25
       frame_alpha: 0.20
       frame_height: 300
       
#Driveway motion mask       
       mask:
        - 0,0,1030,0,1131,200,626,510,482,593,299,651,0,741
        - 2688,764,2589,800,2465,922,2352,1225,2326,1520,2688,1520
        - 2563,58,2565,125,1784,118,1759,39

Not sure about the time, it works fine on mine. I'm in GMT and running on HassOS.

As for false positives, I've reduced mine by playing around with the object max and min sizes. My partner didn't take too well to being classified as 81% dog either :laughing: