Local realtime person detection for RTSP cameras

@Jon123 xD ahahaha :smiley:

Did you ever create a model for trash can detection? I was thinking about trying the same thing, but I have a lot to learn before I get that working.

Hey All,
I’m using 2 streams for Frigate: one to detect (low quality) and the other to record (high quality). When getting a snapshot, it always takes the snapshot from the low quality stream. Is there a way to get a high quality snapshot? I’m using them for face recognition…
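For context, my inputs look roughly like this (camera name and stream URLs are placeholders, not my real config):

cameras:
  front:
    ffmpeg:
      inputs:
        # low quality sub stream used for detection (snapshots come from this one)
        - path: rtsp://user:[email protected]:554/sub
          roles:
            - detect
        # high quality main stream used for recording
        - path: rtsp://user:[email protected]:554/main
          roles:
            - record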

@blakeblackshear There is a new Jetson called Orin that supports 22x 1080p30 (H.265) video decode: https://www.waveshare.com/jetson-agx-orin-developer-kit.htm It isn’t cheap, but it may be the best solution for a large number of cameras with ARM-based low power consumption. Will it work with Frigate in a way that can utilize the Orin’s hardware acceleration?

Maybe someone is able to point me in the right direction.

Just started with frigate and currently trying to get going with 0.11.0 beta (full access).
Got a Reolink Duo in the garden and added both cameras to my frigate instance on a Home Assistant OS, Rpi 4, Google Coral USB setup.

Everything is working well except for HW acceleration, and without HW acceleration the CPU usage of Frigate is at 90%.

The log says that Frigate cannot allocate the necessary GPU memory. I added gpu_mem=256 to my config.txt on the boot partition and also tried 512 MB, without success.

[2022-08-01 11:45:05] ffmpeg.reolinkduo2.detect      ERROR   : Error while decoding stream #0:0: Cannot allocate memory

Here is my current fresh frigate.yml:

mqtt:
  host: 192.168.1.21
  user: XXXX
  password: XXXX
  
detectors:
  coral:
    type: edgetpu
    device: usb
  
cameras:
  reolinkduo1:
    ffmpeg:
      inputs:
        - path: rtsp://admin:[email protected]:554/h264Preview_01_main
          roles:
            - detect
            - rtmp
      hwaccel_args: -c:v h264_v4l2m2m
    detect:
      width: 640
      height: 360
      fps: 7
    snapshots:
      enabled: True
      
      
  reolinkduo2:
    ffmpeg:
      inputs:
        - path: rtsp://admin:[email protected]:554/h264Preview_02_main
          roles:
            - detect
            - rtmp
      hwaccel_args: -c:v h264_v4l2m2m
    detect:
      width: 640
      height: 360
      fps: 7
    snapshots:
      enabled: True
      
objects:
  track:
    - person
    - dog
    - cat
  filters:
    person:
      threshold: 0.75

I also tried h264_mmal, because I am running a 32-bit instance on my RPi 4, but unfortunately without success. The log then shows another error, that h264_mmal cannot be found. I think it is not included in the latest Frigate beta / latest version of ffmpeg.

Thank you in advance; maybe someone has the right idea.

Edit: Full Access is also granted

I also have a problem with high CPU usage. From what I found on the net, the permanent solution for this problem is to use a Coral USB accelerator, but it is pretty expensive.

I’m using a Google Coral USB accelerator, which Frigate is successfully using for detection. I think my high CPU usage comes from decoding the streams without hardware acceleration.

I got robbed last night. I have downloaded the relevant clips, but what is the best way to get a slightly longer recording from the 24 hours of continuous recording I have set up?

Just looking for guidance on how best to download it/save it from being deleted.

Just helped someone with this the other day: https://www.reddit.com/r/homeassistant/comments/wbridl/frigate_help/ii8xvzg/


I’m receiving error messages in the Frigate logs when starting up. The message only appears when I add in object tracking; if I remove the objects to track, everything seems to work fine and I get the default person tracking. I wanted to add different objects since I would like to track cars, bikes, and other animals with my outdoor camera. I have posted my Frigate config below.
Please see the example:

mqtt:
  host: xxx.xxx.xxx.xxx
  user: xxxxxxxx
  password: xxxxxxxxxx
  
cameras:
        
  living_room_camera:
    ffmpeg:
      inputs:
        - path: rtsp://xxxxx:[email protected]:554/h264Preview_01_main
          roles:
            - detect
    detect:
        width: 2560
        height: 1920
        fps: 5
        objects:
          track:
            - person
            - dog
            - cat
snapshots:
  enabled: True
  clean_copy: True
  timestamp: True
  height: 175
  retain:
     default: 10
detectors:
  cpu1:
    type: cpu
  cpu2:
    type: cpu
*************************************************************
***    Your config file is not valid!                     ***
***    Please check the docs at                           ***
***    https://docs.frigate.video/configuration/index     ***
*************************************************************
*************************************************************
***    Config Validation Errors                           ***
*************************************************************
1 validation error for FrigateConfig
cameras -> living_room_camera -> detect -> objects
  extra fields not permitted (type=value_error.extra)
Traceback (most recent call last):
  File "/opt/frigate/frigate/app.py", line 312, in start
    self.init_config()
  File "/opt/frigate/frigate/app.py", line 77, in init_config
    user_config = FrigateConfig.parse_file(config_file)
  File "/opt/frigate/frigate/config.py", line 904, in parse_file
    return cls.parse_obj(config)
  File "pydantic/main.py", line 511, in pydantic.main.BaseModel.parse_obj
  File "pydantic/main.py", line 331, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for FrigateConfig
cameras -> living_room_camera -> detect -> objects
  extra fields not permitted (type=value_error.extra)
*************************************************************
***    End Config Validation Errors                       ***
*************************************************************

Remove “track:”
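If it helps, here is a rough sketch of where the objects section is expected to sit based on the validation error above: at the camera level rather than nested under detect (camera name and stream path taken from your config, other values unchanged):

cameras:
  living_room_camera:
    ffmpeg:
      inputs:
        - path: rtsp://xxxxx:[email protected]:554/h264Preview_01_main
          roles:
            - detect
    detect:
      width: 2560
      height: 1920
      fps: 5
    # objects moved out from under detect
    objects:
      track:
        - person
        - dog
        - cat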

I was able to figure it out. Thanks. Now trying to put together an automation that will send me the clip or snapshot of the motion detected for either camera. Found a blueprint but can’t get it to fire off the automation.

I use the hunterjm blueprint; it works well. I also modified it so I only get notifications if my door is locked, because if it’s unlocked it’s generally me.

I was robbed recently and have been monitoring frigate extremely closely since then with the intent of adding more cams. I already added one more and it is working well overall.
I am noticing several different issues that I’d love some advice on.
For each issue I will specify which brand of camera I am seeing it on. I have a Reolink 520 and a Wyze Cam v3.

  1. Some of my hour long segments are just a black screen but do include audio (reolink)
  2. Some of my clips are also black screens with audio (reolink)
  3. Some of my clips and recordings do not play (reolink)
  4. Some of my recordings have time jumps (an hour-long recording is only 40 or 50 minutes long, for example). The duration and camera time are synced until some point, and then the camera time and image jump 30s ahead of the playback duration; this must happen many times to lose 10-20 minutes. (reolink and wyzecam)
  5. When on the recordings page for a camera, clicking on a clip to jump to that point in the timeline jumps to the wrong time. An event might have occurred at 12:30:45, and clicking the snapshot jumps the recording to ~30:35, which at a glance seems as expected. However, presumably due to the earlier time jumps (issue 4), the event might already have ended or be halfway through, since the camera/clock time is later, so it should have jumped to an earlier point (the clip itself is correct from the events page).

Clips with problem 2 can actually be downloaded, and then they play with video.

There does not appear to be a discernible pattern, which makes this difficult to troubleshoot.


I tried that, but for some reason the blueprint will never trigger. I figured out how to get a basic trigger with the MQTT events. I can get the notification with the name of the trigger and camera, but the snapshot image never comes through. What would the correct URL be for the image to be sent to an iPhone?

You can see the required URLs within the blueprint if you open it in a text editor,

e.g. "{{base_url}}/api/frigate/notifications/{{id}}/{{camera}}/clip.mp4"

Still not able to get the images to come through. I’m wondering if Frigate is set up correctly, but everything else seems to be working just fine.
This is the script that I have to send the notification.
I have tried different variations of the URL during testing, but none of them are working for me.

data:
  message: >-
    A {{payload["after"]["label"]}} was detected in the
    {{payload['after']['camera']}} at {{now().strftime('%H:%M:%S')}}.
  data:
    image: >
      {{base_url}}/api/frigate/notifications/{{payload["after"]["id"]}}/snapshot.jpg

data:
  message: >-
    A {{payload["after"]["label"]}} was detected in the
    {{payload['after']['camera']}} at {{now().strftime('%H:%M:%S')}}.
  data:
    image: >
      http://ccab4aaf-frigate-beta:5000/api/frigate/notifications/{{payload["after"]["id"]}}/snapshot.jpg

data:
  message: >-
    A {{payload["after"]["label"]}} was detected in the
    {{payload['after']['camera']}} at {{now().strftime('%H:%M:%S')}}.
  data:
    image: https://*NABUCASAURL*.ui.nabu.casa/api/frigate/notifications/{{payload["after"]["id"]}}/snapshot.jpg
      

I am getting too many event detections. All are correct, but I would prefer to get a single notification. What is the right approach to reduce detection events from a camera? Should I increase max_disappeared?

  # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 25
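
For context, this is the nesting I am assuming for that option (per camera, under detect; camera name is a placeholder):

cameras:
  my_camera:
    detect:
      # consider an object gone after 25 frames without a detection
      max_disappeared: 25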

I think I found the issue. There is a type field in the event, and I can get the same detection with different types (new, update, end).
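
A rough sketch of what I plan to try, filtering on that type field so only the first message per tracked object fires a notification (the notify service name is a placeholder; payload fields are as I understand them from frigate/events):

- alias: "Single notification per Frigate event"
  trigger:
    - platform: mqtt
      topic: frigate/events
  condition:
    # only react to the first message for a tracked object
    - "{{ trigger.payload_json['type'] == 'new' }}"
    - "{{ trigger.payload_json['after']['label'] == 'person' }}"
  action:
    - service: notify.mobile_app_my_phone   # placeholder notify service
      data:
        message: >-
          A {{ trigger.payload_json['after']['label'] }} was detected on
          {{ trigger.payload_json['after']['camera'] }}.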

Do you have the frigate integration installed?

You need both the add-on and the integration (from HACS).

Yes, I have the Frigate add-on and the integration.
I will be playing with a different automation that may be more lengthy. Since I have 3 cameras, I would like to get a notification with not only the motion but what caused it (person, car, cat, dog, bear). I was going to use the camera entity that the integration made to take a snapshot of the motion captured from one of the cameras (camera.driveway.person), then have the snapshot saved to the www folder and send that image to my phone. This is the only way I can get an image sent over. If anyone has any other ideas I’d love to hear them, since this would be many automations and kind of clunky.