Local realtime person detection for RTSP cameras

@germplan I think you have the height and width reversed in your configuration. Also make sure they match the actual resolution of your RTSP feed.
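
For example, a landscape 1280x720 feed would need the camera-level keys the other way around from the sample - a minimal sketch, where the camera name and stream URL are placeholders:

    cameras:
      back:
        ffmpeg:
          input: rtsp://<redacted>
        # match the actual stream resolution (1280x720 landscape)
        width: 1280
        height: 720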


Derp, simple mistake. That fixed it!

The sample config should probably be fixed, but I see why it's like this: the sample image is also in portrait mode.

    # height: 1280
    # width: 720

@blakeblackshear
I guess I have set up my config wrong somehow, because it seems the mask entry in my config is not being used.

  entre:
    ffmpeg:
      input: rtsp://192.168.xxx.xxx:554/11
    take_frame: 4
    snapshots:
      show_timestamp: True
    mask: entre-mask.bmp
    objects:
      track:
        - person
        - car
      filters:
        person:
          min_area: 5000
          max_area: 100000
          threshold: 0.5

How can I fix that?

Oh no!!
I forgot to paint the area black... :flushed:
It did not work anyway, any tips?
All is OK now.

I continue to get the following a few days after restarting Frigate.
I don't see anything in the changelogs for 0.5.2, but I'll upgrade and keep an eye on it.

Any idea whatā€™s causing this?

2020-07-31T18:40:06.284485867Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 52 more times
2020-07-31T18:40:06.471209733Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 51 more times
2020-07-31T18:40:06.484978186Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 50 more times
2020-07-31T18:40:06.585596072Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 49 more times
2020-07-31T18:40:06.686453890Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 48 more times
2020-07-31T18:40:06.788592241Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 47 more times
2020-07-31T18:40:06.905000160Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 46 more times
2020-07-31T18:40:07.005189295Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 45 more times
2020-07-31T18:40:07.105407742Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 44 more times
2020-07-31T18:40:07.205702340Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 43 more times
2020-07-31T18:40:07.305787669Z /arrow/cpp/src/plasma/io.cc:168: Connection to IPC socket failed for pathname /tmp/plasma, retrying 42 more times

I'm seeing a single core maxed out. I even tried running multiple containers on different servers with decent Xeon processors.

How can I run the older CPU version? I may think about getting a Coral stick, but would like to try this first.

@germplan: Being far from an expert, a couple of thoughts:

  • How many fps does your camera produce? You specified 'take_frame: 1' in your config, which tries detection multiple times per second. Maybe you could skip some frames (see the sketch after this list).

  • When you say your CPU is maxed out - is that ffmpeg or detection? With the Coral device (my case) the inference part is offloaded - ffmpeg will still put a heavy load on my (tiny) CPU.

  • Did you try a low-res stream from your camera for a start?
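
Not authoritative, but a rough sketch of what I mean - the camera name and substream URL are made up, and take_frame: 5 would make Frigate only look at every 5th frame:

    cameras:
      door:
        ffmpeg:
          # point Frigate at the camera's low-res substream (example URL only)
          input: rtsp://192.168.xxx.xxx:554/Streaming/Channels/102
        # only process every 5th frame
        take_frame: 5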

ItsMee

While those steps would definitely lessen the load on that single core, I would like to utilize the whole CPU and not just one core. It's the object detection process that is using most of the compute time. I saw in this thread that an older version allows using all cores. How do I install that older version?

Something we have noticed: we are only tracking people, yet boxes are drawn over cars. As soon as this happens, CPU usage increases.

For example, if we start Frigate and a car is parked in the driveway, it will be fine for a while (minutes to hours) until motion happens, and then the cars start being tracked, even if the cars haven't moved. The only way to bring the CPU usage back down is to restart.

This adds up when you have a couple of cameras pointed at parked cars. For example, right now I am at 25% CPU. I just restarted Frigate and am now back to 8%. I am not tracking cars, but two are in the driveway x 2 cameras.

Edit: I just walked in front of the camera and it jumped to 13%, and both cars were re-detected. The CPU will slowly keep climbing with every motion. By tonight I should be around 25% again.

So, a couple of feature requests if possible:

  • Ignore objects if they haven't moved in some time
  • Don't track (draw boxes around) objects that are not included in the config

Thanks!

Can you post your config? It shouldn't be tracking cars unless they are in the config. Make sure you have a global object config defined. If you don't, it will track person, car, and truck by default. I will probably remove that default in the future, and skipping detection when there is no motion is already on my list.
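
For example, a minimal global objects section (at the top level of the config, not under a camera) that only tracks people could look like this - the filter values are just illustrative:

    objects:
      track:
        - person
      filters:
        person:
          min_area: 5000
          max_area: 100000
          threshold: 0.5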


That fixed it!

So my config didn't have a global objects section at all, only the camera-level ones, so I thought cars were not being tracked. I put the global objects section in without the car/truck settings, and the CPU is staying at 8% with a walk test.

As a test, before that change, I also moved my camera up a bit so it could see the road. Doing that, my CPU went from 8% to 35% in about 30 minutes. So I guess it tracks every car, and the CPU gets out of control fast. I did put a mask on the road, but it was still catching them. Maybe I needed a bigger mask; not sure.

I would like to have cars detected, but I will wait until stationary object detection is a thing.

Thanks again.

BTW, I moved my setup to a new machine (a Lenovo ThinkCentre M700 i5) running bare-metal Ubuntu 20.04, and I got this in the logs, as well as frequent "Restarting detection process" messages which I didn't have on the old i3 laptop:

F :1133] HandleQueuedBulkIn transfer in failed. Not found: USB transfer error 5 [LibUsbDataInCallback]

Googling around, it seems this means the Coral is not getting enough power. I had to move to a USB port that shows a little battery icon (assuming that means you can charge phones from it). It works now!

I also had trouble with the Coral not being picked up ("No EdgeTPU device detected") - after running the getting started instructions it worked (steps 1 - 3).

Edit: I can also say that USB 3 makes a massive difference over USB 2.

I'm trying out the "save_clips" image, and I added the new config entries to my camera config to save the clips.

Config for one camera:

cameras:
  front:
    ffmpeg:
      ################
      # Source passed to ffmpeg after the -i parameter. Supports anything compatible with OpenCV and FFmpeg.
      # Environment variables that begin with 'FRIGATE_' may be referenced in {}
      ################
      input: rtsp://<redacted>/Streaming/Channels/3
      #################
      # These values will override default values for just this camera
      #################
      # global_args: []
      # hwaccel_args: []
      # input_args: []
      # output_args: []
    

    ################
    # The expected framerate for the camera. Frigate will try and ensure it maintains this framerate
    # by dropping frames as necessary. Setting this lower than the actual framerate will allow frigate
    # to process every frame at the expense of realtime processing.
    ################
    fps: 2

    ################
    # This will save a clip for each tracked object by frigate along with a json file that contains
    # data related to the tracked object. This works by telling ffmpeg to write video segments to /cache
    # from the video stream without re-encoding. Clips are then created by using ffmpeg to merge segments
    # without re-encoding. The segments saved are unaltered from what frigate receives to avoid re-encoding.
    # They do not contain bounding boxes. 30 seconds of video is added to the start of the clip. These are
    # optimized to capture "false_positive" examples for improving frigate.
    #
    # NOTE: This will only work for camera feeds that can be copied into the mp4 container format without
    # encoding such as h264. I do not expect this to work for mjpeg streams, and it may not work for many other
    # types of streams.
    #
    # WARNING: Videos in /cache are retained until there are no ongoing events. If you are tracking cars or
    # other objects for long periods of time, the cache will continue to grow indefinitely.
    ################
    save_clips:
      enabled: True
      #########
      # Number of seconds before the event to include in the clips
      #########
      pre_capture: 30

    ################
    # Configuration for the snapshots in the debug view and mqtt
    ################
    snapshots:
      show_timestamp: False

    ################
    # Camera level object config. This config is merged with the global config above.
    ################
    objects:
      track:
        - person
        - car
      filters:
        person:
          min_area: 3300
          max_area: 16000
          threshold: 0.75        
        car:
          max_area: 20000
          threshold: 0.5  

I've also adjusted my docker-compose file to map the clips and cache folders:

version: '3.7'
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    privileged: true
    shm_size: '1g' # should work for 5-7 cameras
    image: blakeblackshear/frigate:save_clips-8e78760
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /etc/localtime:/etc/localtime:ro
      - /home/eben/Docker/frigate/config:/config
      - /home/eben/Docker/frigate/clips:/clips
      - /home/eben/Docker/frigate/cache:/cache
    ports:
      - "5000:5000"

The cache has 3 .mp4 files (one for each camera), which are all increasing in size, but I don't see anything in the clips folder. In the logs I see this:

[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55623137f4c0] moov atom not found
/cache/backyard-20200805094855.mp4: Invalid data found when processing input
bad file: backyard-20200805094855.mp4
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x5568ed73d4c0] moov atom not found
/cache/cars-20200805094848.mp4: Invalid data found when processing input
bad file: cars-20200805094848.mp4
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x565462c2f4c0] moov atom not found
/cache/front-20200805094840.mp4: Invalid data found when processing input
bad file: front-20200805094840.mp4

Are my cameras supplying bad data?

Is that your entire config? What are your ffmpeg parameters? -vsync drop prevents the segment feature from working, and your cameras must output a format that can be copied into the mp4 container natively - Frigate doesn't decode and re-encode for the cache.

Nope, not the whole config. Here are the ffmpeg params:

web_port: 5000

mqtt:
  host: <redacted>
  topic_prefix: frigate
#  client_id: frigate # Optional -- set to override default client id of 'frigate' if running multiple instances
  user: <redacted> # Optional -- Uncomment for use
  password: <redacted> # Optional -- Uncomment for use

#################
# Default ffmpeg args. Optional and can be overwritten per camera.
# Should work with most RTSP cameras that send h264 video
# Built from the properties below with:
# "ffmpeg" + global_args + input_args + "-i" + input + output_args
#################
ffmpeg:
   global_args:
     - -hide_banner
     - -loglevel
     - panic
   hwaccel_args: 
     - -hwaccel
     - vaapi
     - -hwaccel_device
     - /dev/dri/renderD128
     - -hwaccel_output_format
     - yuv420p
   input_args:
     - -avoid_negative_ts
     - make_zero
     - -fflags
     - nobuffer
     - -flags
     - low_delay
     - -strict
     - experimental
     - -fflags
     - +genpts+discardcorrupt
     - -vsync
     - drop
     - -rtsp_transport
     - tcp
     - -stimeout
     - '10000000'
     - -use_wallclock_as_timestamps
     - '1'
   output_args:
#     - -vf
#     - mpdecimate
     - -f
     - rawvideo
     - -pix_fmt
     - rgb24

I don't do anything per camera.

So yes, I see I do use the vsync drop. I removed it and immediately noticed that it is now creating multiple clips in the cache folder instead of one big clip per camera.
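
For anyone following along, the change was simply deleting the -vsync / drop pair from input_args; the trimmed list looks like this:

    input_args:
      - -avoid_negative_ts
      - make_zero
      - -fflags
      - nobuffer
      - -flags
      - low_delay
      - -strict
      - experimental
      - -fflags
      - +genpts+discardcorrupt
      # -vsync drop removed so ffmpeg can write usable mp4 segments to /cache
      - -rtsp_transport
      - tcp
      - -stimeout
      - '10000000'
      - -use_wallclock_as_timestamps
      - '1'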

Is blakeblackshear/frigate:save_clips-8e78760 the best image to use for this?

EDIT: OK, it is now working - thanks Blake!

Quick question - when does the cache clear up? I see it expand on pretty much all my cameras, even the ones with very little movement. I do see it clears up, though.

Frigate keeps the cache around as long as there is an object being tracked within 90 seconds of the video clip. It does not handle long-running events at all, so if you had a car being tracked for hours (even stationary), it would retain the cache for hours and ultimately assemble a clip hours long for that car before removing it from the cache.

Is it possible to set up the Frigate config to save snapshots to the www folder so I can use them with notify services?

Maybe there's a better way, but I've created an action that saves the snapshot prior to the notification.

- id: '1585636351359'
  alias: NOTIFY Front Door Person
  description: ''
  trigger:
  - entity_id: binary_sensor.front_door_camera_motion
    platform: state
    to: 'on'
  condition: []
  action:
  - data:
      filename: /tmp/snapshot_front_door_camera_last_person.jpg
    entity_id: camera.front_door_camera_last_person
    service: camera.snapshot
  - data:
      data:
        images:
        - /tmp/snapshot_front_door_camera_last_person.jpg
      message: Person detected at front door
      title: person
    service: notify.email

Since my Frigate instance is on a completely separate host, saving the snapshot to the HA config directory wouldn't be an option.

Thanks for the tip, but for iOS it seems that it must be a URL.
So here is a working action after some trial and error (see the note on the snapshot path after the example).

        - service: notify.mobile_app_jonny_iphone_xr
          data:
            title: ""
            message: "A person was detected."
            data:
              attachment:
                url: "https://xxxxx.nabu.casa/local/tmp/door_last_person.jpg"
                content-type: png
                hide-thumbnail: false
              push:
                badge: 0
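
One note on the path: the /local/ URL above maps to the config/www folder of the Home Assistant instance, so the camera.snapshot action has to write there. A sketch of that action, following the earlier front-door example - the entity name is a guess, and the folder may need to be added to whitelist_external_dirs:

    - data:
        # /config/www/... is served by Home Assistant at /local/...
        filename: /config/www/tmp/door_last_person.jpg
      entity_id: camera.door_last_person
      service: camera.snapshot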

@ebendl, @blakeblackshear: I really like the idea of using Frigate for object detection while keeping a 'real' high-res video of the detected objects.

Challenges:

  • Requires permanent recording for 'lookback' functionality
  • The HA stream component isn't really stable for me - I guess that would be the right way. Is that just me?
  • Potentially Frigate only gets a 'low-res' stream for performance reasons - the saved video should be high-res.
  • I didn't see save_clips earlier (cool) - but from your description I'm not sure it's a perfect fit.
  • The save_clips videos didn't play smoothly for my Foscam / Reolink - blakeblackshear said that might be down to how they are recorded.

Alternate approach:

  • Start permanent 120-second recordings, e.g. with openRTSP, which doesn't seem to involve transcoding (<2% CPU)
  • Start a new one e.g. every 60 seconds, and keep a 10-minute history
  • Use a network share or memory (/dev/shm) - consider the implications for required memory / disk wear
  • Have those recordings, e.g. <camname_date_%H%M.mpg>, available via HTTP or NAS mount
  • Create an automation on the Frigate notification - e.g. send via Telegram, store to NAS, ... (see the sketch after this list)
  • Logic to pick the right recording to be discussed, e.g. rounddown(trigger.ts, min)
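
For the automation step, a rough sketch of what I have in mind - the binary_sensor name, notify target, and recording URL scheme are all assumptions on my side, and the template simply truncates the timestamp to the minute as a stand-in for the rounddown logic:

    - alias: Send latest door recording on person detection
      trigger:
        - platform: state
          entity_id: binary_sensor.door_camera_person
          to: 'on'
      action:
        - service: notify.mobile_app_jonny_iphone_xr
          data_template:
            message: >-
              Person detected at the door; latest recording:
              http://nas.local/recordings/door_{{ now().strftime('%H%M') }}.mpg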

When adding openRTSP to the Frigate container, I'm able to store the recordings within the container. The save_clips output didn't play well for me; this does, since it starts with the first keyframe.

cat Dockerfile

FROM blakeblackshear/frigate:dev
RUN apt-get update
RUN apt-get -y install livemedia-utils

And the recording script:

#! /bin/bash

CAM=door
DATE=$(date '+%Y-%d-%m_%H:%M:%S')
openRTSP -4 -d 120 -B 10000000 -b 10000000 'rtsp://user:[email protected]:554/h264Preview_01_main' > ${CAM}_${DATE}.avi

I'm still trying to get my head around how best to do this - maybe something more generic with the stream component is better, not sure. I'd assume that, with the ease of integration of Frigate and the performance of the Coral, this might be something more people would be interested in.

Comments / ideas welcome.

ItsMee

@ItsMee Perhaps this could meet your needs