Local realtime person detection for RTSP cameras

Was that for me?
If so, here’s my updated ffmpeg command using your suggestion.

ffmpeg -loglevel verbose -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format yuv420p -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -vsync drop -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 -i rtsp://user:[email protected]/live -vf transpose=1 -f rawvideo -pix_fmt rgb24 pipe:

What does that do, and what should I see? Lower CPU usage, etc.?

Welcome back! Thank you for continuing your work on this! I’m running the 0.5.1-rc3 docker image.

After a day or two both of my cameras end up stuck again in Frigate.

I can see in the logs that Frigate tried to restart them on the 19th, but they never seem to have restarted correctly. They’re both stuck on the same frame from two days ago when I view them under the debug view. I printed out the stack trace for one of the camera PIDs. Any clues as to what might be happening, or what I can do to help debug?

Logs and stack trace for PID 53 (camera process)

2020-04-19T19:50:34.878070825Z Camera_process started for front_window: 52
2020-04-19T19:50:34.879669067Z Starting process for front_window: 52
2020-04-19T19:50:34.880408145Z Camera_process started for kitchen: 53
2020-04-19T19:50:34.882295102Z Starting process for kitchen: 53
2020-04-19T19:50:34.895233017Z  * Serving Flask app "detect_objects" (lazy loading)
2020-04-19T19:50:34.895262905Z  * Environment: production
2020-04-19T19:50:34.895271920Z    WARNING: This is a development server. Do not use it in a production deployment.
2020-04-19T19:50:34.895313181Z    Use a production WSGI server instead.
2020-04-19T19:50:34.895329698Z  * Debug mode: off
2020-04-19T21:34:30.189080017Z Detection appears to be stuck. Restarting detection process
2020-04-19T21:34:30.189153304Z Waiting for detection process to exit gracefully...
2020-04-19T21:34:30.193837621Z /arrow/cpp/src/plasma/store.cc:738: Disconnecting client on fd 6
2020-04-19T21:34:30.198487157Z Starting detection process: 4952
2020-04-19T22:02:51.639817592Z Detection appears to be stuck. Restarting detection process
2020-04-19T22:02:51.640062443Z Waiting for detection process to exit gracefully...
2020-04-19T22:02:51.653388253Z /arrow/cpp/src/plasma/store.cc:738: Disconnecting client on fd 6
2020-04-19T22:02:51.657573892Z Starting detection process: 6282
2020-04-21T15:02:15.361813649Z   File "detect_objects.py", line 345, in <module>
2020-04-21T15:02:15.361926332Z     main()
2020-04-21T15:02:15.361959646Z   File "detect_objects.py", line 226, in main
2020-04-21T15:02:15.361987049Z     camera_process['process'].start()
2020-04-21T15:02:15.362012862Z   File "/usr/lib/python3.7/multiprocessing/process.py", line 112, in start
2020-04-21T15:02:15.362040043Z     self._popen = self._Popen(self)
2020-04-21T15:02:15.362065738Z   File "/usr/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
2020-04-21T15:02:15.362136637Z     return _default_context.get_context().Process._Popen(process_obj)
2020-04-21T15:02:15.362162303Z   File "/usr/lib/python3.7/multiprocessing/context.py", line 277, in _Popen
2020-04-21T15:02:15.362185427Z     return Popen(process_obj)
2020-04-21T15:02:15.362206838Z   File "/usr/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
2020-04-21T15:02:15.362230287Z     self._launch(process_obj)
2020-04-21T15:02:15.362253211Z   File "/usr/lib/python3.7/multiprocessing/popen_fork.py", line 74, in _launch
2020-04-21T15:02:15.362277139Z     code = process_obj._bootstrap()
2020-04-21T15:02:15.362299507Z   File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
2020-04-21T15:02:15.362323081Z     self.run()
2020-04-21T15:02:15.362344309Z   File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
2020-04-21T15:02:15.362368016Z     self._target(*self._args, **self._kwargs)
2020-04-21T15:02:15.362389778Z   File "/opt/frigate/frigate/video.py", line 288, in track_camera
2020-04-21T15:02:15.362412560Z     region_detections = object_detector.detect(tensor_input)
2020-04-21T15:02:15.362434109Z   File "/opt/frigate/frigate/edgetpu.py", line 125, in detect
2020-04-21T15:02:15.362456385Z     self.detection_queue.put(now)
2020-04-21T15:02:15.362478065Z   File "/usr/lib/python3.7/multiprocessing/queues.py", line 364, in put
2020-04-21T15:02:15.362501428Z     self._writer.send_bytes(obj)
2020-04-21T15:02:15.362523727Z   File "/usr/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
2020-04-21T15:02:15.362546679Z     self._send_bytes(m[offset:offset + size])
2020-04-21T15:02:15.362569343Z   File "/usr/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
2020-04-21T15:02:15.362592833Z     self._send(header + buf)
2020-04-21T15:02:15.362615238Z   File "/usr/lib/python3.7/multiprocessing/connection.py", line 368, in _send
2020-04-21T15:02:15.362637822Z     n = write(self._handle, buf)

Stack trace for PID 6282 (Coral detection process)

2020-04-21T15:11:46.088297658Z   File "/usr/lib/python3.7/threading.py", line 890, in _bootstrap
2020-04-21T15:11:46.088351837Z     self._bootstrap_inner()
2020-04-21T15:11:46.088367114Z   File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
2020-04-21T15:11:46.088379308Z     self.run()
2020-04-21T15:11:46.088390145Z   File "detect_objects.py", line 98, in run
2020-04-21T15:11:46.088401370Z     self.tflite_process.start_or_restart()
2020-04-21T15:11:46.088412543Z   File "/opt/frigate/frigate/edgetpu.py", line 108, in start_or_restart
2020-04-21T15:11:46.088423928Z     self.detect_process.start()
2020-04-21T15:11:46.088434357Z   File "/usr/lib/python3.7/multiprocessing/process.py", line 112, in start
2020-04-21T15:11:46.088445725Z     self._popen = self._Popen(self)
2020-04-21T15:11:46.088455881Z   File "/usr/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
2020-04-21T15:11:46.088466995Z     return _default_context.get_context().Process._Popen(process_obj)
2020-04-21T15:11:46.088477630Z   File "/usr/lib/python3.7/multiprocessing/context.py", line 277, in _Popen
2020-04-21T15:11:46.088488581Z     return Popen(process_obj)
2020-04-21T15:11:46.088499729Z   File "/usr/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
2020-04-21T15:11:46.088511156Z     self._launch(process_obj)
2020-04-21T15:11:46.088521721Z   File "/usr/lib/python3.7/multiprocessing/popen_fork.py", line 74, in _launch
2020-04-21T15:11:46.088533398Z     code = process_obj._bootstrap()
2020-04-21T15:11:46.088544383Z   File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
2020-04-21T15:11:46.088555612Z     self.run()
2020-04-21T15:11:46.088566245Z   File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
2020-04-21T15:11:46.088577975Z     self._target(*self._args, **self._kwargs)
2020-04-21T15:11:46.088589062Z   File "/opt/frigate/frigate/edgetpu.py", line 71, in run_detector
2020-04-21T15:11:46.088600143Z     object_id_str = detection_queue.get()
2020-04-21T15:11:46.088610855Z   File "/usr/lib/python3.7/multiprocessing/queues.py", line 351, in get
2020-04-21T15:11:46.088621694Z     with self._rlock:
2020-04-21T15:11:46.088631620Z   File "/usr/lib/python3.7/multiprocessing/synchronize.py", line 95, in __enter__
2020-04-21T15:11:46.088642857Z     return self._semlock.__enter__()

/debug/stats

{
  "coral": {
    "detection_start": 0,
    "fps": 0,
    "inference_speed": 8.01,
    "pid": 6282
  },
  "front_window": {
    "camera_fps": 0,
    "detection_fps": 0,
    "ffmpeg_pid": 41,
    "frame_info": {
      "detect": 1587341586.383989,
      "process": 1587341516.53733,
      "read": 1587362375.199723
    },
    "pid": 52,
    "process_fps": 0.1,
    "read_start": 0,
    "skipped_fps": 0
  },
  "kitchen": {
    "camera_fps": 11.9,
    "detection_fps": 0,
    "ffmpeg_pid": 50,
    "frame_info": {
      "detect": 1587341465.266168,
      "process": 1587341335.084925,
      "read": 1587481598.93786
    },
    "pid": 53,
    "process_fps": 0.1,
    "read_start": 0,
    "skipped_fps": 11.9
  },
  "plasma_store_rc": null
}

ffmpeg config:

ffmpeg:
  global_args:
    - -hide_banner
    - -loglevel
    - panic
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
    - -hwaccel_output_format
    - yuv420p
  input_args:
    - -avoid_negative_ts
    - make_zero
    - -fflags
    - nobuffer
    - -flags
    - low_delay
    - -strict
    - experimental
    - -fflags
    - +genpts+discardcorrupt
    - -vsync
    - drop
    - -use_wallclock_as_timestamps
    - '1'
  output_args:
    - -f
    - rawvideo
    - -pix_fmt
    - rgb24

I’m having the same problem, but only on 2 of my 4 cameras. The 2 I have problems with are Reolink, and I’m using RTMP on them instead of RTSP. The other 2 are Hikvision and I’ve had no problems with them.

Same scenario as you: they get stuck on the same frame and won’t work until I restart Frigate.

Yep. Reolink cameras over RTMP here.

After trying the command above, I get a strange line-interleaved image:
doorway-3

Did you update your width and height values for the portrait orientation? You would need to swap the width and height values as well. If you hadn’t set them before, you may need to for this use case.
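
For example, with a 640x480 source rotated 90° by transpose=1, the dimensions in the camera config end up swapped, roughly like this sketch (camera name is just a placeholder):

```
  front_door:
    # source frames are 640x480; after the transpose they are 480 wide by 640 tall
    height: 640
    width: 480
```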

I’m struggling to get some cameras to use the Coral; some do, some don’t. Also, after a while the inference speed jumps back up to 400 on the cameras that do appear to use the Coral.

If you don’t see “No EdgeTPU detected. Falling back to CPU.” in your logs, it is using the Coral. It wouldn’t work at all otherwise. Also, all cameras share the same detection process, so it’s not possible for some cameras to use the Coral and others to use the CPU. If your inference is jumping up to 400, my guess is that you should be seeing “No EdgeTPU detected. Falling back to CPU.” in your logs after the detection process is restarted. That means that Tensorflow doesn’t detect that your Coral is plugged in anymore. If you don’t see that message, something else is making your Coral slow. It could be USB bus speeds or some other hardware issue.

It looks like there is a deadlock in Python’s SimpleQueue. I suspect it may have something to do with how often the processes are restarting. I have some ideas for ways I can reproduce so I can try some different fixes.

I assume this has been discussed before, but is there any way I can block out a region of the image? I’m getting false positives on some pictures of my children on the wall when the lighting changes.

Thanks

You can set up a mask.

```
################
## Optional mask. Must be the same aspect ratio as your video feed.
##
## The mask works by looking at the bottom center of the bounding box for the detected
## person in the image. If that pixel in the mask is a black pixel, it ignores it as a
## false positive. In my mask, the grass and driveway visible from my backdoor camera
## are white. The garage doors, sky, and trees (anywhere it would be impossible for a
## person to stand) are black.
##
## Masked areas are also ignored for motion detection.
################
# mask: back-mask.bmp
```

Here’s an example from the repo.
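
In the config, it’s a per-camera option pointing at the mask image, something like this sketch (based on the commented example above; camera name and URL are placeholders, and exact placement may vary slightly between versions):

```
cameras:
  back:
    ffmpeg:
      input: rtsp://user:[email protected]/live
    # same aspect ratio as the feed; black areas are treated as places a person
    # could not stand, and are also ignored for motion detection
    mask: back-mask.bmp
```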

Thank you. I’ll take a look at that.

I’ve made a couple of masks, and assigned them to a couple of cameras. Thank you.

I haven’t been able to reproduce this intentionally yet, but I did see it in the real world once. It took 5 days for my detection queue to get into a bad state, and the only way to fix it was to completely restart the container. I have been running a modified version for 12 hours that frequently triggers random failures of each camera and of the detection process. It is processing 4 simulated cameras at 50fps each (200fps total), and it hasn’t failed yet. I am going to try changing some of the failure handling in the detection process, but I won’t know if it works until others try it.

Hi @blakeblackshear. In response to your previous message above about capturing video: my wife and I were awakened last night by an errant spider that was detected as a person with 96% confidence. My size filters didn’t apply because it was so close to the lens that it looked much bigger than it was. It really looks nothing like a human, more like a moving white dot, since the IR was reflecting off of it. I captured a video clip. Is there somewhere I can send it to you?

You can upload to google drive and send me a link as a private message.

I’ve tried the width and height values in both orders with no change.

  doorway:
    ffmpeg:
      input: rtsp://viewer:[email protected]:554/Streaming/Channels/401
      output_args:
        - -vf
#        - mpdecimate,rotate=PI/2
        - "transpose=1"
        - -f
        - rawvideo
        - -pix_fmt
        - rgb24
#      height: 480
#      width: 640
      height: 640
      width: 480
    take_frame: 1
    fps: 5
    snapshots:
      show_timestamp: True
    objects:
      track:
        - person
      filters:
        person:
          min_area: 5000
          max_area: 100000
          threshold: 0.5

Your width and height values are indented one level too far. They should line up with take_frame.
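
In other words, something like this (same values as your config, with height and width moved up one level so they sit next to take_frame):

```
  doorway:
    ffmpeg:
      input: rtsp://viewer:[email protected]:554/Streaming/Channels/401
      output_args:
        - -vf
        - "transpose=1"
        - -f
        - rawvideo
        - -pix_fmt
        - rgb24
    height: 640
    width: 480
    take_frame: 1
    fps: 5
    # snapshots and objects sections unchanged
```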

Thank you, that was the ticket. Shall I assume this pic is the best I can hope for with the horizontal being stretched? At least I can see all of the top and bottom now. I tried narrowing the width but the interlacing problem returns. I’m assuming it’s an aspect ratio issue.
doorway-9

      output_args:
        - -vf
#        - mpdecimate,rotate=PI/2
        - "transpose=1"
        - -f
        - rawvideo
        - -pix_fmt
        - rgb24
    height: 1920
    width: 1080

For now, yes. It won’t impact the detection, but the preview is currently hard coded to 16:9. The next RC will fix that.

Thank you for all your help. I really appreciate the work you’ve put into this awesome project.