Local realtime person detection for RTSP cameras

@blakeblackshear I have recently disabled draw_zones, and still have show_timestamp enabled for each camera in the config file. When both are set to True, I get timestamps. When draw_zones is disabled, no timestamps.
Using rc2. Not seeing anything here which could explain it: https://github.com/blakeblackshear/frigate/pull/231

Tested with the same config file on stable, and timestamps work as expected. Maybe I missed something in the release notes…

    snapshots:
      show_timestamp: True
      draw_zones: False

I have wanted to try your component for a long time…
Well done! Installing took a little hammering here and there (mainly due to my config), but it finally runs.

Please be aware that in the GitHub instructions there is a missing - in the docker run snippet, in the name option (i.e. there is -name instead of --name).

Thanks. That is fixed in the dev branch already.

Nice hardware with tons of RAM.

You should try enabling hardware acceleration; CPU usage should drop by a significant amount.

LXC conf settings:

lxc.cgroup.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

docker run command:

docker run --name frigate --privileged --shm-size=1g -v /opt/frigate/config:/config:ro -v /opt/frigate/clips:/clips:rw -v /etc/localtime:/etc/localtime:ro -v /dev/bus/usb:/dev/bus/usb --device=/dev/dri/renderD128 -d -p 5000:5000 -e FRIGATE_RTSP_PASSWORD='password' blakeblackshear/frigate:stable
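One extra check worth doing with a setup like this (my suggestion, not from the original post): confirm the render node actually survived the LXC bind mount and the --device flag. The cgroup rule above allows device 226:128, and stat prints a character device's major:minor in hex, so renderD128 should report e2:80 both on the host and wherever the container sees it:

```shell
# hex(226)=e2, hex(128)=80 -- this is the major:minor pair stat should
# report for /dev/dri/renderD128 if the cgroup rule and bind mount worked
printf 'expected major:minor %x:%x\n' 226 128
stat -c '%t:%T %n' /dev/dri/renderD128 2>/dev/null || echo "renderD128 not visible here"
```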

I am not drawing on the snapshots anymore in 0.7.x. It is only on the debug mjpeg feed. I will be adding it back before the final version.

@blakeblackshear ok, I had seen this. But for me, “drawing” means the colored rectangles, not the timestamps 🙂

@zeliant I will try what you suggest. But I have a single graphics card (the Intel GPU), and I have to virtualize it so it can be used by multiple clients; I cannot simply enable GPU passthrough. Yesterday, for example, I installed Jellyfin (great program, BTW) inside Docker running in a VM. I enabled GVT-g on this VM, and HW acceleration works fine. This is why I wanted to run frigate inside a VM and not LXC: I think LXC cannot use GVT-g.

My i7-7700 gets a little stressed (3 cameras).
How can I lower CPU usage, apart from external accelerators?
I have already entered the QuickSync parameters.
Apologies, as I guess this has been asked already in this megathread.

Also, would it be possible to trigger recognition only on demand? My 3 cameras have highly reactive motion detection alarms (which give a ton of false positives), so triggering recognition from those would really be a perfect solution.

I assume your LXC is running in dom0. In that case you don’t need GVT-g, I think it should work fine as described and still coexist with other VMs using GVT-g.

Let me try to rephrase what you said:
Since the LXC runs on top of the Proxmox host, it doesn't need virtualization, and I can pass the GPU through to the LXC while maintaining GVT-g for the VMs?

I just tried, and it seems it still works. I am running zoneminder in a Docker VM which has GVT-g enabled, and ffmpeg on 3 streams @1080p seems capable of using vaapi (while using roughly 20% CPU).

This helped me greatly, thank you.

I do have an issue with detect_objects.py erroring out because it can't find plasma_store when run from crontab at reboot; however, if I manually run the .py file from a terminal, everything works as expected.

My cron entry is as follows, but I have tried a multitude of variants with the same outcome.

@reboot sleep 10 ; /usr/bin/python3.7 -u /opt/frigate/detect_objects.py

The error I receive is:

On connect called
Traceback (most recent call last):
  File "/opt/frigate/detect_objects.py", line 460, in <module>
    main()
  File "/opt/frigate/detect_objects.py", line 175, in main
    plasma_process = start_plasma_store()
  File "/opt/frigate/detect_objects.py", line 68, in start_plasma_store
    plasma_process = sp.Popen(plasma_cmd, stdout=sp.DEVNULL, stderr=sp.DEVNULL)
  File "/usr/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.7/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'plasma_store': 'plasma_store'

If anyone can help, I'd be very grateful.
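One likely cause (a guess, not confirmed): cron runs jobs with a minimal environment, typically PATH=/usr/bin:/bin, so plasma_store (which pip usually installs under /usr/local/bin or ~/.local/bin) isn't found, even though it is on the PATH of your interactive shell. A sketch of a crontab fix, assuming plasma_store turns out to live in /usr/local/bin (check with `which plasma_store` from a terminal):

```shell
# crontab -e: give cron a usable PATH before the @reboot entry runs.
# Adjust /usr/local/bin to whatever `which plasma_store` prints for you.
PATH=/usr/local/bin:/usr/bin:/bin
@reboot sleep 10 ; /usr/bin/python3.7 -u /opt/frigate/detect_objects.py
```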

Just switched over to this project from DOODS. I had an extra 4 GB RPi4 lying around, so I decided to give the new dev build a try after seeing it was available. With six substreams and a Coral, I'm only seeing ~30% CPU utilization, 18 ms Coral inference times, and 7-10 fps detection speed. Most cameras look like they are detecting at around 5-8 fps.

I’m just blown away at how well this performs. What an amazing project.


frame=18667 fps= 15 q=-0.0 size=22176396kB time=00:20:44.46 bitrate=145981.4kbits/s dup=151 drop=150 speed= 1x
frame=18570 fps= 15 q=-0.0 size=22061160kB time=00:20:38.00 bitrate=145981.4kbits/s dup=85 drop=84 speed= 1x
frame=18603 fps= 15 q=-0.0 size=22100364kB time=00:20:40.20 bitrate=145981.4kbits/s dup=154 drop=153 speed= 1x
Past duration 0.679985 too large

Past duration 0.989998 too large

I keep getting these in the logs.

Any ideas?

Updated Frigate and got this…

Traceback (most recent call last):
  File "detect_objects.py", line 462, in <module>
    main()
  File "detect_objects.py", line 238, in main
    frame_shape = get_frame_shape(ffmpeg_input)
  File "/opt/frigate/frigate/video.py", line 40, in get_frame_shape
    video_info = [s for s in info['streams'] if s['codec_type'] == 'video'][0]
KeyError: 'streams'
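That KeyError means ffprobe came back without a 'streams' section, which usually indicates it could not open the input at all (wrong RTSP URL, bad credentials, camera offline). A hypothetical defensive variant of the probe (not frigate's actual code) that surfaces a readable error instead:

```python
import json
import subprocess as sp

def parse_frame_shape(info):
    """Extract (height, width, channels) from ffprobe's JSON output,
    failing loudly when no stream info came back."""
    if "streams" not in info:
        raise RuntimeError(
            "ffprobe returned no stream info; check the RTSP URL, "
            "credentials, and that the camera is reachable"
        )
    video = [s for s in info["streams"] if s["codec_type"] == "video"][0]
    return (video["height"], video["width"], 3)

def get_frame_shape(ffmpeg_input):
    """Probe the stream with ffprobe and parse its JSON output."""
    cmd = ["ffprobe", "-v", "quiet", "-print_format", "json",
           "-show_streams", ffmpeg_input]
    p = sp.run(cmd, capture_output=True)
    return parse_frame_shape(json.loads(p.stdout or "{}"))
```

Running `ffprobe -show_streams` against the camera URL by hand is the quickest way to see the underlying failure.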

Updated to 0.7.0-rc2, but my cameras look pretty horrible:

[screenshot]

Guessing this is ffmpeg related, here’s my config:

ffmpeg:
   global_args:
     - -hide_banner
     - -loglevel
     - panic
   hwaccel_args: 
     - -hwaccel
     - vaapi
     - -hwaccel_device
     - /dev/dri/renderD128
     - -hwaccel_output_format
     - yuv420p
   input_args:
     - -avoid_negative_ts
     - make_zero
     - -fflags
     - nobuffer
     - -flags
     - low_delay
     - -strict
     - experimental
     - -fflags
     - +genpts+discardcorrupt
#     - -vsync
#     - drop
     - -rtsp_transport
     - tcp
     - -stimeout
     - '10000000'
     - -use_wallclock_as_timestamps
     - '1'
   output_args:
#     - -vf
#     - mpdecimate
     - -f
     - rawvideo
     - -pix_fmt
     - rgb24

Please read the release notes before upgrading.

Default output_args for cameras have changed. If you specified custom output parameters, you will need to update.

I would recommend removing every section under ffmpeg in your config other than hwaccel_args.


Just published rc3.

Changes:

  • Prevent zone status from bouncing while object is in zone
  • Add timestamps to snapshot images
  • Allow bounding boxes to be drawn on snapshots with draw_bounding_boxes option

Thanks, Blake. I read through it but somehow missed that part.

Will give it a shot.

Thanks, Blake - removing all the FFMPEG options except for the hardware acceleration options did the trick.

I had the same issue and removed the FFMPEG options as well, but it's still not working. Can you share your config, please?

My global FFMPEG options now look like this:

ffmpeg:
#   global_args:
#     - -hide_banner
#     - -loglevel
#     - panic
   hwaccel_args: 
     - -hwaccel
     - vaapi
     - -hwaccel_device
     - /dev/dri/renderD128
     - -hwaccel_output_format
     - yuv420p
#   input_args:
#     - -avoid_negative_ts
#     - make_zero
#     - -fflags
#     - nobuffer
#     - -flags
#     - low_delay
#     - -strict
#     - experimental
#     - -fflags
#     - +genpts+discardcorrupt
##     - -vsync
##     - drop
#     - -rtsp_transport
#     - tcp
#     - -stimeout
#     - '10000000'
#     - -use_wallclock_as_timestamps
#     - '1'
#   output_args:
##     - -vf
##     - mpdecimate
#     - -f
#     - rawvideo
#     - -pix_fmt
#     - rgb24

while my local, per-camera ones are all the same:

cameras:
  front:
    ffmpeg:
      ################
      input: rtsp://<username>:<password>@<host>/Streaming/Channels/3

These are all hikvision cameras, either 2nd or 3rd stream.
