Local realtime person detection for RTSP cameras

I have connected 4 cams at 1080p. At 10fps I have a lag of about 10s, and at 5fps about 5s. The delay is visible on the debug video too. CPU load of the container is about 20% at 5fps.
I’m not able to find out where I lose this time. Coral “inference_speed” is between 15 and 25ms.
How can I troubleshoot this issue?

The timestamp printed on the frame by frigate is the time it was received from ffmpeg. If you have a built-in timestamp from your camera, you can compare the two values. The difference between those values is the lag introduced before frigate even starts.

Hi Blake, is there a reason you used ffmpeg rather than just opencv? I read somewhere that opencv is faster but has fewer features, so I am assuming it’s the latter? Also, is there a way of automatically reducing the frame size upon receipt in frigate, so that I can use a higher resolution stream instead of a sub channel? (i.e. reduce the main stream to something that does not kill the CPU?) Thanks Tim

The reason I switched to ffmpeg is that opencv does not support using hardware acceleration for decoding. In my testing at the time, ffmpeg was more performant than opencv and had lower latency. I may reevaluate that at some point. Take a look at adding the scaling filter in the ffmpeg output params: https://trac.ffmpeg.org/wiki/Scaling
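
Roughly what that looks like as an ffmpeg invocation (the URL and target resolution are placeholders, and the rawvideo/rgb24 output here is an assumption about frigate’s default output params, so adjust to whatever your version’s config uses):

# Downscale the decoded frames with the scale filter before they reach frigate,
# then hand them over as raw RGB frames on stdout.
ffmpeg -rtsp_transport tcp -i "rtsp://user:pass@camera-ip:554/stream" \
  -vf scale=1280:720 -f rawvideo -pix_fmt rgb24 pipe: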

Does v0.6 or later have adjustable motion detection? I’m still struggling to get human motion on IR recognised from 0.5 onwards; it doesn’t matter whether I use the main or sub stream.

No adjustable motion detection yet, sorry. I made an issue and slotted it for v0.8.0 in the roadmap.

Thanks Blake

I’ve had much better success with the FFmpeg version than I ever did with the OpenCV version, so I’m on board with FFmpeg all the way. And it’s surprisingly low latency too, often catching the object in a second for me.

Hello,

I am trying to run the container for the first time and am facing issues. I’m still a beginner at docker, so please forgive me if this is a stupid error.

Here is the command I am running:

sudo docker run --rm --name frigate --privileged --shm-size=1g -v /dev/bus/usb:/dev/bus/usb -v /opt/frigate/config:/config:ro -v /opt/frigate/clips:/clips -v /etc/localtime:/etc/localtime:ro -p 5000:5000 -e FRIGATE_RTSP_PASSWORD='passwd' -e DEBUG="1" blakeblackshear/frigate:stable

And here is the error I am getting:

On connect called
ffprobe -v panic -show_error -show_streams -of json "rtsp://USER:PASSWD@CAMIP:554/cam/realmonitor?channel=1&subtype=0"
Starting detection process: 24
{'error': {'code': -22, 'string': 'Invalid argument'}}
Traceback (most recent call last):
  File "detect_objects.py", line 361, in <module>
    main()
  File "detect_objects.py", line 204, in main
    frame_shape = get_frame_shape(ffmpeg_input)
  File "/opt/frigate/frigate/video.py", line 39, in get_frame_shape
    video_info = [s for s in info['streams'] if s['codec_type'] == 'video'][0]
KeyError: 'streams'

I’ve checked with VLC that the RTSP stream is working.
I have a config file on the docker host in /opt/frigate/config where I added the cam IP and video size. I did not change much else.

Any possible hints? thanks
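
In case it helps, the probe frigate runs at startup can be repeated by hand with the same ffprobe command from the log above (placeholders left as-is); a reachable stream should print a "streams" array, while a bad URL or wrong credentials produce an "error" block like the one I’m getting:

# Run this on the host or any machine with ffmpeg installed.
# Keep the quotes around the URL so the shell does not split it at the "&".
ffprobe -v panic -show_error -show_streams -of json \
  "rtsp://USER:PASSWD@CAMIP:554/cam/realmonitor?channel=1&subtype=0"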

Hi Blake, I don’t think that this will solve the problem. The scaling filter applies only to the output, so I am not sure that it will stop frigate from using the full frame size, and hence it will not reduce the load. Am I right in thinking that get_frame_shape in video.py uses the ffmpeg inputs only?

You can set the width and height manually in the config to prevent frigate from trying to auto detect.

ok thanks, will give it a go…

How can I test whether my installation is actually using hardware acceleration? I don’t suffer much with CPU load, but I have an AMD GPU in the machine with the Vulkan SDK installed, and I wonder if it can be used for ffmpeg too. You probably made the right choice with ffmpeg over opencv, as overall things work fine. Some people wrote here about delay; that’s nothing to do with frigate but happens with WiFi cams. It never happens on a properly installed wired LAN, but junk WiFi cams or a bad WiFi network can do this.

Looks like it requires ffmpeg 4.3. Hoping to upgrade in v0.7.0. https://github.com/blakeblackshear/frigate/pull/149
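
In the meantime, one way to sanity-check VAAPI decoding outside of frigate is something like the following (the render device path and the stream URL are assumptions for a typical AMD setup and may differ on your system):

# List the VAAPI decode profiles the GPU driver exposes (needs the vainfo package).
vainfo

# Decode ten seconds of one camera stream with VAAPI and discard the frames;
# if this runs without errors while CPU stays low, hardware decoding is working.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
  -i "rtsp://user:pass@camera-ip:554/stream" -t 10 -f null -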

HW accel is super important; AMD and Nvidia GPUs are a must if someone runs more than 5-6 cams. Damn Google Chrome is also hurting Linux people, as they never want to enable HW accel in Chrome on Linux… I am wondering about these Jetson gadgets: could they be used for frigate with HW accel and object detection? Any variant that brings in HW accel would be great.

Looks like I can’t find the USB version to purchase anywhere locally, but the M.2 version is only $40.

Would the M.2 version work with Portainer on top of Hass.io, or would I need to change my Hass.io installation in any way? Very keen to get this set up if possible.

v0.6.0 supports any version of the Coral device. I can’t speak to whether or not it will work with Hass.io. Maybe others can chime in. I wasn’t able to get my M.2 coral device to be recognized on any of the hardware I have, so it doesn’t work with just any M.2 slot.

Hi @blakeblackshear, in my use case I send the best.jpg image to Telegram as a notification when frigate sends a message saying it sees a person. This usually works quite well and fast. However, this image is mostly from when the person is entering the frame and isn’t the best image of the person.

Is there a method to get a short video clip of the event (maybe a couple of seconds) to better see the person, instead of or in addition to the still image? This would be sent to my notification service instead of the image. It seems like this might be different from what you are doing with the clips, as this would be available right away for sending, whereas the clips you are making are more for an NVR and will cover the whole event.

I’m using the M.2 A+E key version with no issues, as a replacement for a WiFi card. The “gasket” driver needs to be built with DKMS for the kernel. For Docker, the /dev/apex_0 device needs to be passed in. For me, it’s more performant than the USB version.
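
For anyone going the same route, the rough shape of the setup looks like this (repo and package names taken from the Coral PCIe/M.2 getting-started docs at the time of writing; adjust for your distro):

# Add the Coral apt repo, then install the DKMS gasket driver and the Edge TPU runtime.
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install -y gasket-dkms libedgetpu1-std

# After a reboot the Coral should show up as /dev/apex_0; pass it through to the
# container by adding this to the docker run command in place of the USB bus mount:
#   --device /dev/apex_0:/dev/apex_0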

Looks like there is an M.2 version with dual processors now: https://coral.ai/products/m2-accelerator-dual-edgetpu