Local realtime person detection for RTSP cameras

I just started using Frigate, and obviously to keep the load down you would want to use the lower-resolution substream.

I want to be able to get rid of Blue Iris / ZM / Bluecherry altogether, which currently handles the high-resolution recording.

With Frigate and a Coral, can I have it record the high-resolution stream from 6 cameras? I want to use the `save_clips` feature to save any tracked motion that is detected.

Lots of talk about this in recent posts. See my post from yesterday: Local realtime person detection for RTSP cameras


Awesome, looks like I am going to become a beta tester when you release this!

How can I help otherwise? Do you take donations? I would be happy to buy you a beer or donate to a charity of your choosing.

You don’t need a NUC. Frigate runs on lots of different hardware. Even a $10,000 CPU will be painfully slow in comparison to a $60 Coral.

I prefer Github sponsors (there is a link on the github page) because they match sponsorship dollars and don’t take a fee.

Hi @blakeblackshear
oh sorry, got super confused and probably misread the link.

I'm also looking to buy hardware to try Frigate, but I'm not sure what to get for 3-4 cameras. The description on the GitHub page says "great" next to the NUC, and it has the lowest inference time, so I thought that was the best option.

But I wonder what the optimum option would be. Would something like a Minisforum GK41 + 1 or 2 Coral accelerators do a good job? And if I use an external HDD via USB 3.0, is it going to slow down performance?

I think you are going way overboard. For 3-4 cameras all you really need is a Raspberry Pi 4 Model B and a Google Coral. The idea is that the Google Coral will do a majority of the work so you don’t really need much compute power.

The hard drive really only comes into play when saving clips. From a quick search (and I could be wrong), Frigate never touches the disk as part of the motion detection process.

And when saving clips, an external USB 3.0 HDD will be just fine.

Thanks @rayzorben, this all makes sense! Do you think an RPi 4B + Coral will be able to do 5 fps, or will it be significantly slower?

It depends on the resolution of the cameras too. If it is dedicated for frigate, an RPi4 should be able to manage 3-4 1080p cameras fine, but you have to worry about SD card wear, etc. The next version will use USB3 for recordings, but I am not sure if that will impact performance yet. When I am testing the next beta version, I can test 4 1080p cameras with all features enabled to see how it performs on an RPi4. You may want to hold off until then.

I am a big fan of the Minisforum box I listed in the Readme. It is my go-to recommendation for running Home Assistant, Frigate, and other services. It easily handles 4 1080p cameras for detection and 24/7 recording of 2K streams from Dahua cameras. I also love the extra gigabit port because it makes it easy to set up the cameras on a dedicated private network with static IPs. I set this up for someone and it has been flawless.


I see, thanks!
Would love to know the results of your tests with the RPi4!

Dear @blakeblackshear
I have been using the armv7 Docker container for a while, processing five 480x720 substreams from my 8 MP cameras. It's working fine so far, taking about 60-150% CPU and 200 MB of memory. This runs well alongside an InfluxDB instance and HA itself (plus some other Docker containers). I use a Coral and the Raspberry Pi's hardware acceleration.

To get it running with 5 cameras, I had to enlarge the GPU memory. I added this value at the end of /boot/config.txt:

[all]
gpu_mem=192  # for at least 5 cameras
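If you want to confirm the new split took effect (assuming a standard Raspberry Pi OS install, which ships the `vcgencmd` tool), you can check after a reboot:

```shell
# /boot/config.txt changes only apply after a reboot
vcgencmd get_mem gpu
# reports the current GPU memory split, e.g. gpu=192M
```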

I have two questions:
1. I currently use an h.264 substream at 20 FPS. In the config I fixed it to 5 FPS. Is it better to reduce the substream to 5 FPS directly, in terms of quality and performance?
2. Do you plan to create an armv7 Docker container for the future release as a kind of beta soon? I am interested…

I will include all image versions in the beta when it’s ready.

Thanks
What about the FPS topic? Do you have a recommendation for me?
Does it make sense to update the documentation for the Raspberry Pi, in case somebody runs into problems using the hardware acceleration of the CPU?

It is better to adjust the frame rate on the camera if you can. The FPS setting is for the output of ffmpeg, but it still has to decode 20 fps on the input side. Reducing that to 5 fps on the camera should reduce CPU usage of ffmpeg a meaningful amount.
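For reference, here is a minimal sketch of the relevant part of the camera config (key names and layout are from memory of the Frigate config format of that era, so treat this as an assumption and check the README for your version):

```yaml
cameras:
  back:                                            # hypothetical camera name
    ffmpeg:
      input: rtsp://user:pass@192.0.2.10:554/sub   # hypothetical substream URL
    # Output rate handed to the detector. ffmpeg still decodes whatever
    # the camera sends, so also drop the FPS on the camera itself.
    fps: 5
```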

Thanks for your insights!

@blakeblackshear I have the latest.jpg at 1920x1080 and I added this line to mask out an area. However, it seems I get detections inside of it too.
mask: poly '1203,675,1153,480,1167,324,1177,209,1197,75,1209,3,1696,3,1703,220,1649,337,1416,653'

Do you think my mask is correct the way I wrote it above? Also, if I change the mask to a zone, I don't see the lines showing the area. What am I missing?

draw_zones: True

Thanks

Check your formatting against the config in the readme again. Looks like you have single quotes around your coordinates. Note that the mask is only a motion mask. Objects will still be detected if they originate in an unmasked area and are tracked into a masked area.
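To make the formatting concrete, the same mask line with the quotes dropped would look like this (a sketch using the coordinates from the post above; verify the exact mask syntax against the readme for your version):

```yaml
mask: poly 1203,675,1153,480,1167,324,1177,209,1197,75,1209,3,1696,3,1703,220,1649,337,1416,653
```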

Double quotes did not help. I used the `/<camera_name>/latest.jpg` photo as the input file to generate the coordinates, but when I want to watch the feed on `/<camera_name>`, the picture is a smaller resolution. How can the zone be correct when it was created for the full-size latest.jpg?

I didn't suggest using double quotes. If you look at the example config in the readme, you will see that there are no quotes. The feed can be any size; you can pass your desired size to the endpoint as shown in the docs.
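As an illustration (parameter names as I recall them from the readme, so double-check against your version), the debug feed and snapshot endpoints accept a height parameter and scale the image for you:

```
http://<frigate-host>:5000/<camera_name>?h=300&fps=5
http://<frigate-host>:5000/<camera_name>/latest.jpg?h=300
```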

I logged in to my Synology NAS with the admin user and now everything works…
I can't explain this, but hey, it works now :wink: