Local realtime person detection for RTSP cameras

Never mind, figured it out. The extra dashes were YAML syntax. Working now.

Just published a new beta release: https://github.com/blakeblackshear/frigate/releases/tag/v0.2.2-beta

Breaking Changes:

  • The configuration file changed significantly. Make sure you update using the example before upgrading.

Changes:

  • Added max_person_area to filter out detected persons that are too large to be real
  • Print the frame time on the image so you can see a timestamp on the last_person image
  • Allow the mqtt client_id to be set so you can run multiple instances of frigate
  • Added a basic health check endpoint
  • Added -vf mpdecimate to default output args
  • Revamped ffmpeg args configuration with global defaults that can be overwritten per camera
  • Updated docs

Image is available with docker pull blakeblackshear/frigate:0.2.2-beta
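To make the new options a bit more concrete, here is a rough, hypothetical sketch of how they might sit together in the config. The key names are taken from the changelog entries above and may not match exactly; the example config in the repo is the source of truth for the real structure.

```yaml
# Hypothetical illustration only - check the example config for the exact keys and layout.
mqtt:
  host: mqtt.local
  client_id: frigate_front        # a unique client_id lets multiple frigate instances share one broker

ffmpeg:
  # global defaults that individual cameras can override
  output_args:
    - -vf
    - mpdecimate                  # drop near-duplicate frames before detection

cameras:
  front:
    regions:
      - size: 300
        x_offset: 0
        y_offset: 300
        max_person_area: 100000   # ignore "person" boxes larger than this (in pixels)
```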

Nice update, the config changes are really clear and so far it runs nice and stable.

Thanks

I am looking for some feedback on what everyone would like to see in the next major release. Please vote in the poll below or let me know if you have additional ideas.

  • Performance enhancements for running on Raspberry Pi to lower CPU usage and support more cameras per device
  • Official ARM docker builds to support Raspberry Pi
  • Reporting on any object type (cars, etc.) available in the model
  • Dynamic regions that resize and follow detected objects (at the moment, people are often missed when they stand between regions)
  • Face detection and identification
  • Save detections for training custom models or transfer learning

Got my Google Coral stick today, really looking forward to setting this up (running two 2 MP Hikvision IP cams, some cheapie 720p no-name cams, and some Wyze cams).

If you do pursue reporting on any object type, it would be great to be able to specify the object types per bounding box. If that's too tall an order, I'd guess you could just pull another RTSP stream and define the objects to detect on the additional stream(s).

Absolutely. I intend to allow everything to be set at the region, camera, or global levels. Setting a value at the camera level would override the global default, and region would override the camera default.

Update working fine on Unraid too, just hit the update button and edit the config file. Not too many changes.

P.S. Has anyone had much success with hardware acceleration? It all seems a bit vague to me. I run pretty huge detection areas, so I would like to bring CPU usage down where I can.

Do you have an Intel processor? The CPU usage comes from decoding the video stream and resizing your regions. You should be able to enable hardware acceleration for ffmpeg if you have an Intel processor with the example in the docs.

You also want to use the stream from your camera that results in your smallest region being as close to 300x300 as possible. You get no additional accuracy with a higher resolution, because the model input is 300x300px. If you just have one large region for the camera, you will get the same accuracy with a 360p video feed as you do with an 8K video feed. Using the higher resolution just makes your machine work really hard to resize the large image down to 300x300px.
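As a rough example of the kind of override involved, Intel VAAPI decoding might look something like the sketch below. The ffmpeg flags themselves are standard ffmpeg options, but the YAML key name is an assumption based on the "global defaults overridable per camera" layout, so follow the example in the docs rather than this verbatim.

```yaml
# Hypothetical sketch of Intel hardware-accelerated decode via VAAPI.
ffmpeg:
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128         # Intel GPU render node on most systems
    - -hwaccel_output_format
    - yuv420p
```

You will also need to pass the GPU device into the container (e.g. --device /dev/dri with docker run) for VAAPI to be usable inside it.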

Just pushed up a new beta release: https://github.com/blakeblackshear/frigate/releases/tag/v0.3.0-beta

Breaking Changes:

  • Configuration file changes to support all objects in the model. See updated example.
  • Images are now served up at /<camera_name>/<object_name>/best.jpg
  • MQTT messages are published to <camera_name>/<object_name> and <camera_name>/<object_name>/snapshot

Changes:

  • Frigate now reports on every object type in the model. You can configure thresholds and min/max areas for each object type at a global, camera, or region level.

Image is available with docker pull blakeblackshear/frigate:0.3.0-beta
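To illustrate the per-object settings, here is a hypothetical fragment showing a global default with a camera-level override. The key names (track, filters, min_area, max_area, threshold) are assumptions based on the changelog wording; the updated example config linked from the README is authoritative.

```yaml
# Hypothetical sketch - see the updated example config for the real key names.
objects:
  track:
    - person
    - car
  filters:
    person:
      min_area: 5000              # ignore tiny false positives
      max_area: 100000            # ignore boxes too large to be a real person
      threshold: 0.5

cameras:
  driveway:
    objects:
      filters:
        car:
          threshold: 0.7          # require higher confidence for cars on this camera
```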

Stupid question, but where is the updated example? Is it the one linked in the readme?

Thank you! Very excited to try this new beta

Wow, that was quick! Looking forward to trying this one.

Yep, I have Intel processors. I just found that uncommenting any of the hardware acceleration features stopped it from starting.

I'll give it another go though.

A couple questions about the Docker container builds, regarding OpenCV:

  1. Do you know if there are any particular differences between the version that is compiled manually and the one provided by opencv-python? I believe they will use different versions of the underlying libraries (the manual build pulls in its own copies), but otherwise I'm not sure.
  2. Any reasons not to use opencv-python?
  3. Any reasons not to use OpenCV 4.1? I am now using opencv-python 4.1.0.25 and it seems to work fine.

I'm asking because I have a couple of Dockerfiles for Raspberry Pi. One builds OpenCV manually (mimicking the GitHub Dockerfile as much as possible), while the other just installs opencv-python-headless (via piwheels). The OpenCV build takes forever (possibly hours) on an RPi (or even cross-compiling). It's much simpler and faster to just install the wheel.

The new beta for all objects is working well, with one small issue.

If there is a car parked in the detection area, it is detected and a message is fired off almost continually (128 messages in 4 minutes). Is there a way to only alert on new detections or changes?

Thanks

I don't remember why I am installing from source. When I start building and publishing ARM images, I will look at optimizing it again.

If you already have your threshold at 0.5 for cars, there isn't a good way to do anything about it without some significant changes.
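One possible workaround on the Home Assistant side rather than in frigate itself: wrap the topic in an MQTT binary_sensor with an off_delay, so automations trigger on the sensor turning on instead of on every message. This is only a sketch; the topic name and payload below are assumptions, so check what your broker actually receives from frigate first.

```yaml
# Hypothetical Home Assistant config (not a frigate feature).
# off_delay keeps the sensor "on" while the repeated car messages arrive,
# so an automation triggered on the on-transition fires once per parking event.
binary_sensor:
  - platform: mqtt
    name: Driveway car
    state_topic: "driveway/car"   # <camera_name>/<object_name>
    payload_on: "ON"              # assumption - match whatever frigate actually publishes
    off_delay: 600                # seconds without a new message before resetting to off
```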

OK, thanks for replying. I was guessing that a level of refactoring would be needed, even just to ask "has the detection area / confidence level changed".

I'd much rather see local face recognition though :wink:

Great work - really nice project :slight_smile: