Local realtime person detection for RTSP cameras

where is that from?

HA Discord… https://discordapp.com/channels/330944238910963714/575024314525548554/608435102287921162

oh cool. thanks for sharing dude. i just got one to play with

It would be very useful to be able to specify the object to be detected on a per region level.


I didn't see this problem reported elsewhere in this forum thread or on discord. Sorry if I missed it. Looks like a number of folks are able to run a pre-built image on their RPI4, but that's not my case.

Pulling the latest docker image from
https://hub.docker.com/r/blakeblackshear/frigate

with

docker pull blakeblackshear/frigate

and running it as documented

produces the following error on my RPI4 with 'Raspbian GNU/Linux 10 (buster)'.

standard_init_linux.go:211: exec user process caused "exec format error"

This seems to be an error due to architecture mismatch. Is the pre-built image supposed to run on RPI4?
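To confirm an architecture mismatch like this, you can compare what the host reports against what the pulled image was built for. `docker image inspect` is the standard Docker CLI for the latter; the snippet is guarded so it's safe to paste on a machine without Docker:

```shell
# What CPU architecture is this host? (RPi4 reports aarch64 or armv7l)
uname -m

# What architecture was the pulled image built for?
if command -v docker >/dev/null 2>&1; then
  docker image inspect blakeblackshear/frigate \
    --format '{{.Os}}/{{.Architecture}}' || true
fi
```

If the two don't match (e.g. `linux/amd64` image on an `armv7l` host), you get exactly this `exec format error`.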

No prebuilt images for pi. You need to build it yourself :slightly_smiling_face:
Command is on the readme ( docker build -t frigate . )

Comment out i965-va-driver from dockerfile
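A quick way to script that edit before building (assuming `i965-va-driver`, an x86-only Intel VA-API package, sits on its own line of the Dockerfile's apt-get install list) is to delete the line rather than comment it, since a `#` inside a backslash-continued `RUN` would break the command:

```shell
# Drop the x86-only VA-API driver line, then build for the Pi.
# Guarded so this is a no-op when there's no Dockerfile in the cwd.
if [ -f Dockerfile ]; then
  sed -i.bak '/i965-va-driver/d' Dockerfile
  docker build -t frigate .
fi
```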

Thank you @mr-onion. That worked.

Sharing my docker image for RPI4 Buster in case it may save the next person a few hours.
https://hub.docker.com/r/ivelin/frigate

docker pull ivelin/frigate:unofficial_pi4_aug_20_2019

@blakeblackshear Thank you for running this great project. Please let me know if you see any issues with me sharing an unofficial rpi4 image while you are working on the official release.

@uid0 It looks like you have a better setup of your RPI4 than I do. Here are my benchmark.py results for an RPI4 with 4GB RAM and a 32GB SD card, running Raspbian 10 Buster with a freshly built frigate docker image from source:

Coral on USB3: 24ms consistently between multiple runs
Coral on USB2: 51ms consistently between multiple runs

EDIT: After turning off all homeassistant related processes the Coral USB 3 inference benchmark drops to 19-20ms.

I bought a second RPi4 and Coral for this – might take a long while to put it together, but that is the goal.


4 Gigs of memory or less?

Yes, 4GB. Updated my post to clarify.

@mr-onion Thank you for sharing these parameters. They did have the described positive effect on my system.

On a related note, I experimented with a version that processes only keyframes (reference frames) and it looks promising. CPU came down from 40% to 8% on RPI4.

-skip_frame nokey

These frames have two interesting properties:

  1. They are full camera image captures with no compression artifacts
  2. They usually represent enough change in the picture (high entropy) that delta compression against the previous frame isn't effective, implying there may be something new in the frame worth looking at
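You can try the keyframe-only decode outside Frigate with ffmpeg's `-skip_frame nokey` input option. The sketch below runs it against ffmpeg's built-in `testsrc` so it's self-contained; against a real camera you'd replace the lavfi input with your RTSP URL:

```shell
# -skip_frame nokey tells the decoder to drop everything but keyframes,
# which is where the CPU saving comes from. Swap the lavfi test source
# for something like: -i rtsp://user:pass@camera/stream
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -hide_banner -loglevel error \
    -skip_frame nokey \
    -f lavfi -i testsrc=duration=1:size=320x240:rate=5 \
    -f null -
fi
```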

More on this here.

If anyone is interested in testing this and confirming, or in suggesting something different, please share your findings. I'll keep an eye out.

I went ahead and released 0.2.1: https://github.com/blakeblackshear/frigate/releases/tag/v0.2.1

  • Push best person images over MQTT for more realtime updates in homeassistant
  • Attempt to gracefully terminate the ffmpeg process before killing
  • Tweak the default input params to discard corrupt frames and ignore timestamps in the video feed
  • Increase the watchdog timeout to 10 seconds
  • Allow ffmpeg_input_args, ffmpeg_output_args, and ffmpeg_log_level to be passed in the config for customization
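With the new `ffmpeg_*` config keys, experiments like the keyframe-only decode mentioned earlier in the thread no longer require a rebuild. The fragment below is only a sketch of the shape these keys might take — check the v0.2.1 README for the exact placement and defaults:

```yaml
# Sketch only -- consult the v0.2.1 README for actual key placement.
# -skip_frame nokey is the keyframe-only experiment from this thread.
ffmpeg_input_args:
  - -skip_frame
  - nokey
ffmpeg_log_level: info
```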

Thanks for all the hard work! Maybe I've missed it, but is there any documentation on how to use the best person images that come over MQTT?

edit - derp! I see the documentation now includes this. Disregard, thanks!

Cool, nice update. Great work! Quick question: does best person ignore the imagemap? I see a lot of humans that should have been weeded out by the imagemap in best_person…

The best_person.jpg always returns the most recent person detected, even if they didn't trigger the threshold. The image is pushed to MQTT at the same time the person sensor crosses the threshold. You shouldn't have the same issue with the MQTT camera.
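If you want to grab those MQTT images outside of Home Assistant, the Mosquitto client tools work too. The broker host and topic name below are placeholders, not Frigate's actual defaults — use the values from your own config:

```shell
# Save the next best-person image pushed over MQTT as a JPEG.
# 'mqtt.local' and 'frigate/best_person' are placeholder values.
if command -v mosquitto_sub >/dev/null 2>&1; then
  # -C 1: exit after one message; -W 30: give up after 30 seconds.
  mosquitto_sub -h mqtt.local -t 'frigate/best_person' \
    -C 1 -W 30 > best_person.jpg || true
fi
```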


I finally got my RPi4, and I just pushed up a new multi architecture docker image that works on x86 and ARM. You can now pull blakeblackshear/frigate:0.2.2-beta instead of building yourself on the RPi4. Now I need to get hardware acceleration working to reduce CPU usage.


Awesome, thanks man!! I'm gonna hook it up tonight to a 2g. Did you get a 4g? I just saw today that Microcenter has 4g in stock now… I'm guessing it would help with this.

I have a 4g, but I wouldn't expect any benefit for frigate from the extra 2g of RAM. It's all about the USB3 speeds and CPU. The benchmark script clocks in at ~16ms for me.

Oh alright, that works. Are you running Buster? I might try one of those slimmed-down OSes.