Local realtime person detection for RTSP cameras

Congrats!

  1. One way to do it is to take a snapshot of the image from the live stream, paste it into a graphics program (e.g. GIMP), and use the selection tool or equivalent to draw out the optimal box; it should display the coordinates and size.

  2. The mask is optional, just in case you didn't know. I've not used it.

  3. The easiest way, I believe, is to open the live stream on something portable and walk into the camera view. The detected object size is printed on the bounding box in real time. You might want to lower the minimum size initially.
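The coordinates you measure in GIMP map straight onto a region entry in the config. A rough sketch of what that looks like (key names follow the example config in the frigate README of that era; double-check against your version, and the mask line is the optional part mentioned in point 2):

```yaml
cameras:
  back:
    regions:
      # box drawn in GIMP: 350x350 square with its top-left corner at (0, 300)
      - size: 350
        x_offset: 0
        y_offset: 300
        # start low and raise it once you've watched real detections (point 3)
        min_person_area: 5000
        # optional mask image, as noted in point 2
        # mask: back-mask.bmp
```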

Live view, just in case you had not seen it: http://host:5000/<camera_name>

Thanks man, I couldn't have got through this without the help! Purely due to my lack of skills :) But I did learn quite a bit. I've now got 3 cameras set up, regions and all! It seems very stable, very fast, and very accurate, by far the best recognition I've ever played with! Going to start setting some automations up now. Thanks again, I wish I could buy you guys a beer or coffee or steak lol.

Hey, quick question. I apologize if this isn't the correct spot, but I want to make sure I don't break this :)

There is another Docker project that detects cars using the Coral. Is it possible to have two Docker containers both using the same Coral? I'm going to check whether you can use two Corals on the same machine, just in case.

thanks again guys!

It is not possible to share one Coral. There is already a feature request to add cars to frigate; I may be able to add it in the next release.


Oh ok, thank you for the quick response. I'll just hang tight. I'd offer my help, but I know I'd just get in your way. If there's anything I can do, though, please let me know.

Gotta say thank you again. This is solid. I've had it running this whole time with 3 cameras, and I've turned off all my motion alerts from sensors, doorbells, and the NVR; I just have Home Assistant notify my cell with a person alert and the best picture. Works flawlessly and damn near instantly. I get the alert within 2 seconds every time, even at night, on any camera. Thanks again for sharing, dude, you're my hero.


+1 for face detection. It’d be awesome to use frigate to detect a person and face and then pass on that image to another Docker container for face recognition.

I’m dreaming of automations that could send a notification saying “Bill and Sally are at your front door” in just a couple of seconds!

Thanks for all your hard work @blakeblackshear.

Any chance this works with the Jetson Nano as well?

I saw this a few days ago re the Jetson Nano -

Where is that from?

HA Discord: https://discordapp.com/channels/330944238910963714/575024314525548554/608435102287921162

Oh cool, thanks for sharing, dude. I just got one to play with.

It would be very useful to be able to specify the object to be detected on a per-region level.


I didn't see this problem reported elsewhere in this forum thread or on Discord. Sorry if I missed it. Looks like a number of folks are able to run a pre-built image on their RPI4, but that's not my case.

Pulling the latest docker image from
https://hub.docker.com/r/blakeblackshear/frigate

with

docker pull blakeblackshear/frigate

and running it as documented

produces the following error on my RPI4 with ‘Raspbian GNU/Linux 10 (buster)’.

standard_init_linux.go:211: exec user process caused "exec format error"

This seems to be an error due to architecture mismatch. Is the pre-built image supposed to run on RPI4?
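A quick way to confirm the mismatch from the host side (a sketch; the mapping table only covers the common cases, and the function name is mine, not from frigate):

```python
import platform

# Docker publishes images per architecture, while `uname -m` /
# platform.machine() reports the kernel's name for the same thing.
# Map one naming scheme to the other so the two can be compared.
DOCKER_ARCH = {
    "x86_64": "amd64",
    "aarch64": "arm64",
    "armv7l": "arm",
}

def to_docker_arch(machine: str) -> str:
    """Translate a `uname -m` machine name to Docker's architecture name."""
    return DOCKER_ARCH.get(machine, machine)

# On an RPI4 running 32-bit Raspbian this prints "arm", while the Docker Hub
# image was built for amd64 -- hence the "exec format error".
print(to_docker_arch(platform.machine()))
```

You can see what an image was actually built for with `docker image inspect <image> --format '{{.Architecture}}'`.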

No prebuilt images for the Pi. You need to build it yourself :slightly_smiling_face:
The command is in the readme ( docker build -t frigate . )

Comment out i965-va-driver in the Dockerfile.
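If you're scripting the build, that edit is simple to automate. A hypothetical helper (the function name is mine, not part of frigate; the apt line shown is just an illustration, so check the actual line in your copy of the Dockerfile):

```python
def comment_out(dockerfile_text: str, needle: str = "i965-va-driver") -> str:
    """Return the Dockerfile text with any line mentioning `needle`
    commented out (the Intel VA driver has no package on ARM)."""
    return "\n".join(
        "# " + line if needle in line and not line.lstrip().startswith("#") else line
        for line in dockerfile_text.splitlines()
    )

# example: an install line mentioning the driver gets disabled
print(comment_out("RUN apt -y install i965-va-driver ffmpeg"))
# -> # RUN apt -y install i965-va-driver ffmpeg
```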

Thank you @mr-onion. That worked.

Sharing my docker image for RPI4 Buster in case it may save the next person a few hours.
https://hub.docker.com/r/ivelin/frigate

docker pull ivelin/frigate:unofficial_pi4_aug_20_2019

@blakeblackshear Thank you for running this great project. Please let me know if you see any issues with me sharing an unofficial rpi4 image while you are working on the official release.

@uid0 It looks like you have a better setup on your RPI4 than mine. Here are my benchmark.py results for an RPI4 with 4GB RAM and a 32GB SD card, running Raspbian 10 Buster with a freshly built frigate docker image from source:

Coral on USB3: 24ms, consistent across multiple runs
Coral on USB2: 51ms, consistent across multiple runs

EDIT: After turning off all Home Assistant-related processes, the Coral USB3 inference benchmark drops to 19-20ms.
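For anyone comparing numbers: these figures are per-inference latency, which you can approximate for any callable along these lines (a generic sketch, not frigate's actual benchmark.py; swap the stand-in workload for your Edge TPU invoke call):

```python
import statistics
import time

def median_latency_ms(fn, runs: int = 100) -> float:
    """Call fn() `runs` times and return the median wall-clock latency in ms.

    Median rather than mean, so a few slow runs (USB hiccups, other
    processes competing for the CPU) don't skew the figure.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# stand-in workload; on a Pi + Coral you'd time the interpreter's invoke() instead
print(f"median: {median_latency_ms(lambda: sum(range(10_000))):.2f} ms")
```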

I bought a second RPI4 and Coral for this. It might take a long while to put it together, but that is the goal.


4 Gigs of memory or less?

Yes, 4GB. Updated my post to clarify.