One way to do it is to take a screenshot of the live stream, paste it into a graphics program (e.g. GIMP), and use the selection tool (or equivalent) to draw out the optimal box size; the program should display the coordinates and dimensions for you.
The mask is optional, just in case you didn't know. I've not used it.
The easiest way, I believe, is to open the live stream on something portable and walk into the camera's view. The detected object size is printed on the bounding box in real time. You might want to lower the size threshold initially.
Live view, just in case you had not seen it: http://host:5000/<camera_name>
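If you'd rather not read the numbers off GIMP by hand, a tiny helper can turn a selection into corner coordinates and an area. This is just an illustrative sketch of my own (the function name and the assumption that the selection tool reports a top-left position plus width/height are mine, not anything from Frigate):

```python
def selection_to_box(x, y, w, h):
    """Convert a GIMP-style selection (top-left x/y plus width/height)
    into corner coordinates and the box area."""
    x_min, y_min = x, y
    x_max, y_max = x + w, y + h
    area = w * h  # handy when tuning a min/max object size
    return (x_min, y_min, x_max, y_max), area

# Example: a 300x300 selection starting at (120, 80)
box, area = selection_to_box(120, 80, 300, 300)
print(box, area)  # (120, 80, 420, 380) 90000
```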
Thanks, man, I couldn't have got through this without the help! Purely due to my lack of skills. :) But I did learn quite a bit. I've now got 3 cameras set up, regions and all! Seems very stable, very fast, and very accurate... by far the best recognition I've ever played with! Going to start setting some automations up now. Thanks again, I wish I could buy you guys a beer or coffee or steak, lol.
Hey, quick question. I apologize if this isn't the correct spot, but I want to make sure I don't break this. :)
There is another Docker project that detects cars using the Coral. Is it possible to have two Docker containers both using the same Coral? I'm going to check whether you can use two Corals on the same machine, just in case.
Oh OK, thank you for the quick response. I'll just hang tight... I'd offer my help, but I know I'd just get in your way... if there's anything I can do, though, please let me know.
Gotta say thank you again. This is solid. I've had it running this whole time with 3 cameras, and I've turned off all my motion alerts from sensors, doorbells, and the NVR, and just have Home Assistant notify my cell with a person alert and the best picture. Works flawlessly and damn near instantly; I get it within 2 seconds every time, even at night, on any camera. Thanks again for sharing, dude, you're my hero.
+1 for face detection. It'd be awesome to use Frigate to detect a person and face, and then pass that image on to another Docker container for face recognition.
I'm dreaming of automations that could send a notification saying "Bill and Sally are at your front door" in just a couple of seconds!
I didn’t see this problem reported elsewhere in this forum thread or on discord. Sorry if I missed it. Looks like a number of folks are able to run a pre-built image on their RPI4, but that’s not my case.
@blakeblackshear Thank you for running this great project. Please let me know if you see any issues with me sharing an unofficial rpi4 image while you are working on the official release.
@uid0 It looks like you have a better setup of your RPi4 than I do. Here are my benchmark.py results for an RPi4 (4GB RAM, 32GB SSD) running Raspbian 10 Buster with a freshly built Frigate Docker image from source:
Coral on USB3: 24ms consistently between multiple runs
Coral on USB2: 51ms consistently between multiple runs
EDIT: After turning off all Home Assistant-related processes, the Coral USB 3 inference benchmark drops to 19-20ms.
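For anyone wanting to reproduce numbers like these, the measurement itself is just repeated timed inference calls. Here's a minimal timing harness along those lines; it's a sketch of my own, and the `run_inference` stub is a placeholder where the actual Edge TPU model invocation (as in Frigate's benchmark.py) would go:

```python
import statistics
import time

def run_inference():
    # Placeholder for the actual Edge TPU call; here we just burn
    # a little CPU so the harness has something to measure.
    sum(i * i for i in range(10_000))

def benchmark(fn, runs=100, warmup=5):
    """Time `fn` over `runs` iterations and return (mean, best) latency in ms."""
    for _ in range(warmup):  # discard warm-up runs
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(samples), min(samples)

mean_ms, best_ms = benchmark(run_inference)
print(f"mean: {mean_ms:.1f} ms, best: {best_ms:.1f} ms")
```

Running other processes (like Home Assistant) on the same box will inflate these numbers, which is consistent with the drop seen after shutting them off.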