Local realtime person detection for RTSP cameras

What are the current hardware specifications of your setup for running 4 cameras successfully?

A 4-core i7 laptop. I'm only running 2 zones per feed though… But it works pretty well. Way better than my Pi running the standard Home Assistant TensorFlow component, at least.

Does it work on regular Hassio? I'm running Proxmox and a Hassio image on the NUC I got on eBay (NUC5i5RYK), so is there a simple way to install it? I have two Reolink cameras that I would like to use with TensorFlow, but this is still new to me and I'm not really sure how to install it. Any help would be appreciated.

The next version will require a Coral device. The Python libraries used for object detection are specific to the Coral and don't work with a CPU. Also, because it reduces CPU usage so much, I am able to run detection in a thread rather than a separate process and skip motion detection altogether. I may add support for CPU-based detection with motion detection again in a future version, but I don't expect to ever use it again.
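
For anyone curious, the Coral-specific libraries look roughly like this. This is only a sketch based on Google's edgetpu Python API examples, not necessarily how frigate calls it internally; the model file, image, and threshold are placeholders:

```python
from edgetpu.detection.engine import DetectionEngine  # Coral-only library, no CPU fallback
from PIL import Image

# Placeholder: any SSD model compiled for the Edge TPU and a test frame
MODEL_PATH = "mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite"

engine = DetectionEngine(MODEL_PATH)  # fails to load without a Coral attached
frame = Image.open("frame.jpg")

# Run detection on the Edge TPU; threshold and top_k are arbitrary here
detections = engine.DetectWithImage(
    frame, threshold=0.5, keep_aspect_ratio=True, relative_coord=False, top_k=10
)

for obj in detections:
    print(obj.label_id, obj.score, obj.bounding_box)
```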

Well, the current CPU version works pretty well; you should at least leave it available for people who, for whatever reason, don't want to use a Coral (though I am not one of them). And I do think motion detection still has some value, since it reduces multiple detections of the same object by different zones at the same time.

The old version will still be available, just not sure how much support it will get. The Coral leans toward a very different architecture, and I will need to update my diagram quite a bit. I do plan to try to incorporate motion detection again, as it's possible you would want to see motion events where no objects were detected. Ultimately I want to remove regions altogether, scan for objects in the whole image with some kind of mask, and track detected objects as they move.
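
The masking concept could be as simple as something like this. Purely a sketch, not how it will necessarily end up being implemented; the file names are placeholders:

```python
import cv2

# Hypothetical mask image, same size as the frame: white pixels are kept,
# black pixels (trees, timestamps, the neighbor's driveway) are ignored.
frame = cv2.imread("frame.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Zero out the masked-off areas so objects there never trigger detections
masked_frame = cv2.bitwise_and(frame, frame, mask=mask)
```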

That sounds like a pretty great way to do it… I'm sure you will come to a very nice solution in the end, and it's already working better than what I had before. Coral support will put this component head and shoulders above the other options, and it will just keep getting better from there. I'm excited to watch the progress.

I have multi-camera support working now. Just need to clean up the code and update the docs.

I was able to wrap everything up this morning before my kids woke up. Latest version is ready. Hopefully some of you have a Google Coral already.

Note that the readme was updated with some significant changes.

New container also pushed to Docker Hub as blakeblackshear/frigate:edgetpu if you don't want to build your own.

Awesome, thanks.
Now waiting for my Coral to arrive.

Hi, I am testing version v0.0.1 and I get this message:
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

Any chance of getting TensorFlow to take advantage of the AVX2 extensions?

You have to build TensorFlow from source yourself to get CPU-specific features. I didn't see a big performance boost when I did it myself. I have some steps to build it at the bottom of the readme. Also, Google has good docs on building from source. It takes a long time.

Is there a way to build this without the Google Coral… I don't like cloud and Google services :frowning:

The Coral is neither a cloud service nor a Google service.

If you don't like Google, you might not like to hear that TensorFlow was developed by Google.

The Coral is a USB stick with a special processor (yes, developed by Google) which vastly accelerates AI inference.

If you want to use your CPU instead, you can use the old version of the component, but be prepared to dedicate a good i7 machine to the task. Or spend $75 and just get a Coral and use your machine for other things.

My Coral is in the mail ;-).

To be clear, everything is processed locally. Nothing is sent outside of your network. You could build a server with a GPU, but that would be more expensive and still require Google libraries. This just uses the open source libraries and a special processor developed by Google. It doesn't use any "services" from Google.

I got my second Coral in the mail today. I was worried about the ambient temperature recommendations, so I decided to disassemble it. There are 2 chips with thermal pads connecting to the aluminum heatsink. I am sure I have an air gap between the pad and the heatsink now, so I need to replace it or use a different heatsink.

Ok… my fault… I didn't know about this device, as I'm located in Europe and it hasn't been publicized that well here yet… I'll look into it and maybe try it :wink:

Thanks so far

I have just ordered a Coral, looking forward to giving this a go.

Out of curiosity, I attempted to use multiple Corals on one machine and it did not work. At this point, I don't have a reason to use more than one on the same machine, but the process fails to load the engine if I have more than one plugged in at the same time. It may be possible with VMs, assuming you can expose one to each VM.

I was looking into support for hardware-accelerated decoding and realized that OpenCV doesn't use the hardware acceleration built into FFmpeg. I did some basic tests, and it makes a substantial difference. On the laptop where I am running frigate, decoding a 25 FPS 1080p stream uses about 25% of a single CPU. Switching FFmpeg to use hardware acceleration drops that to <2%. I can read raw frames from a separate FFmpeg process over a pipe. Once I add support for this, even low-powered machines should be able to perform object detection on many cameras. Still not sure the RPi will be viable, since it doesn't have a USB 3.0 bus.
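
Roughly, the idea looks like this. A minimal sketch only; the hwaccel method, resolution, and camera URL are placeholders and will differ per setup:

```python
import subprocess
import numpy as np

WIDTH, HEIGHT = 1920, 1080
FRAME_SIZE = WIDTH * HEIGHT * 3  # bytes per raw bgr24 frame

# Decode with ffmpeg (optionally hardware accelerated) and write raw frames to stdout
ffmpeg_cmd = [
    "ffmpeg",
    "-hwaccel", "vaapi",          # placeholder: use whatever hwaccel your platform supports
    "-i", "rtsp://camera-url",    # placeholder RTSP URL
    "-f", "rawvideo",
    "-pix_fmt", "bgr24",
    "pipe:",
]

proc = subprocess.Popen(ffmpeg_cmd, stdout=subprocess.PIPE, bufsize=FRAME_SIZE * 10)

while True:
    raw = proc.stdout.read(FRAME_SIZE)
    if len(raw) < FRAME_SIZE:
        break  # stream ended or ffmpeg exited
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))
    # hand the frame off to object detection here
```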
