Local realtime person detection for RTSP cameras

If I add another Coral to the same server and set up the Docker container a second time with a different name, how will the two containers find the two USB Corals? In other words, how do I assign one Coral per Docker instance?

It should be fine as long as the ambient temperature in the room isn't too high. You must have a lot of motion across those cameras. Are there areas you can mask off, such as tree tops, etc? I will need to make a few changes in the next version to support linking a docker instance to a specific Coral device. With multiple Coral devices, you can run into power limitations (frigate uses the maximum frequency) and USB speed bottlenecks as described here: https://coral.ai/docs/edgetpu/multiple-edgetpu/#performance-considerations
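For reference, one way to pin each container to a single Coral (once frigate supports selecting a device) would be to pass only that device node through to Docker. This is a sketch, not working frigate config: the bus/device numbers come from `lsusb` output and are purely illustrative, and note they can change when a device is replugged, so a udev rule giving each Coral a stable symlink is more robust.

```yaml
# docker-compose sketch - device paths are illustrative, taken from `lsusb`
services:
  frigate1:
    image: blakeblackshear/frigate:stable
    devices:
      - /dev/bus/usb/001/004:/dev/bus/usb/001/004   # first Coral only
  frigate2:
    image: blakeblackshear/frigate:stable
    devices:
      - /dev/bus/usb/001/005:/dev/bus/usb/001/005   # second Coral only
```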

God bless you. For me the solution will be to use multiple Corals. For USB I can use a USB hub, something that works with Ubuntu.

I have an Intel NUC6CAYH which runs Hass.io, and I am looking to run this using Portainer.

Could someone advise which of these two Google Corals would be the best to purchase? I suspect it's the first, but just wanted to be sure.

https://au.mouser.com/new/google-coral/coral-m2-accelerator-ae/
https://au.mouser.com/new/google-coral/coral-mini-pcie-accelerator/

Are there any issues with passing the internal device through to the frigate docker container?

You need the USB one (USB Accelerator | Coral), but your NUC can be the weakest link because you may struggle to enable hardware acceleration on the Intel CPU. I am actually looking for ways to do it; I have a Skylake CPU and can't get HW acceleration working yet.


Had the same issue, and tried to get around it by adding scikit-build to the Dockerfile. That didn't work - it just led to cmake errors.

In the end, a bit of googling suggested that it was an out-of-date pip, so I added:

&& python3.7 -m pip install -U pip \

Immediately before the first pip install line, and that fixed it.
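In context, the fix sits in the Dockerfile something like this. The surrounding `RUN` line and package names are illustrative, not frigate's actual Dockerfile contents:

```dockerfile
# Illustrative fragment - upgrade pip before installing Python packages,
# so packages that need newer build tooling (e.g. scikit-build) install cleanly
RUN apt-get -qq update && apt-get -qq install --no-install-recommends -y python3.7 python3-pip \
    && python3.7 -m pip install -U pip \
    && python3.7 -m pip install numpy
```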


What is the issue the NUC has with hardware acceleration?

I'm running 4 x 4K Hikvision cameras using their own NVR, if that helps.

OK, you mentioned that you have an NVR. In my case the same server that runs the Docker containers is also the NVR, so in your case there will be no issue with CPU/HW acceleration. In my setup, 50% of the CPU/GPU resources go to the NVR function.

Not sure anyone here uses AMD GPUs, but for those who do, I recommend installing the amdgpu-pro Vulkan drivers and the Vulkan SDK; it helps offload the CPU. Luckily, frigate doesn't need much CPU anyway.

I'm having a really hard time finding the USB version for sale anywhere in Australia. Is there any way to make the M.2 version work with Hass.io?

Really liking this so far. I was using Zoneminder, and the capture process in Zoneminder just can't cope with high-res cameras - this does the job, so I can switch over to getting the best out of my cameras.

I have a couple of suggestions (and sorry if they're in the thread - it's a long one, I may have missed a few posts!):

First, it would be useful to switch off cameras dynamically - either completely, or just the saving clips function. Take my back garden camera for example - if I'm out, I want to know if there's motion, but if I'm home I'll get constant alerts when I'm, say, mowing the lawn. It's easy enough to control the alerts from the Home Assistant side, but there's no need to save clips when I'm home either - in fact, the whole process could be stopped for that camera.

Currently I use two switches per camera in HA to control Zoneminder, giving three possible states:

  • Detecting motion in ZM and sending alerts from HA.
  • Detecting motion in ZM but not sending alerts from HA.
  • Not detecting motion in ZM (HA alert toggle irrelevant in that case).

If some of the config could be controlled dynamically, I could do the same thing with this - I'm thinking either an HTTP endpoint, or perhaps an MQTT switch?
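Until something like that exists in frigate, the HA side of such a switch could look like this. The topics are entirely hypothetical - frigate would need to subscribe to the command topic and publish the state topic for this to work:

```yaml
# Hypothetical MQTT switch - the frigate/back_garden/detect/* topics are invented for illustration
switch:
  - platform: mqtt
    name: Back Garden Detection
    command_topic: "frigate/back_garden/detect/set"
    state_topic: "frigate/back_garden/detect/state"
    payload_on: "ON"
    payload_off: "OFF"
```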

Secondly, it would be great to build an interface to the clips - initially perhaps another endpoint that just lists them, but ultimately an interface to view them? If that could be integrated into a side menu in HA, even better :slight_smile:

I'm happy to look at the code and see what I can do, but there are some choices to be made on the architectural side first.

Good suggestion. I have been running Zoneminder and Shinobi too, but I must say this frigate is the absolute winner just because of the object detection function, and it's even stable, which is not always the case with similar GitHub projects. I could suggest only one, but most important, thing to improve: the possibility to use more accurate models for detection, so a tree would not be identified as a person :slight_smile: But the detection is not bad at all, just not perfect.

There is an open issue for turning cameras on and off via MQTT, and I am planning to build a UI to view clips and recorded footage directly in HA. If you would like to contribute, it would probably be best to talk about where the architecture is headed at the moment.

I'm starting to have problems with Home Assistant showing/triggering sensor state changes in response to MQTT messages. It was fine a few days ago, and somehow it stopped working. I can verify that the message is received by Mosquitto, but the sensor in Home Assistant always shows as clear, and the camera entity still shows snapshots from days ago.
Here is my sensor/camera setup.

camera:
  - name: Driveway Last Person
    platform: mqtt
    topic: frigate/driveway/person/snapshot

binary_sensor:
  - name: Driveway Person
    platform: mqtt
    state_topic: "frigate/driveway/person"
    device_class: motion
    availability_topic: "frigate/available"

Please help!
Edit: MQTT works fine for other things.

Sounds good. I'll check the open issue, and comment if I have any input.

Were you thinking about making the UI something available through HACS? It would be good to keep frigate isolated on the internal network so that it doesn't need its own authentication, and let HA be effectively a proxy by calling the endpoints.

I have no plans to implement my own authentication. My tentative plan is to implement a generic "reverse-proxy" component in HA that can be used to proxy to any services and leverage built-in auth. That would expose frigate's API at /api/proxy/frigate... or something similar. Hoping to leverage lovelace with custom cards/panels to build a UI. I have almost all the pieces to do 24/7 recording, browse events, and review footage. I just need to pull it all together.

Sounds great. Another suggestion - would it be possible to have an option to use the substream for analysis but record the main stream?

Doing some testing, ffmpeg CPU use is pretty high with an HD camera. With a 1920x1080 stream at 12 fps, it's using around 27% CPU. I ran ffmpeg against the same camera to generate clips but without the "-f rawvideo -pix_fmt rgb24 pipe:" options, and CPU use dropped to about 2%.
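For comparison, the two invocations look roughly like this (the RTSP URL is a placeholder, and the segment options are just one way to write clips). The first decodes every frame to raw RGB for analysis; the second only remuxes the already-compressed stream, which is why CPU use collapses:

```shell
# Decode to raw frames for object detection - CPU-heavy (every frame is decompressed)
ffmpeg -i rtsp://CAMERA/stream -f rawvideo -pix_fmt rgb24 pipe:

# Copy the compressed stream straight to disk - cheap (no decode/encode)
ffmpeg -i rtsp://CAMERA/stream -c copy -f segment -segment_time 60 out%03d.mp4
```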

So I'm thinking maybe define both streams as separate cameras in the config, with a true/false option of "analyze"? Caching could be off for the substreams, and then have a parameter on the camera (or maybe zones?) to say which camera to grab clips from when an event occurs.

The substream could then be run at quite a low fps and it would be possible to bump up the fps on the main stream for smoother footage.

I think the substream images would still be fine for the mqtt images that are displayed in HA.
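As a sketch of what that might look like in the config - entirely hypothetical, since none of these keys exist in frigate today:

```yaml
# Hypothetical frigate config - `analyze` and `record_from` are invented for illustration
cameras:
  back_garden_sub:
    rtsp: rtsp://CAMERA/substream    # low-res, low-fps stream
    analyze: true                    # run detection on this stream
  back_garden_main:
    rtsp: rtsp://CAMERA/mainstream   # full-res, higher-fps stream
    analyze: false
    record_from: true                # grab clips from here when an event occurs
```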

That's the long term plan already. I currently run ffmpeg in a separate container for 24/7 recording. I am still thinking through the best approach to combine both of these. I don't necessarily want to disrupt my 24/7 recording when I update frigate, so I may keep it as a separate container and have a shared volume. There is no reason frigate couldn't use the existing recordings I already have to generate clips. I implemented it the way it is now so I could grab the footage as frigate sees it for false positive/negative testing to improve performance.

Hi Blake, it would be great if we could choose which objects trigger saved clips. For example, I want to track whether my car is in my garage or not, but I don't need to save a clip for that; I want to save a clip only if a person is there.

Makes sense. Can you open a GH issue?