Local realtime person detection for RTSP cameras

I see a lot of people going to a lot of trouble to use integrated Corals, and I've often wondered if I was missing something. I just went for 2 x USB 3.0 connected devices on the same USB 3.0 hub/port, and it seems fine to me :slight_smile:
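For anyone curious, running two USB Corals is mostly a matter of listing them as separate detectors in the Frigate config. A sketch of the relevant section (detector names are arbitrary; check the syntax against your Frigate version):

```yaml
# Two USB Corals on the same machine; Frigate enumerates them as usb:0 and usb:1.
detectors:
  coral1:
    type: edgetpu
    device: usb:0
  coral2:
    type: edgetpu
    device: usb:1
```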

I’m using an old Gigabyte Brix, i7, NUC-like unit.

There are 2 aspects to CPU load:

  • The FFmpeg slicing and dicing of images; this runs on the CPU, with hardware acceleration if it's set up and available
  • Detecting objects within the sliced/diced frames; this is what the Coral helps with quite considerably.
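For the first aspect, Frigate lets you hand FFmpeg hardware-acceleration arguments so decoding stays off the CPU. A minimal sketch for an Intel iGPU via VAAPI (the device path and args are assumptions for that platform; other GPUs need different args):

```yaml
ffmpeg:
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
    - -hwaccel_output_format
    - yuv420p
```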

Have you found a solution for this jerky problem? I have the same problem here with a similar setup.

@blakeblackshear

Thank you for a cool and useful product!
I want to build an automation based on Frigate that opens the gate when it identifies my or my wife's car. I plan to train my own model, but I haven't done it yet.
Can you tell me how to properly train my own model? Would this guide be suitable as a template (Training Custom Object Detector — TensorFlow 2 Object Detection API tutorial documentation)?
Which of the pre-trained models do you recommend: SSD MobileNet V2 320x320, SSD MobileNet V1 FPN 640x640, SSD MobileNet V2 FPNLite 320x320, or SSD MobileNet V2 FPNLite 640x640?
What size should the marked-up images be?

I'm surprised the inference speed is so good. Actually, that's not good for me, because it means I now have another toy on my wish list :slight_smile: Curious: are you running on bare metal or virtualized?

Dedicated box - supervised install.

…. so that when I break the storage or the proxmox nodes… I can still control lights etc :slight_smile:


Hi @tmjpugh, I don't see it in the Supervisor hardware panel, but this seems to be normal?

Here's a thread with all the steps I've already tried: Coral TPU not detected by HA blue for Frigate - #13 by dmertens

@Eoin
I’m getting 21 - 25 ms
On an old i3 (4th gen I think) with default ffmpeg options.


This may be interesting to you:


Oops, I made a serious rookie mistake when I installed and started using my Coral Accelerator. I didn't read the details closely enough, and this morning I realized that there's a serious performance gain to be had by using USB 3.0. I went down and moved it from a USB 2.0 port to a USB 3.0 port.

This is data from one Coral USB while detecting on 6/7 cameras. It was the crack of dawn, so the sun hadn't come up yet, and there were some cobwebs on one of the cameras. The load dropped quickly after the transition because I cleaned the cobwebs off that camera. The gain in bandwidth is huge!

My inference speed had bottomed out at ~30ms, which resulted in a maximum detection rate of about 30fps. I don't claim to know exactly how the system works, but I suspect the relationship is just the reciprocal of the inference time (1/0.030s ≈ 33fps). Anyway, after taking advantage of USB 3.0, the detection FPS ran at 70+fps for a bit before I cleaned off the cobwebs.
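That reciprocal relationship is easy to sanity-check: a single Coral processing one frame at a time can complete at most 1/t detections per second. A quick illustration (my own sketch, not Frigate code; the 12ms USB 3.0 figure is an assumption for the example):

```python
def max_detection_fps(inference_ms: float) -> float:
    """Upper bound on detections/sec for a single Coral, assuming
    inferences run back to back with no other overhead."""
    return 1000.0 / inference_ms

# USB 2.0: ~30 ms per inference -> ceiling of ~33 fps
print(round(max_detection_fps(30.0), 1))  # 33.3
# USB 3.0: ~12 ms per inference -> ceiling of ~83 fps
print(round(max_detection_fps(12.0), 1))  # 83.3
```

Real throughput lands below these ceilings because frame preparation and queueing also take time, which fits the observed ~30fps cap on USB 2.0.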

Anyway, I thought I’d post this here just in case anyone didn’t realize that they should be using USB 3.0 with a USB Coral Accelerator.


If I'm not mistaken, the default tflite model currently used by Frigate is Google's SSDLite MobileDet detector (you can find it here: Models - Object Detection | Coral).

I followed this tutorial to retrain that object detection model: Google Colaboratory
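Once you have a retrained .tflite compiled for the Edge TPU, you point Frigate at it in the config. A sketch (the paths, filenames, and 320x320 dimensions are placeholders; match them to your trained model, and note the exact key names may differ between Frigate versions):

```yaml
model:
  path: /config/custom_model_edgetpu.tflite
  labelmap_path: /config/custom_labelmap.txt
  width: 320
  height: 320
```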


When the "edge tpu not detected" message appears, I'd expect a hardware cause. The TPU needs a good power source, so make sure it's on USB 3.0 and that the PC has stable power as well.

I don't use the Blue, so I don't know the best troubleshooting for this device, but I would suspect power first, then verify the TPU appears to the OS as expected, e.g. as /dev/apex0.
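One quick way to check whether the OS sees a USB Coral at all is lsusb (the /dev/apex devices only appear for the PCIe/M.2 variants). The stick shows up under two different IDs depending on whether the Edge TPU runtime has run an inference yet. A sketch:

```shell
# Coral USB appears under one of two vendor:product IDs:
#   1a6e:089a - freshly plugged in (Global Unichip Corp.)
#   18d1:9302 - after the Edge TPU runtime has initialised it (Google Inc.)
lsusb | grep -Ei '1a6e:089a|18d1:9302' || echo "No Coral USB device visible"
```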


thanks for the advice

Also, if you don't have a dedicated USB 3.0 port per device, make sure you use an externally powered USB 3.0 hub. Without one, when the FPS ramped up, one of my Corals would often crash/disappear.

I'm having a hell of a time with Frigate at the moment. It's up and running perfectly well in Docker with a Coral USB. My only issue is that I have a camera at my front door where I park my car. Despite having an object mask in place, Frigate continually detects the car and creates an event for it, even if the car hasn't moved for days. It's not the end of the world, but it's incredibly irritating, particularly as my config file (below) appears to be correct. What am I missing?

FrontDoor:
    ffmpeg:
      inputs:
        - path: rtsp://xxx:[email protected]:554/11
          roles:
            - detect
            - rtmp
            - clips
            - record
    objects:
      track:
        - person
        - car
        - truck
        - bicycle
        - motorcycle
        - dog
        - cat
      filters:
        car:
          mask:
            - 988,793,1474,399,1409,271,891,101,519,184,26,555,109,933
            - 0,42,708,0,701,96,0,330
    width: 1920
    height: 1080
    fps: 3
    clips:
      enabled: True
    record:
      enabled: True
    rtmp:
      enabled: True

I think you need to add a motion mask to do what you are asking:

I've already tried that with no joy. And according to the docs, motion masks and object masks are two different things.

Mine seems to work and I have it in the config like this:

cameras:
  frontyard:
    ffmpeg:
      inputs:
        - path: >-
            rtsp://user:[email protected]:554/cam/realmonitor?channel=1&subtype=2
          roles:
            - detect
    height: 720
    width: 1280
    fps: 5
    motion:
      mask:
      - 231,268,652,210,944

Mask is truncated because it has waaaay more numbers in it, but maybe this formatting will help.
The docs have changed quite a lot as Frigate has grown, but this is what works for me.

Yep - you do need an object mask, but make sure that the mask covers the bottom-centre of the bounding box of the car detection area (this is the key).

In my garage, with the camera facing towards the door/street and the car parked nose towards the camera, I object-mask perhaps the bottom 10% of the entire image - this covers the nose of the car and thus the bottom-centre of any bounding box.

This way it still detects the car as it enters the garage, because the bottom-centre of its bounding box sits outside the mask on the way in, but it doesn't detect the car once it's parked in the garage.
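The behaviour described above can be sketched in a few lines: take the bottom-centre of the detection's bounding box and test whether it falls inside the mask polygon. This is my own ray-casting illustration of the idea, not Frigate's actual code, and the coordinates are made up for the example:

```python
def bottom_centre(box):
    """box = (x_min, y_min, x_max, y_max) in pixels, y grows downward."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2, y_max)

def point_in_polygon(point, polygon):
    """Standard ray-casting test; polygon is a list of (x, y) vertices."""
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Object mask covering a strip across the bottom of a 1280x720 frame
mask = [(0, 650), (1280, 650), (1280, 720), (0, 720)]

parked_car = (400, 500, 900, 700)   # bounding box reaching into the strip
distant_car = (400, 200, 600, 300)  # bounding box well above the strip

print(point_in_polygon(bottom_centre(parked_car), mask))   # True  -> filtered out
print(point_in_polygon(bottom_centre(distant_car), mask))  # False -> reported
```

This is why a tight mask traced around the car's outline fails: the bottom-centre point of the bounding box can land just outside it, so the detection is never filtered.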

That may have been the issue - ensuring the mask covers the bottom centre of the bounding box. I’d been following the car outline quite tightly and the mask was mainly within the bounding box. I’ve changed the mask and so far, so good… Thanks!


In my experience you can often get away with just strips for object masks, as long as the strip covers the bottom-centre of the bounding box.

For example, with green being the object mask:
