Local realtime person detection for RTSP cameras

The Nano is worse than the Coral, 100% agree, but the Xavier NX is far better than any of these: https://developer.nvidia.com/blog/jetson-xavier-nx-the-worlds-smallest-ai-supercomputer/ You still won't have enough resources for an NVR function. That's impossible to do on CPU. So if you want an NVR function recording 7+ HD streams, you still need to buy an NVR or use another PC. That's why I am saying the Xavier NX is the only thing that could do it all.

Can you add the typical Coral inference_speed to the github page https://github.com/blakeblackshear/frigate ?
I am getting between 10ms and 80ms…

Also, do I need to add the Intel HW parameters if I use the Coral? I assume not.

Inference times are already listed in the hardware section. The Coral is hardware acceleration for running the AI models. It has nothing to do with decoding the video stream. For that, you want the hwaccel params for ffmpeg.

When can we use ffmpeg 4.3 with frigate? FFmpeg 4.3 Released With AMD AMF Encoding, Vulkan Support, AV1 Encode - Phoronix

I tested it on Ubuntu 18.04 from a PPA and it works, but the Hikvision web plugin stopped working in the Chromium browser. That was the only, but decisive, reason I went back to stock ffmpeg.

Ok, sorry, it seems I just don't see it (I only see the "Recommended hardware" section, and then in the "Debug info" some mention of 10ms).

For the ffmpeg options, it seems ffmpeg just doesn't like whatever I add in there.
I tried adding this:

ffmpeg:
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
    - -hwaccel_output_format
    - yuv420p

but then I am facing broken ffmpeg frames every time. Probably a VM issue. I'm looking at it.
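
One way to isolate the problem is to run ffmpeg by hand with the same args, outside of frigate (the stream URL below is a placeholder for one of your cameras):

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
  -hwaccel_output_format yuv420p \
  -i rtsp://user:pass@camera-ip/stream -f null -

If that also produces broken frames, the issue is with VAAPI in the VM rather than with frigate's config.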

Are you sure your CPU supports it? https://streambuilder.pro/how-to-check-intel-quick-sync-video-support
Mine is an i7 but it looks like it doesn't support Intel Quick Sync. So I am waiting for ffmpeg 4.3, which should enable my GPU via Vulkan. Otherwise, running on the CPU is also not too bad, Blake did a great job, but I would rather use the CPU for other stuff.
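
For anyone else checking: on Linux you can verify that the iGPU exposes a VAAPI device before touching the frigate config (vainfo comes from the vainfo package on Debian/Ubuntu):

# Is there a render node for the GPU?
ls -l /dev/dri

# What can the VAAPI driver actually decode/encode?
vainfo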

Yes, mine is an i5-7260U and it does support Quick Sync => https://ark.intel.com/content/www/us/en/ark/products/97539/intel-core-i5-7260u-processor-4m-cache-up-to-3-40-ghz.html

However, I need to allow the VM to access the GPU, and this seems rather complicated, at least on an existing VM. I need to enable a UEFI BIOS, etc. (as described here: https://lunar.computer/posts/gpu-passthrough-proxmox-60/). So far I have been unlucky getting the same VM to boot after switching the BIOS from SeaBIOS.
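
For reference, the rough shape of the passthrough prep from that guide, as I understand it (a sketch only; the VM ID 100 and PCI address 00:02.0 are placeholders for your own):

# 1. Enable the IOMMU: add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT
#    in /etc/default/grub, then run update-grub and reboot

# 2. Load the VFIO modules on boot
echo -e "vfio\nvfio_iommu_type1\nvfio_pci" >> /etc/modules

# 3. Switch the VM to UEFI (OVMF) and the q35 machine type, then
#    hand it the iGPU
qm set 100 --bios ovmf --machine q35
qm set 100 --hostpci0 00:02.0

The OVMF switch is exactly the step that tends to break booting an existing SeaBIOS VM, since the disk was not installed for UEFI boot.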

Running more than a single camera without HW accel seems to kill things.
@NotSoAlien: you get 8ms with LXC? And is this reliable? How many cameras?
I may try this route; enabling GPU passthrough seems easier with LXC.
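
If you go the LXC route, the usual trick is to bind-mount /dev/dri into the container; something like this in the container's config (the container ID in the path is yours, and these lines assume a cgroup v1 Proxmox host):

# /etc/pve/lxc/<id>.conf
lxc.cgroup.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

226 is the major device number for the DRI devices, so this gives the container read/write access to the host's render nodes.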

I'm using GVT-g with QEMU. It's not quite as fast as bare metal (or GVT-d / VFIO passthrough) but it does work pretty well.
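
In case it helps anyone trying GVT-g: the setup is roughly to boot with i915.enable_gvt=1 on the kernel command line, load the kvmgt module, and then create a vGPU instance by writing a UUID into sysfs (the i915-GVTg_V5_4 type name below varies by iGPU generation, so check mdev_supported_types on your own host):

UUID=$(uuidgen)
echo $UUID > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create

The resulting mdev device then gets attached to the QEMU VM as a virtual GPU.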

Hi @blakeblackshear, big fan of frigate, been using it for a long time.

I am using 0.6.0-rc3 and I just want to ask if the zones can overlap?

My issue is with a camera with two overlapping zones where I have set up different filters (threshold, min_area, max_area) for the "person" object in each zone, but whenever the "person" object is in the overlapping part of the image, I am getting MQTT "ON" for both zones.

I am just not sure if this is by design or if I am doing something wrong?

I am currently using two separate streams from the same camera, each with a different mask, to achieve what I want, but I would like to avoid streaming the same stream twice 🙂

My config example (for my use case, the "min_area" is critical so I don't detect "small" persons in one zone):

objects:
  track:
    - person
  filters:
    person:
      min_area: 5000
      max_area: 1000000
      min_score: 0.5
      threshold: 0.5

cameras:
  ipcam13:
    ffmpeg:
      input: rtsp://blabla
    #mask: ipcam13_stairs_mask.bmp
    zones:
      outside_yard:
        coordinates:
          - 0,0
          - 0,1078
          - 907,1078
          - 909,622
          - 1621,621
          - 1621,565
          - 1786,565
          - 1784,490
          - 1918,493
          - 1918,3
        filters:
          person:
            min_area: 5000
            max_area: 700000
            threshold: 0.75
      outside_stairs:
        coordinates:
          - 938,1078
          - 1918,1078
          - 1918,3
          - 1108,0
          - 1106,110
          - 940,112
        filters:
          person:
            min_area: 180000
            max_area: 700000
            threshold: 0.75
    fps: 4
    snapshots:
      show_timestamp: True
      draw_zones: True
    objects:
      track:
        - person
      filters:
        person:
          min_area: 5000
          max_area: 700000
          min_score: 0.5
          threshold: 0.75

Thx

Sorry for the late reply; been busy this weekend. I have a proxmox server, but I do not run docker and frigate on it for the reasons you listed above. It's way too hard to run a VM on a VM. Plus, my hardware doesn't support it. I am running AMD on my proxmox, so it's not even possible per the official documentation on VM passthrough.

The route I went was to use my proxmox for pfsense, pihole, and some other small things. I use a dedicated laptop for frigate, running ubuntu desktop with docker installed. I also installed home assistant on it because the raspi felt too slow for my liking. I currently have 6 cameras running, all at 10fps, and I'm about to add the 7th one to the config file. I haven't noticed any issues with anything. The detection is amazing. I haven't missed a single car or person yet. I have had two false alerts, which is normal, because even when I was running the "best" SSD (single shot detector) model out there, it did the same thing with those stupid bugs that fly into the IR lights.

I get 10ms inference time with the Coral plugged directly into a USB 3.0 port on the laptop. I wish you well on the proxmox adventure, but I gave up on mine and went to dedicated hardware.

So the outside_stairs zone is showing ON when a person is in the overlapping area even though they are smaller than the min_area filter?

Yes, exactly.

Also, if I am watching the stream, I can see both zones' outlines increase in thickness when the person is in the mentioned overlapping area.

That must be a bug then. Can you open an issue on github?

Just did. Thx for help.

Yep, HW accel is a must; everyone ends up here after a while. I also run stock Hikvision NVRs that do HW accel in real hardware, not in software like ffmpeg. However, there is one super annoying fact with these stock NVRs: they have loud cooling fans, so you can't really install one in a living room. One good approach is a custom-built PC with an i7 CPU and an AMD or NVIDIA GPU in a noise-cancelling case. That's what I have now too. But it needs 300-400 watts of electric power. No issue for me as I live in the sunny south with solar power, but I must move towards low-powered systems like the Jetson Xavier NX sooner or later. Hope Frigate will move in this direction too.

I am currently running a Win10 VM with blueiris on it, with no hwaccel. I will try to enable it there first. That will reduce CPU usage, and then I can run frigate with more CPU 🙂
But right now, my avg inference_speed over a day is 60ms, so I need to take a different route. I will try LXC after I finish with the win10 config.

How do I set the clips volume in docker? I went inside the container itself and I can see the clips in the folder inside the container, but I don't know how to access them from outside.

Anyone:
I saw someone say they sent the clip to their telegram account; how does one access the clips from home assistant? If it's not URL based, did they set up a custom solution?

Combined with HA's new Media Browser, I can view recorded clips whenever I want to review some footage after getting notifications from Telegram.

I've got 4 cameras saving clips to the clips folder, and as the number of clips grows, it's getting harder to find the clips easily, as Media Browser currently only sorts the files in ascending alphabetical order.

I wrote a simple script to batch rename all clips to the YYYYMMDDHHMMSS-cameraname-xxxxx.mp4 format. Now if I want to see the latest clips, I just fire up Media Browser and scroll right to the bottom to find my latest files.

Script:

find * -maxdepth 0 | awk -F- '/.+-[0-9]+\.[0-9]+-.+\..+/{print "mv " $0 " " strftime("%Y%m%d%H%M%S", $2)"-"$1"-"$3}' | bash

Next step is to write a script to move the clips into /YEAR/MONTH/DATE/CameraName folders as an archive.
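
In case it saves someone the work, here is a sketch of that archive step, assuming the renamed YYYYMMDDHHMMSS-cameraname-xxxxx.mp4 layout from the script above (run it from inside the clips folder):

for f in *.mp4; do
  ts=${f%%-*}                 # leading timestamp, e.g. 20200815123456
  rest=${f#*-}
  cam=${rest%%-*}             # camera name
  mkdir -p "${ts:0:4}/${ts:4:2}/${ts:6:2}/$cam"
  mv "$f" "${ts:0:4}/${ts:4:2}/${ts:6:2}/$cam/"
done

Bash only (the ${ts:0:4} substrings are bashisms), and it assumes no stray .mp4 files that don't match the renamed pattern.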

This might be helpful: https://github.com/blakeblackshear/nvr-manager

Make sure you have this argument in your docker run command:

-v /local_dir_on_the_docker_machine/clips:/clips
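
For context, a minimal docker run sketch with that volume mapped (host paths, the port, and the image tag here are placeholders; check the frigate README for the exact mounts your version expects):

docker run -d \
  --name frigate \
  --device /dev/bus/usb:/dev/bus/usb \
  -v /home/user/frigate/config:/config \
  -v /home/user/frigate/clips:/clips \
  -p 5000:5000 \
  blakeblackshear/frigate:stable

Anything frigate writes under /clips inside the container then shows up in the host directory on the left side of the -v mapping.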