That must be a bug then. Can you open an issue on GitHub?
Just did. Thx for help.
Yep, HW accel is a must; everyone ends up here after a while. I also run stock Hikvision NVRs that do HW accel in real hardware, not in software like ffmpeg. However, there is one super annoying fact about these stock NVRs: they have loud cooling fans, so you can't really install one in a living room. One good approach is a custom-built PC with an i7 CPU and an AMD or NVIDIA GPU in a noise-cancelling case. That's what I have now too, but it needs 300-400 watts of electric power. No issue for me, as I live in the sunny south with solar power, but I must move towards low-powered systems like the Jetson Xavier NX sooner or later. I hope Frigate will move in this direction too.
I am currently running a Win10 VM with Blue Iris on it, with no hwaccel. I will try to enable it there first. That will reduce CPU usage, and then I can run Frigate with more CPU headroom.
But right now my average inference_speed over a day is 60ms, so I need to take a different route. I will try LXC after I finish with the Win10 config.
How do I set the clips volume in Docker? I went inside the container itself and I can see the clips in the folder, but I don't know how to access them from outside.
Anyone: I saw someone say they sent the clip to their Telegram account. How does one access the clips from Home Assistant? If it's not URL-based, did they set up a custom solution?
Combined with HA's new Media Browser, I can view recorded clips whenever I want to review some footage after getting a notification from Telegram.
I have 4 cameras saving clips to the clips folder, and as the number of clips grows it's getting harder to find a clip easily, since Media Browser currently only sorts the files in ascending alphabetical order.
I wrote a simple script to batch rename all clips to YYYYMMDDHHMMSS-cameraname-xxxxx.mp4 format. Now if I want to see the latest clips, I just fire up Media Browser and scroll right to the bottom to find my latest files.
Script:
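# Frigate clip names look like cameraname-<epoch>.<frac>-<id>.mp4; this
# one-liner reorders them to timestamp-first (strftime on $2 needs gawk, not mawk)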
find * -maxdepth 0 | awk -F- '/.+-[0-9]+\.[0-9]+-.+\..+/{print "mv " $0 " " strftime("%Y%m%d%H%M%S", $2)"-"$1"-"$3}' | bash
Next step is to write a script to move the clips into /YEAR/MONTH/DATE/CameraName folders as an archive.
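Something like this, probably (an untested sketch; it assumes the renamed YYYYMMDDHHMMSS-cameraname-xxxxx.mp4 layout above, and CLIPS_DIR and ARCHIVE_DIR are placeholders):

#!/bin/bash
# Move renamed clips into ARCHIVE_DIR/YEAR/MONTH/DATE/CameraName/.
CLIPS_DIR=/path/to/clips      # where the renamed clips live
ARCHIVE_DIR=/path/to/archive  # archive root

for f in "$CLIPS_DIR"/*.mp4; do
  [ -e "$f" ] || continue                   # skip if no matches
  name=$(basename "$f")                     # e.g. 20201001123456-frontdoor-abcde.mp4
  ts=${name%%-*}                            # 20201001123456
  rest=${name#*-}                           # frontdoor-abcde.mp4
  cam=${rest%%-*}                           # frontdoor
  dest="$ARCHIVE_DIR/${ts:0:4}/${ts:4:2}/${ts:6:2}/$cam"
  mkdir -p "$dest" && mv "$f" "$dest/"
done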
Make sure you have this argument in your docker run command:
-v /local_dir_on_the_docker_machine/clips:/clips
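For context, a minimal docker run could look something like this (the image tag and host paths are examples, adjust to your setup):

docker run -d --name frigate \
  -v /local_dir_on_the_docker_machine/clips:/clips \
  -v /path/to/config:/config \
  blakeblackshear/frigate:stable

Anything Frigate writes to /clips inside the container then shows up in /local_dir_on_the_docker_machine/clips on the host.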
I assume you need to set MONITORDIR at the top of the script. Also, do you run this with nohup, in a screen session, or from cron?
Great info. You are able to access the GPU inside Docker using QEMU & GVT-g?
I have enabled it on my Windows QEMU VM, but I'm not sure how to do the same for my Docker QEMU VM. Anything to configure in Docker?
Right now I am unable to create a second virtual GPU; I need to check the hardware BIOS.
I'm not using Docker with Frigate; I'm just running it in the VM. But the Docker config should be the same as for bare metal. I think you should add something like "--device /dev/dri/renderD128:/dev/dri/renderD128" to the Docker command line.
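Once the container is up you can sanity-check the mapping from the host (container name assumed to be frigate):

docker exec frigate ls -l /dev/dri
# renderD128 should be listed here if the --device mapping worked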
With QEMU I use QXL video (for console video). If you choose virtio it will create a new render device, so the GVT-g device no longer sits at the default /dev/dri/renderD128, and that is a problem. Also, if you use the Q35 chipset you will need to configure the PCIe addresses very specifically to put your GVT-g device at PCIe address 0000:00:02.0.
For the VM, I did not create a GVT-g virtual display in QEMU. I also added “i915.disable_display=1” to the guest VM kernel command line to get the virtual i915 to initialize properly.
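In case it helps, here is roughly how creating the vGPU and attaching it at 00:02.0 can look. This is only a sketch; the mdev type, PCI path, and the rest of the QEMU command line will differ per host:

# On the host: create a GVT-g mediated device under the iGPU
VGPU_UUID=$(uuidgen)
echo "$VGPU_UUID" > /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create

# In the QEMU command line (Q35): attach it at PCIe 00:02.0 with no virtual display
qemu-system-x86_64 -machine q35 ... \
  -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$VGPU_UUID,bus=pcie.0,addr=0x02,display=off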
Some UEFIs do not allow editing the aperture size, unfortunately. But there is a workaround, as shown here: https://github.com/intel/gvt-linux/issues/131
I am using q35 on my Windows VM. It has worked overnight so far, but it seems there are some hiccups until they move to a newer Linux kernel. I will let it run for some weeks and see, but the CPU load has been greatly reduced on my Blue Iris Windows machine (15% CPU, including Windows and 4 cameras at 1080p@15fps doing motion detection).
You say you run outside of Docker: are there any instructions on how to do this?
Sub-question (probably more for @blakeblackshear): could I run Frigate on my Windows machine (with or without Docker, preferably without)?
Docker is a very thin layer and introduces an insignificant amount of overhead. Its performance is nearly identical to bare metal. I am not aware of any reason to go through the hassle of trying to maintain it directly in a VM, but you should be able to follow the Dockerfile. You will have to rebuild your VM from scratch with each update, since many system packages change with each release.
I would be very surprised if it were possible to get it working on Windows, given how many dynamically linked Linux libraries are required by the underlying Python packages. I won't ever add or maintain Windows support myself.
Thanks, I will try to optimize the performance of my Docker QEMU VM and enable HW acceleration.
Is there any issue with running at an average 60ms inference speed?
I've limited it to a single 1080p@5fps camera stream until HW acceleration is enabled.
Avg CPU use for this config is 5-10%.
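For reference, the stats below come from Frigate's debug endpoint; on this version it should be something like:

curl http://localhost:5000/debug/stats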
{"coral":{"detection_start":0.0,"fps":5.7,"inference_speed":56.55,"pid":22},"plasma_store_rc":null,"z5":{"camera_fps":5.0,"detection_fps":5.7,"ffmpeg_pid":33,"frame_info":{"detect":1601468995.087345,"process":0.0,"read":1601468995.087345},"pid":35,"process_fps":5.1,"read_start":0.0,"skipped_fps":0.0}}
I'm not a fan of Docker because it has some weird issues. For example, it doesn't support cgroups v2, which means it doesn't work by default on Fedora 32. I'd rather not have another layer of abstraction, with volumes and images that are hard to access.
I don't know of any official instructions for running outside of Docker. I had a bit of trouble because my OS does not ship Python 3.7, which libedgetpu1 (and therefore Frigate) requires. But looking at GitHub I see Python 3.8 support, so maybe it will come to stable soon. https://github.com/google-coral/edgetpu
As mentioned I just used the Dockerfile to see the dependencies.
It is still faster than not using a Coral, but for comparison, both the Raspberry Pi 4 and the Atomic Pi get ~15ms, which is 4x faster. Your NUC should be ~10ms. There is a bottleneck somewhere in the layers of virtualization for your USB 3 passthrough.
The CPU load won't increase linearly with each camera you add, since some of the processing is shared across cameras. My ffmpeg processes are about 5% of CPU per 1080p 5fps stream with hwaccel using Quick Sync on my i3 NUC.
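If you want to verify hardware decode works on the host before wiring it into Frigate, a quick VAAPI smoke test could look like this (the device path and RTSP URL are placeholders):

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
  -i rtsp://camera-ip/stream -t 10 -f null -
# decodes ~10 seconds of the stream on the GPU and discards the output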
I also run 4 1080p 5fps cameras on the J4125 mini pc linked in the Readme. Doesn’t even break a sweat and I run several other services on it too.
The next version of Frigate will use Python 3.8.
Sorry for the stupid questions, but can you explain in a few words (or give a link) what inference speed is? As I understand it now, it somehow represents the speed of communication between the CPU and the Coral device? So enabling HW acceleration for H264 decoding would have no impact on it.
In that case, I will first look at solving this USB 3 forwarding issue.
Here is what I have so far:
[ 1.629471] usb usb3: We don't know the algorithms for LPM for this host, disabling LPM.
[ 1.630260] usb usb3: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.04
[ 1.630916] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 1.631602] usb usb3: Product: xHCI Host Controller
[ 1.632248] usb usb3: Manufacturer: Linux 5.4.0-48-generic xhci-hcd
[ 1.632942] usb usb3: SerialNumber: 0000:01:1b.0
...
[ 2.621515] usb 3-1: new SuperSpeed Gen 1 USB device number 2 using xhci_hcd
[ 2.646069] usb 3-1: LPM exit latency is zeroed, disabling LPM.
[ 2.646440] usb 3-1: New USB device found, idVendor=18d1, idProduct=9302, bcdDevice= 1.00
[ 2.646462] usb 3-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
...
[ 65.409349] usb 3-1: reset SuperSpeed Gen 1 USB device number 2 using xhci_hcd
[ 65.429977] usb 3-1: LPM exit latency is zeroed, disabling LPM.
and the USB devices are as follows:
sleepy@docker:~$ lsusb
Bus 003 Device 002: ID 18d1:9302 Google Inc.
Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Tablet
Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
It is the amount of time it takes to run object detection on a single 300x300 px image. Said another way, it is the time it takes to execute the TensorFlow AI model.
I see that Frigate uses the ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite model. I sometimes get false-positive results, and I've read in other threads some people saying that an Inception model could improve object detection.
So would I just have to volume mount the inception_v4_299_quant_edgetpu.tflite file and that's it? Or would that be too easy?
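Something like this, I guess (assuming the container reads its model from /edgetpu_model.tflite, which is the path I've seen mentioned; no idea if a classification model like Inception would even work with the detection pipeline):

docker run ... \
  -v /path/to/inception_v4_299_quant_edgetpu.tflite:/edgetpu_model.tflite \
  ...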
Thanks!