Local realtime person detection for RTSP cameras

@blakeblackshear I have the following in my config, but still no car or truck is detected.

objects:
  track:
    - person
    - car
    - truck
    - cat
    - dog

It's either something else in your config, or it doesn't look enough like a car/truck to be detected by the model. I am definitely detecting cars with the same container version. You are welcome to send me a video clip and I can check why Frigate doesn't see it.

Hi, thanks for this.
I am still facing some issues while trying to start the Docker image. Some of them seem to come from the config file…
SOLVED: created a new camera user without '$' in the password, and it works fine now.
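
For anyone else hitting this before recreating the camera user: I believe the special character could also be percent-encoded in the RTSP URL (plain URL encoding, so '$' becomes '%24'), though I have not tested whether that avoids the issue. The user, IP, and stream path below are placeholders:

rtsp://user:pa%24sword@192.168.1.10:554/stream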

Now facing this issue:
Is there a way to get more details in the logs?

2020-09-25T18:49:00.483479767Z TPU found
2020-09-25T18:49:47.717636538Z Detection appears to be stuck. Restarting detection process
2020-09-25T18:49:47.717676707Z Waiting for detection process to exit gracefully...
2020-09-25T18:50:17.748128398Z Detection process didnt exit. Force killing...
2020-09-25T18:50:17.761547490Z Starting detection process: 295
2020-09-25T18:50:17.762012168Z Attempting to load TPU as usb
2020-09-25T18:50:20.579762657Z TPU found
2020-09-25T18:52:57.901249695Z Detection appears to be stuck. Restarting detection process
2020-09-25T18:52:57.901286145Z Waiting for detection process to exit gracefully...
2020-09-25T18:53:27.919253365Z Detection process didnt exit. Force killing...
2020-09-25T18:53:27.928255662Z Starting detection process: 342
2020-09-25T18:53:27.928722227Z Attempting to load TPU as usb
2020-09-25T18:53:30.761989980Z TPU found
2020-09-25T18:54:48.001106861Z Detection appears to be stuck. Restarting detection process
2020-09-25T18:54:48.001284937Z Waiting for detection process to exit gracefully...
2020-09-25T18:55:18.023031990Z Detection process didnt exit. Force killing...
2020-09-25T18:55:18.033122729Z Starting detection process: 374
2020-09-25T18:55:18.033427675Z Attempting to load TPU as usb
2020-09-25T18:55:20.844395666Z TPU found
2020-09-25T18:55:58.066912653Z Detection appears to be stuck. Restarting detection process
2020-09-25T18:55:58.067045112Z Waiting for detection process to exit gracefully...

/debug/stats on the webserver shows this:

{"back":{"camera_fps":15.1,"detection_fps":0.0,"ffmpeg_pid":30,"frame_info":{"detect":1601061887.556944,"process":0.0,"read":1601061927.216278},"pid":32,"process_fps":0.1,"read_start":0.0,"skipped_fps":15.1},"coral":{"detection_start":0.0,"fps":0.0,"inference_speed":22.42,"pid":875},"plasma_store_rc":null}

@blakeblackshear I'll try to send you a video clip.
Would using a 360° feed impact the detection?

There isn't any additional logging beyond that. The log indicates the container is seeing your Coral device, but it isn't able to communicate with it. What is your host machine setup?

As long as the image isn't heavily distorted, it should be fine.

Hi, it's a NUC running Proxmox, with the Coral TPU connected to the VM. Here is what the Docker host is seeing:

sleepy@docker:/opt/frigate/config$ lsusb
Bus 003 Device 002: ID 18d1:9302 Google Inc. 
Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Tablet
Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

Is there a way to tell Frigate which USB device to use? Or is it maybe a problem with my Proxmox config… I have added the /debug/stats output to my previous post.

From the readme

Users have reported varying success in getting frigate to run in a VM. In some cases, the virtualization layer introduces a significant delay in communication with the Coral. If running virtualized in Proxmox, pass the USB card/interface to the virtual machine not the USB ID for faster inference speed.

Yes, right, I have found similar information here (although I am using a VM and not LXC): Local realtime person detection for RTSP cameras

Thanks a lot for the hint.

The tricky part is that it is not easily doable in the Proxmox GUI; here is what I followed:
https://pve.proxmox.com/wiki/USB_Devices_in_Virtual_Machines
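
In short, the port-based passthrough from that page boils down to finding which bus/port the Coral sits on and assigning that physical port (rather than the vendor:device ID) to the VM. The VM ID and port number below are placeholders from my setup:

# find the bus/port the Coral is plugged into
lsusb -t
# assign that physical port to the VM, with USB3 enabled
qm set 100 -usb0 host=3-1,usb3=1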

Here is what I now have in the debug stats; it seems better:

{"back":{"camera_fps":15.0,"detection_fps":0.0,"ffmpeg_pid":30,"frame_info":{"detect":1601066146.079502,"process":0.0,"read":1601066146.079502},"pid":32,"process_fps":15.1,"read_start":0.0,"skipped_fps":0.0},"coral":{"detection_start":0.0,"fps":0.0,"inference_speed":59.53,"pid":22},"plasma_store_rc":null}

The inference speed still seems slow. My NUC7 i3 has inference times under 10 ms. That said, 59 ms is still fast relative to CPU detection.

I'd recommend you get a PCIe Coral. I didn't have any luck passing through the USB device, and I could not pass through the USB controller. I now have the A+E Key Coral and I see inference times around 7 ms.

I have somewhat successfully passed the USB Coral through to a Proxmox VM, but I went the LXC route to get hardware acceleration.

My findings while using a Proxmox VM + Coral USB:

Setup: 4th-gen mobile i5 CPU ultrabook laptop with two USB 3.0 ports running on the same root hub. I plugged in the Coral (USB 3.0) and a USB gigabit network adapter.
Observation: Frigate was able to start with 20-30 ms inference speed, but it would crash after running for a few hours because the Coral USB disappeared from the machine entirely; I had to unplug and replug it to get it back. This is likely due to insufficient power supplied to the Coral, either from the power limitations of this ultrabook or from too much power draw by both the USB NIC and the Coral.

Setup: i5-2410M laptop with a USB 2.0 port + Coral USB in a Proxmox VM.
Observation: I am able to run this setup with an inference speed of approx 120 ms. I ran it for a few days without issues, but 120 ms is not using the Coral to its potential. Conclusion for this setup: you must use USB 3.0 for the Coral.

In short, take the LXC route and run Frigate inside the container. You can enable hardware acceleration with the LXC setup, which saves a lot of CPU cycles. I did it with another i5-6500 CPU setup and I got 8 ms inference speed.
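
In case it helps, the passthrough part of a Proxmox LXC config for this looks roughly like the lines below (sketched from memory: 226 is the DRM device major for /dev/dri, 189 is the USB major; on newer Proxmox with cgroup v2 the key is lxc.cgroup2.devices.allow):

lxc.cgroup.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.cgroup.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir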

Side note:
With the above setup, I enabled the Hass.io VM Samba add-on, then in my LXC container I used fstab to mount /opt/frigate/clips onto the Hass.io VM's /media/clips, to take advantage of HA's new media browser for viewing the recordings.
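
The fstab entry is a plain CIFS mount, roughly like this (the IP, credentials file, and uid/gid are placeholders from my setup):

# mount the Hass.io media share so Frigate's clips land where the HA media browser can see them
//192.168.1.50/media/clips  /opt/frigate/clips  cifs  credentials=/root/.smbcredentials,uid=1000,gid=1000,vers=3.0  0  0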

Doing deep learning (e.g. object detection) and even video processing is a bad idea on laptops in general. I have Dell and HP laptops with one-year-old i7 CPUs, but I gave up on them: these are mobile CPUs with no real GPU, weak power on the USB ports, and lots of fan noise and high temperatures under load. I use a few-years-old ATX case, a normal desktop PC with an i7 CPU, 32 GB RAM, standard USB ports, and even a big GPU in it, running Ubuntu 18.04 and Docker. It's a good rig for Frigate and even for soft NVR duty. But even though I recommend this route, I am thinking about the next stage. I would love to reduce the overall system size as well as the power consumption, and to use hardware acceleration for both object detection and the NVR preview. So I have been digging around the net, and so far all routes lead me towards the Jetson Xavier NX. I hope Blake will consider it, as it would be a great rig for this. One more thing: on the Jetson Xavier NX we could probably use better models too, with fewer failed or missed detections.

That did the trick. I only just now realized that the number in addition to the % is the "size"; this was helpful for setting a new value.
Thanks for the amazing piece of software!

Thanks for the comments. If I understand correctly, the inference speed shows that communication with the TPU is somewhat slow? Maybe my Frigate config is not good; I have not tweaked anything yet. I am using a 1080p stream at 15 fps.

Is there any benefit to setting the HW acceleration for FFmpeg correctly in the config when using the TPU? If yes, how can I work out the right settings for my machine?
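
For context, I think the readme has an Intel VAAPI example roughly along these lines; I'm not sure whether it applies inside a VM, and the render device path below is just the usual default:

ffmpeg:
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
    - -hwaccel_output_format
    - yuv420p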

I have a NUC7i5BNH (i5-7260U CPU @ 2.20GHz) as the host, running Proxmox with 32 GB of RAM.
Frigate runs in Docker, installed on a Debian VM. It has not crashed since yesterday, though I have not tried detection much (it's raining :smile:)

EDIT: rebuilt the Docker container to add the /clips volume.
Now getting this performance, which seems better? (single cam configured for now)

{"back":{"camera_fps":14.9,"detection_fps":0.0,"ffmpeg_pid":30,"frame_info":{"detect":1601153532.036601,"process":0.0,"read":1601153532.036601},"pid":32,"process_fps":15.0,"read_start":0.0,"skipped_fps":0.0},"coral":{"detection_start":0.0,"fps":0.0,"inference_speed":10.0,"pid":22},"plasma_store_rc":null}

Everyone,
Thanks for all the help so far! I have 6 cameras running and sending detected people to my Telegram account.
First question:
How do I view the clips? I have them enabled in the config file, but when I look at the Frigate logs I see:
"moov atom not found"
Invalid data found when processing input
bad file porch-2349878123.mp4

Second question:
Where do I put the mask files? Do I place them in the same location as the Frigate config?

Thanks for all the help so far, y'all have been great.

Your rig should be fine, but even a fully loaded NUC7i5BNH with a Coral is a dead end if you run 7+ cams, unless you go down to 2 fps per cam. The price of a fully loaded NUC with a Coral is about the same as a Jetson Xavier NX, and you are comparing a mouse with an elephant: with the Jetson it's all about the HW acceleration and the detection models. Manufacturers in Asia have already started building NVRs with it.

This is true, but I did not buy the NUC for this purpose; I am running many other things on it that a Jetson could not handle.

I also have the same questions as @NotSoAlien, and I am facing a similar issue to his first question.
EDIT: my clips work fine now. @NotSoAlien, do you have the clips volume set in Docker?
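
For what it's worth, mine is just an extra volume mapping on the container, something like the run command below. The host paths and image tag are from my setup, and /clips as the container path matches the version I'm running, so double-check against the readme for yours:

docker run -d --name frigate \
  --privileged \
  -v /dev/bus/usb:/dev/bus/usb \
  -v /opt/frigate/config:/config:ro \
  -v /opt/frigate/clips:/clips \
  -p 5000:5000 \
  blakeblackshear/frigate:latest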

@blakeblackshear it would be good if we could access the clips directly from the webserver (for example at http://server:5000/clips/).

The error you are seeing in your first question is expected in some situations. The ffmpeg process that saves the cache will leave an incomplete clip in the cache on exit. Frigate will determine that the file is corrupted and clean it up. If you are getting clips in the /clips folder, you can disregard it.

The mask files go in the same directory as the config.
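
For example, a camera-level mask is referenced by filename in the config, something along these lines (camera name and mask filename are placeholders):

cameras:
  back:
    mask: back-mask.bmp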

Frigate can handle more than 7 cameras just fine. In NVIDIA's own benchmarks, the Coral is faster than the GPU on a Jetson for every TensorFlow model where they are compared. Also, Intel QuickSync hwaccel is very efficient for decoding video. I am running seven 1080p cameras at 5 fps on a NUC7i3BNH, and Frigate typically consumes about 80-90% of a single core for everything. Each ffmpeg process takes ~6-7% of a CPU core to decode its stream, so all seven cameras use well under half a core just for decoding. I should be able to handle quite a few more cameras on my NUC given that I have 4 cores available.