Local realtime person detection for RTSP cameras

clean the room? :rofl:


@blakeblackshear very nice work on the docs, thanks. Maybe add how to determine where the USB Coral is plugged in using the lsusb command.

Apart from this, my Frigate Docker container has been running stably for over a week. Very nice.
I am now moving on to optimizing things; maybe there is some stuff I have not tweaked or understood yet.

1/ I have set up zones, and I believe I am getting notifications even when there is movement outside the zones. How can I distinguish between something happening inside the zone and outside it? It would help me double check.
2/ Like other posts I have seen since last week, I would like to send the clip when there is a notification. How should I link the notification to the clip name?
3/ I am using the default 5000 for min_area. What are best practices for min & max area?
4/ When I get a notification, sometimes the object is quite far from the camera and moving towards it, so it is getting bigger. In that case I would like to trigger a second notification, to get a better picture. What should I change to achieve this?
5/ Once 4/ is fixed, is there a way to retain the image containing the biggest object?
6/ According to the documentation, best.jpg contains "The best snapshot for any object type". What makes it best? Score? It would be good to have an endpoint based on object size too.
EDIT 7/ pre_capture is great. I have not seen a post_capture option. Reason for asking: I recorded a clip where someone enters a zone, and I don't see him exiting the zone. It would be great to specify how long after the last detection the clip should keep recording.
thanks

You may be able to use the min_area filter if no person would actually be that small.
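For reference, a sketch of where that filter lives in the config — the section layout matches 0.7.x as I understand it, and the values are placeholders to tune for your own camera:

```yaml
objects:
  track:
    - person
  filters:
    person:
      min_area: 5000    # ignore detections smaller than this (in pixels)
      max_area: 100000  # and anything implausibly large
```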

  1. Zones don’t prevent objects from being detected outside of the zone. They just tell you when the objects enter the zone via the MQTT topic. The best.jpg images apply to the entire camera, so it will be updated regardless of the zone. There is already a feature request for a best.jpg endpoint that is specific to the zone.
  2. The id of the end event sent via MQTT will allow you to get the file name. Notifications like this are going to get much simpler once I finish the custom component. I can’t say for sure when I will finish, but I would expect anything you do now to be obsolete before the end of the year.
  3. It totally depends on your camera’s resolution and how far away objects generally are. The number listed next to the score in the mjpeg stream is the calculated area of the object. Try watching it as you walk around and see where you want to cut things off.
  4. Set a lower value for best_image_timeout. I am also working on better logic for determining what is in fact a “better” image. I will be looking at size increases and whether or not the object is partially outside of view.
  5. Use the camera.snapshot service in homeassistant. I will be adding a still image thumbnail of the “best” image for each event that will show up in the media browser in my custom component.
  6. See answer to 4
  7. I think there is already a feature request for post_capture
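On point 2, a minimal sketch of deriving the clip name from the end event. This assumes the 0.7.x event payload carries "camera" and "id" fields and that clips are written as "<camera>-<id>.mp4" — both worth verifying against your actual clips directory before relying on it:

```python
import json
from typing import Optional

def clip_filename(payload: str) -> Optional[str]:
    """Derive a clip file name from a frigate/events MQTT payload.

    Assumption: clips are saved as "<camera>-<id>.mp4", and only the
    "end" event means the clip has been fully written to disk.
    """
    event = json.loads(payload)
    if event.get("type") != "end":
        return None  # ignore start/update events
    return f"{event['camera']}-{event['id']}.mp4"

# Hypothetical payload shape, based on the answer above:
print(clip_filename('{"type": "end", "camera": "front", "id": "1606859573.847545"}'))
```

You would call this from whatever subscribes to the events topic (paho-mqtt, an openHAB rule, etc.) and attach the resulting file to your notification.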

Very good, will try all this, thanks. I am using openHAB, not HA (nobody’s perfect :slight_smile:)

Doing the same

I did, yes. 256MB.

Have you switched back and forth between 0.7.1 and 0.7.3 several times to confirm that it consistently fails with 0.7.3?

Yep, checked and triple checked.
0.7.1 has continued to be stable for the last few days. Using the same config file, I just change my docker-compose to use the newer image and it errors straight away.

I’m just in the process of installing Raspberry Pi OS 64-bit Lite and will try again.

I’ve still got the old SD card so I can go back and test if needed.

Hi Blake, some robbers cut the PoE cable to one of my tower-mounted cameras. The idea comes from here. How can we get an alert if the video feed or camera is disabled? Could you make a solution where, say, if there is no video signal for 30 seconds, we get an alert? Unfortunately, disabling security cameras is a pretty classic move for robbers. Thanks

@rpress

Did you get any further with your Dual TPU?

I am having trouble getting either the USB or PCIe version to work on my main system.
The last time I used Frigate was around 0.3 or 0.4, where it just found the USB Coral.

I know HASSIO can see both the USB and PCIe (well, it looks like a single TPU at least).

From the Frigate container I can see USB entries.

I am using:

detectors:
  coral_usb:
    type: edgetpu
    device: usb
  coral_pci:
    type: edgetpu
    device: pci

I have also tried
usb:2
usb:002
usb:007
usb:002:007

All the combos I have tried result in:

No EdgeTPU detected. Falling back to CPU.
No EdgeTPU detected. Falling back to CPU.


rancho, I use Home Assistant to ping my cameras’ fixed IP addresses, and if they go offline I get an alert. Much better way to do it.


Have you tried with just the USB coral? I would expect it to work with either usb or usb:0 as the device. The other things you tried are not correct. The suffix number has nothing to do with the output of lsusb or lspci: https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api
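In other words, the index suffix is just the enumeration order of Edge TPUs (0, 1, …), not the bus/device numbers lsusb prints. A sketch of the detectors section with explicit indexes — this is an untested assumption for a mixed USB + PCIe setup, adjust if your runtime enumerates differently:

```yaml
detectors:
  coral_usb:
    type: edgetpu
    device: usb:0   # first USB Edge TPU found
  coral_pci:
    type: edgetpu
    device: pci:0   # first PCIe Edge TPU found
```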

You could set an alert when camera_fps drops to 0 in homeassistant.
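For example, a Home Assistant automation along these lines — the entity and notifier names are hypothetical, so check what your Frigate MQTT sensors actually expose:

```yaml
automation:
  - alias: Camera feed lost
    trigger:
      - platform: numeric_state
        entity_id: sensor.front_camera_fps   # hypothetical sensor name
        below: 1
        for: "00:00:30"                      # no frames for 30 seconds
    action:
      - service: notify.mobile_app_phone     # hypothetical notifier
        data:
          message: "Front camera has stopped sending frames"
```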


I couldn’t get the USB one to work on its own before buying the Dual TPU one.

I know it does work, as it’s fine on a Pi 4.

I’ll have to try a different setup at some point.

@bk55 I did it the following (lazy) way:

  1. apt update
  2. apt install vim
  3. do the editing
    It can be done with a fancy one-liner as well, but I didn’t bother.

I have a Wyze Cam2 flashed with Dafang Hacks to provide RTSP (which plays fine in VLC). But the same URL causes OpenCV to fail. Can anyone suggest what is wrong? Here’s the log:

Fontconfig error: Cannot load default config file
On connect called
Starting detection process: 19
ffprobe -v panic -show_error -show_streams -of json "rtsp://192.168.0.13:8554/unicast"
Starting detection process: 20
{'streams': [{'index': 0, 'codec_type': 'video', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'width': 0, 'height': 0, 'has_b_frames': 0, 'level': -99, 'r_frame_rate': '90000/1', 'avg_frame_rate': '0/0', 'time_base': '1/90000', 'start_pts': 0, 'start_time': '0.000000', 'disposition': {'default': 0, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0}}]}
[ERROR:0] global /tmp/pip-req-build-a98tlsvg/opencv/modules/videoio/src/cap.cpp (140) open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.4.0) /tmp/pip-req-build-a98tlsvg/opencv/modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): rtsp://192.168.0.13:8554/unicast in function 'icvExtractPattern'

Traceback (most recent call last):
  File "detect_objects.py", line 441, in <module>
    main()
  File "detect_objects.py", line 235, in main
    frame_shape = get_frame_shape(ffmpeg_input)
  File "/opt/frigate/frigate/video.py", line 48, in get_frame_shape
    frame_shape = frame.shape
AttributeError: 'NoneType' object has no attribute 'shape'

Thanks for your help.

Try specifying width and height in your config.
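For example (camera name and resolution here are assumptions — use whatever resolution the Dafang firmware is actually streaming, since ffprobe reports width and height as 0 for this feed):

```yaml
cameras:
  wyze:
    ffmpeg:
      input: rtsp://192.168.0.13:8554/unicast
    width: 1920
    height: 1080
```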

Thank you, Blake. I put width and height in the camera config, but the error persists.

If you got the same error, you must have added those options in the wrong place. Can you post your config?