clean the room?
@blakeblackshear very nice work on the docs, thanks. Maybe add how to determine where the USB Coral is plugged in using the lsusb command.
Apart from this, my frigate docker has been running stable for over a week. Very nice.
I am now moving to optimize things, maybe some stuff I have not tweaked or understood yet.
1/ I have set up zones, but I believe I am getting notifications even when there is movement outside the zones. How can I distinguish between something happening inside the zone versus outside it? It would help me double-check.
2/ Like other posts I have seen since last week, I would like to send the clip along with the notification. How should I link the notification to the clip name?
3/ I am using the default 5000 for min_area. What are the best practices for min and max area?
4/ Sometimes when I get a notification, the object is quite far from the camera and moving towards it, so it is getting bigger. I would like to trigger a second notification in that case to get a better picture. What should I change to achieve this?
5/ Once 4/ is addressed, is there a way to retain the image containing the biggest object?
6/ According to the documentation, best.jpg contains "The best snapshot for any object type". What makes it best? The score? It would be good to have an endpoint based on object size too.
EDIT 7/ pre_capture is great, but I have not seen a post_capture option. Reason for asking: I recorded a clip where someone enters a zone, but I don't see him exiting the zone. It would be great to specify how long after the last detection the clip should keep recording.
thanks
You may be able to use the min_area filter if no person would actually be that small.
- Zones don't prevent objects from being detected outside of the zone. They just tell you when objects enter the zone via the MQTT topic. The best.jpg images apply to the entire camera, so they will be updated regardless of the zone. There is already a feature request for a best.jpg endpoint that is specific to the zone.
- The id of the end event sent via MQTT will allow you to get the file name. Notifications like this are going to get much simpler once I finish the custom component. I can't say for sure when I will finish, but I would expect anything you do now will be obsolete before the end of the year.
- It totally depends on your camera's resolution and how far away objects generally are. The number listed next to the score in the mjpeg stream is the calculated area of the object. Try watching it as you walk around and see where you want to cut things off.
- Set a lower value for best_image_timeout. I am also working on better logic for determining what is in fact a "better" image. I will be looking at size increases and whether or not the object is partially outside of view.
- Use the camera.snapshot service in Home Assistant. I will be adding a still image thumbnail of the "best" image for each event that will show up in the media browser in my custom component.
- See answer to 4.
- I think there is already a feature request for post_capture.
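To illustrate the id-to-clip mapping mentioned above, here is a minimal sketch. The payload shape and the `<camera>-<id>.mp4` naming scheme are assumptions for illustration; verify both against the documentation for your Frigate version.

```python
import json

def clip_filename(payload: str) -> str:
    """Derive the clip file name from a Frigate end-event MQTT payload.

    Assumes the payload carries top-level "camera" and "id" fields and
    that clips are written as <camera>-<id>.mp4 (both are assumptions
    to check against your version's docs).
    """
    event = json.loads(payload)
    return f"{event['camera']}-{event['id']}.mp4"

# Hypothetical end-event payload:
sample = '{"id": "1607123955.475377-abc123", "camera": "front_door", "label": "person"}'
print(clip_filename(sample))  # front_door-1607123955.475377-abc123.mp4
```

From there, a notification automation can attach or link the file by name.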
Very good, will try all this, thanks. I am using openHAB, not HA (nobody's perfect).
Doing the same
I did, yes. 256MB.
Have you switched back and forth between 0.7.1 and 0.7.3 several times to confirm that it consistently fails with 0.7.3?
Yep, checked and triple checked.
0.7.1 has continued to be stable for the last few days. Using the same config file, I just change my docker-compose to use the newer image and it errors straight away.
I'm just in the process of installing Raspberry Pi OS 64-bit Lite and will try again.
I've still got the old SD card, so I can go back and test if needed.
Hi Blake, some robbers cut the PoE cable to one of my tower-mounted cameras, which is where this idea comes from. How can we get an alert if the video feed or camera is disabled? Could there be a feature where, say, after 30 seconds with no video signal, we get an alert? Unfortunately, disabling security cameras is a pretty classic move for robbers. Thanks
Did you get any further with your Dual TPU?
I am having trouble getting either the USB or PCI-E to work on my Main system.
Last time I used Frigate was around 0.3 or 0.4, where it just found the USB Coral.
I know HASSIO can see both the USB and PCIe devices (well, it looks like a single TPU at least).
From the Frigate container I can see the USB entries.
I am using:
detectors:
  coral_usb:
    type: edgetpu
    device: usb
  coral_pci:
    type: edgetpu
    device: pci
I have also tried usb:2, usb:002, usb:007, and usb:002:007.
All the combos I have tried result in:
No EdgeTPU detected. Falling back to CPU.
No EdgeTPU detected. Falling back to CPU.
rancho, I use Home Assistant to ping my cameras' fixed IP addresses, and if they go offline I get an alert. Much better way to do it.
Have you tried with just the USB Coral? I would expect it to work with either usb or usb:0 as the device. The other things you tried are not correct. The suffix number has nothing to do with the output of lsusb or lspci: https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api
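For reference, a sketch of what the detectors section might look like under that advice (the detector names are arbitrary, and the exact device strings should be checked against your Frigate version's docs):

```yaml
detectors:
  coral_usb:
    type: edgetpu
    device: usb:0   # first USB Accelerator; plain "usb" also works for a single device
  coral_pci:
    type: edgetpu
    device: pci     # PCIe/M.2 Accelerator
```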
You could set an alert for when camera_fps drops to 0 in Home Assistant.
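A minimal sketch of such an automation, assuming a sensor.front_camera_fps entity exposed to Home Assistant (the entity and notifier names here are hypothetical; substitute your own):

```yaml
automation:
  - alias: "Alert when camera feed drops"
    trigger:
      - platform: numeric_state
        entity_id: sensor.front_camera_fps
        below: 1
        for: "00:00:30"   # no frames for 30 seconds
    action:
      - service: notify.mobile_app_phone
        data:
          message: "Front camera appears to be offline."
```

The `for:` duration matches the 30-second window asked about above.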
I couldn't get the USB one to work on its own before buying the Dual TPU one.
I know it works, as it's fine on a Pi 4.
I'll have to try a different setup at some point.
@bk55 I did it the following (lazy) way:
- apt update
- apt install vim
- do the editing
It can be done with a fancy one-liner as well, but I didn't bother.
I have a Wyze Cam v2 flashed with Dafang Hacks to provide RTSP (which plays fine in VLC). But the same URL causes OpenCV to fail. Can anyone suggest what is wrong? Here's the log:
Fontconfig error: Cannot load default config file
On connect called
Starting detection process: 19
ffprobe -v panic -show_error -show_streams -of json "rtsp://192.168.0.13:8554/unicast"
Starting detection process: 20
{"streams": [{"index": 0, "codec_type": "video", "codec_tag_string": "[0][0][0][0]", "codec_tag": "0x0000", "width": 0, "height": 0, "has_b_frames": 0, "level": -99, "r_frame_rate": "90000/1", "avg_frame_rate": "0/0", "time_base": "1/90000", "start_pts": 0, "start_time": "0.000000", "disposition": {"default": 0, "dub": 0, "original": 0, "comment": 0, "lyrics": 0, "karaoke": 0, "forced": 0, "hearing_impaired": 0, "visual_impaired": 0, "clean_effects": 0, "attached_pic": 0, "timed_thumbnails": 0}}]}
[ERROR:0] global /tmp/pip-req-build-a98tlsvg/opencv/modules/videoio/src/cap.cpp (140) open VIDEOIO(CV_IMAGES): raised OpenCV exception:
OpenCV(4.4.0) /tmp/pip-req-build-a98tlsvg/opencv/modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): rtsp://192.168.0.13:8554/unicast in function 'icvExtractPattern'
Traceback (most recent call last):
  File "detect_objects.py", line 441, in <module>
    main()
  File "detect_objects.py", line 235, in main
    frame_shape = get_frame_shape(ffmpeg_input)
  File "/opt/frigate/frigate/video.py", line 48, in get_frame_shape
    frame_shape = frame.shape
AttributeError: 'NoneType' object has no attribute 'shape'
Thanks for your help.
Try specifying width and height in your config.
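For reference, a sketch of where those options would go, assuming the per-camera config layout of this Frigate version (the camera name and the 1920x1080 resolution are illustrative; match your camera's actual stream):

```yaml
cameras:
  wyze:
    ffmpeg:
      input: rtsp://192.168.0.13:8554/unicast
    width: 1920
    height: 1080
```

With width and height set explicitly, Frigate can skip probing the stream shape that is failing here.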
Thank you, Blake. I put width and height in the camera config, but the error persists.
If you got the same error, you must have added those options in the wrong place. Can you post your config?