Thanks. Bit of a pain, but I guess I’ll end up doing this… What’s your setup? Install Debian onto your device and then install HA and Docker, or something like Proxmox with HA and Docker as (separate?) VMs?
Anyone doing detection on stationary objects? I’m trying to count the number of eggs in my chicken coop! I’ve had it working (tracking them with the sports ball label seems to work well!!), but it’s unreliable - I THINK this is because, even with stationary objects enabled in Frigate, the object has to move at least once in order to be detected? Is this the case? Any way around it if so? Is it possible to manually “trigger” Frigate to scan an image? Or am I going to have to use something else for this - DOODS?
Pretty sure the specific object doesn’t need to move, but motion must first be detected before Frigate looks for objects. If no changes ever occur, Frigate will never look for the object.
Yeah, you might be right. Someone suggested using the time overlay in the top-left to trigger motion detection somehow? Might look into that!
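For anyone fiddling with this, here is a minimal sketch of the stationary-object settings Frigate exposes in recent versions (the values below are only examples, not recommendations). Note these tune how an already-tracked object is re-confirmed; they don’t remove the need for initial motion to start a detection:

detect:
  stationary:
    interval: 10     # re-run detection on stationary objects every 10 frames; 0 means they aren't re-confirmed until motion occurs
    threshold: 50    # frames without a position change before an object counts as stationary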
If you own Wyze V3 cameras, you can now send two streams (low and high resolution) to Frigate, and LAN-only streaming is also supported.
Garage:
  ffmpeg:
    input_args: -avoid_negative_ts make_zero
    inputs:
      - path: rtsp://admin:[email protected]:8555/unicast  # low-res stream
        roles:
          - detect
          - rtmp
      - path: rtsp://admin:[email protected]:8554/unicast  # high-res stream
        roles:
          - record
  detect:
    width: 640
    height: 360
    fps: 5
  record:
    enabled: True
    retain:
      days: 0
      mode: active_objects
Anyone here using a Pi CM4 with a PCIe Coral Edge TPU?
Is it confirmed to work now with the latest Home Assistant OS v8?
Use a microwave radar sensor for presence detection along with Frigate event detection.
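A rough sketch of how that combination could look in Home Assistant (the entity ID and notify service are placeholders, not from the original post): only act on a Frigate person event when the radar sensor also reports presence.

automation:
  - alias: "Person confirmed by radar"
    trigger:
      - platform: mqtt
        topic: frigate/events
    condition:
      - condition: template
        value_template: "{{ trigger.payload_json['after']['label'] == 'person' }}"
      - condition: state
        entity_id: binary_sensor.hallway_radar_presence   # placeholder radar presence sensor
        state: "on"
    action:
      - service: notify.mobile_app_phone                  # placeholder notify service
        data:
          message: "Person detected by Frigate and confirmed by the radar sensor."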
I have 7 1080p cameras running with my Coral. I’m running Frigate on an older Xeon server (E5-2667 v2) and the VM is using most of the 6 cores assigned to it. These CPUs don’t have any hardware acceleration available, so decoding is pretty inefficient. I’m looking to add more cameras (maybe 8 more, some of them 4K) in the future and was wondering if anyone had a suggestion on how to upgrade my Frigate instance.
I’m considering two options.
Option 1: get an Nvidia graphics card (like a Quadro) and use it for decoding to offload some of the CPU load. If I do this, what happens when I run out of VRAM?
Option 2: get a dedicated small form factor computer to run Frigate, like an older NUC or the GK41. If I do this, how many streams can I likely support?
I’m open to other options, but those are the only ones I can think of besides throwing more cores at the VM.
A Coral (option 3) would be best, but I guess a GPU since Corals are low on stock. That said, a Coral is much cheaper than a GPU.
Noob here, but is the Coral any good at video decoding at all? I always thought those were for AI processing: object recognition, pattern recognition, person detection, etc.
I’m asking because video decoding for multiple cameras was what Seechay’s options were about.
The Coral is an Edge TPU designed for offloading AI/ML tasks (object detection). I think tmjpugh misread my post and thought I wasn’t using a Coral and that’s what was causing my CPU usage to be high. I am in fact using a Coral for detection, but my CPUs don’t have hardware acceleration available, so it takes extra resources to decode the streams for processing.
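For context on option 2: boxes with a supported Intel iGPU (a NUC, the GK41, etc.) can offload decoding with ffmpeg hwaccel args. As a sketch only (the exact args depend on your platform and Frigate version, so check the hardware acceleration docs), a VAAPI setup looks roughly like:

ffmpeg:
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
    - -hwaccel_output_format
    - yuv420p

With decoding on the iGPU, the per-stream CPU cost drops a lot, so the number of streams is usually limited by the iGPU’s decode capacity rather than by core count.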
Where are we at with Frigate+? I’ve made an account, and it looks like you can upload images and label them with a very limited selection of tags. Is that it for the moment? Does it actually do anything yet, or is it just demonstrating how it will work in the future? Is there any way to create new labels and therefore detect new object types? (I couldn’t see any way!)
Thanks! Just a waiting game then.
As I mentioned in another thread, I talked to a friend at Google who told me the Coral products have been killed off by Google - not enough profit. And the lead engineer on the chip left to work for another company.
Is there a plan to use a Jetson or another co-processor if the Coral is dead?
We’ll see if that actually happens and whether components get sold at retail instead of only to OEMs. I hope it works out.
For those using Reolink 8MP cameras, there’s finally a fix that lets you use the high-resolution stream in Frigate.
I came across a post on the Reolink forum.
Basically, if you set the high-res stream to 2560×1440, it changes to H.264 and Frigate works.
Yay!
I’ve just tested it and it really works! I uploaded the latest firmware v3.1.0.956_22041503 to my RLC-810A, updated the configuration (which did reset to factory defaults), set the main stream to 2560×1440, and it changed the codec from H.265 to H.264.
The field of view is now much broader, and I’m also able to set Fixed Frame Rate to “On, fluency first” and Interframe Space to “1x”, which should be better for motion detection and object detection. Also, according to Rob from The Hookup, it should fix some night-time IR ghosting (source: Reolink and Blue Iris Updates: Fixed RTSP, ONVIF, FPS, and iFrame! - YouTube).
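If anyone wants to wire this into Frigate, here’s a sketch of a camera config using the now-H.264 main stream for recording and the sub stream for detection (the user, password, IP, and stream paths are placeholders; Reolink cameras typically expose h264Preview_01_main / h264Preview_01_sub, but check yours):

cameras:
  reolink_driveway:
    ffmpeg:
      inputs:
        - path: rtsp://user:password@camera-ip:554/h264Preview_01_sub    # low-res sub stream
          roles:
            - detect
        - path: rtsp://user:password@camera-ip:554/h264Preview_01_main   # 2560x1440 main stream (H.264)
          roles:
            - record
    detect:
      width: 640    # match your sub-stream resolution
      height: 360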
Thanks so much for sharing this amazing info, Juan!
I’ve set up an automation that triggers on Frigate events, like the example shown in the docs. Too often there is no image attached to the email it generates. I suspected it was trying to read the image too quickly, so I added a 30 second delay between the trigger and sending the email, but that didn’t solve it. If I look at the trace afterwards, everything looks right: the event ID is there, it builds the file paths and filenames correctly, and the files exist on disk, can be accessed, and look good.
Also, I’m wondering, is there a way to view the MQTT event messages afterwards? I’d like to tweak some settings. If they aren’t saved by Frigate, what would be a smart way to log them? I could save the data every time the notification automation sends the email and possibly learn something from it. I might also find a way to weed out the false positives.
PS. I’d like to see the event ID listed in the event details in the Frigate UI; it would make it easier when you need to go looking for the actual files. Also, it’d be nice to have direct access in the UI to the full image in case you have snapshot cropping enabled like I do. The file is already there on the disk.
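On logging the MQTT event messages: one option (a sketch only, with the notify name and file path as placeholders) is Home Assistant’s file notify platform plus a small automation that appends every frigate/events payload to a log file:

notify:
  - platform: file
    name: frigate_event_log
    filename: /config/frigate_events.log
    timestamp: true

automation:
  - alias: "Log Frigate MQTT events"
    trigger:
      - platform: mqtt
        topic: frigate/events
    action:
      - service: notify.frigate_event_log
        data:
          message: "{{ trigger.payload }}"

You could then grep the log afterwards and compare the scores of real detections against the false positives when tweaking settings.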