What you are asking for doesn't currently exist, yet you keep asking if it does.
Put in a feature request on GH.
This is one of the ways the DOODS and Deepstack integrations/add-ons work.
I have used both: I use my camera's motion detection to trigger an automation that calls the image processing service, which then runs detection on the camera's current frame.
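A minimal sketch of that automation, assuming a hypothetical motion sensor and DOODS entity (both entity IDs below are placeholders; image_processing.scan is the standard service):

automation:
  - alias: "Scan driveway frame on motion"
    trigger:
      # Hypothetical motion sensor exposed by your camera's integration
      - platform: state
        entity_id: binary_sensor.driveway_motion
        to: "on"
    action:
      # Run DOODS/Deepstack detection on the camera's current frame
      - service: image_processing.scan
        entity_id: image_processing.doods_driveway  # placeholder entity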
I have spent the day tweaking the input_args to no avail. Is there an explainer somewhere that I could use to fine-tune my efforts? The documentation doesn't go into detail about the input_args (a sketch of what overriding them looks like follows the config below).
I seem to have a "dead zone" in my person detection. If anyone has tips that could help, I would appreciate it!
Camera: [IPC-T5442TM-AS-LED]
Here is the config for the camera:
##################################
Driveway:
  ffmpeg:
    inputs:
      - path: >-
          rtsp://USER:PASSWORD@IPADDRESS/cam/realmonitor?channel=1&subtype=1
        roles:
          - rtmp
      - path: >-
          rtsp://USER:PASSWORD@IPADDRESS:554/cam/realmonitor?channel=1&subtype=0
        roles:
          - clips
          - detect
  # Driveway zone
  zones:
    CarsandYard:
      coordinates: 2401,1520,2372,1391,2272,858,2120,791,1991,732,1277,753,691,717,234,782,0,835,0,1520
  clips:
    enabled: true
    pre_capture: 10
    post_capture: 10
    objects:
      - person
      - car
  objects:
    track:
      - person
      - car
    filters:
      car:
        min_area: 5000
        max_area: 100000
        min_score: 0.35
        threshold: 0.74
        mask:
          - 802,623,609,920,555,1101,635,1249,851,1310,1119,1296,1296,837,1348,658,1303,463,988,444
          - 1594,463,1820,470,1844,513,1905,609,1999,1112,2006,1413,1834,1520,1331,1493,1228,1380,1425,466
          - 1738,0,1766,158,1543,165,1418,167,1413,0
          - 837,346,1402,282,1343,190,503,235,461,369
          - 2316,0,2648,278,2305,268,2039,38
          - 2563,58,2565,125,1784,118,1759,39
      person:
        min_area: 5000
        max_area: 100000
        min_score: 0.25
        threshold: 0.72
        mask:
          - 908,362,927,501,1051,440,1051,350
  # Driveway detect resolution
  width: 2688
  height: 1520
  fps: 7
  # Driveway advanced motion settings
  motion:
    threshold: 16
    contour_area: 90
    delta_alpha: 0.25
    frame_alpha: 0.20
    frame_height: 300
    # Driveway motion mask
    mask:
      - 0,0,1030,0,1131,200,626,510,482,593,299,651,0,741
      - 2688,764,2589,800,2465,922,2352,1225,2326,1520,2688,1520
      - 2563,58,2565,125,1784,118,1759,39
Not sure about the time; it works fine on mine. I'm in GMT and running on HassOS.
As for false positives, I've reduced mine by playing around with the object max and min sizes. My partner didn't take too well to being classified as 81% dog either.
Has anyone had any issues with power or heat with the Google Coral USB on an RPi4? I've got my Frigate/HassOS instance running with 6 cameras in an actively cooled Argon M.2 case. I went with a WD Blue M.2 SSD and am booting HassOS from it as well as storing clips on it. It was running fine for a few days, then the SSD decided to crap out when Frigate launched. More context here and here.
At the moment I think it's related to power and heat. A few days ago I moved the Coral USB to the GPIO cover where the fan exhaust is. Running at 70-90% CPU constantly while overclocked generates some heat! The Coral datasheet suggests it doesn't like ambient temps above 25°C! So even though my Coral was relatively cool to the touch, a bit of heat may have made the current draw jump, or caused errors on the USB3 bus.
I'd be interested to know at which speed the TPU is operating in Frigate, as according to the datasheet this can affect power consumption:
When you first set up the USB Accelerator, you can select whether the device operates at the maximum clock frequency or the reduced clock frequency. The maximum frequency is twice the reduced setting, which increases the inferencing speed but also increases power consumption. To change the clock frequency at which the device operates, simply install the alternative runtime, as described in the instructions for how to install the Edge TPU runtime.
Am I being ignored on purpose?
Do I ask the wrong questions?
In the wrong way?
In the wrong place?
Please let me know and I will change that.
Thanks in advance…
Any particular reason you are running with threshold, contour_area, delta_alpha, etc.? Have you tried just removing all of those and running with the default settings? It's also hard to tell since you aren't showing where the motion and object masks are located on the screen. A minimal sketch is below.
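A minimal version of that motion section, assuming you keep only the mask (taken from the config above) and let every tuning value fall back to Frigate's defaults:

motion:
  # threshold, contour_area, delta_alpha, frame_alpha and frame_height all
  # removed so Frigate's defaults apply; only the mask is kept.
  mask:
    - 0,0,1030,0,1131,200,626,510,482,593,299,651,0,741
    - 2688,764,2589,800,2465,922,2352,1225,2326,1520,2688,1520
    - 2563,58,2565,125,1784,118,1759,39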
Stupid question: is it normal behaviour that when I open the cam via the Frigate component the stream is fluent, but if I use the created camera entity it stutters at 5 frames/s?
People aren't ignoring you intentionally; it's more that they may not know the answer and hope someone else can help. It's not an error I'm familiar with, and reading online, a Fatal Python error: Bus error
sounds like something that may be specific to your hardware: disk/swap space is low or exhausted, memory is low, a kernel issue/bug, etc. Coming from personal experience with some issues I initially had with Frigate, they weren't Frigate-related but were either my own hardware issues or (in my specific case) problems with network communication to the cameras, which makes it harder for others to solve the problem.
No, it's not normal; mine play fine via HA. Maybe it's a bandwidth issue with your HA instance? I have "preload stream" unticked on all of the cameras too.
To elaborate on my issue, I'm now thinking heat isn't the cause but power. The RPi can only deliver a max of 1.5A across all the USB ports, and the SSD has an average peak of 667mA. The Google Coral at max speed has a peak of 900mA. I'd still like to know which runtime is being used, libedgetpu1-std or libedgetpu1-max, and whether this can be changed in HassOS. The datasheet doesn't mention the peak draw under std, but I would have thought it saves more than the 167mA I'm over…
Definitely get a powered USB hub; that will solve your power issue. Those Pis are really underpowered on the USB bus.
I guess I have preload stream activated, but with it activated I get a delay in the video.
I had a very similar problem and used the following solutions.
Good Luck!
I have tried removing those, but night-time detection is a bit worse without them.
Also, would "car" object masks affect "person" detection?
Here are the motion masks:
Is MJPEG faster and less CPU-hogging than h264? Would it be better to use MJPEG for Frigate?
I'm thinking about using ZoneMinder's MJPEG streams, as I'm already using ZM for recording.
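A sketch of pointing a Frigate camera at a ZoneMinder MJPEG stream, assuming ZM's standard nph-zms URL (host, monitor ID, credentials, and resolution below are placeholders). Whether it actually saves CPU depends on your setup; without hardware acceleration, decoding MJPEG is not necessarily cheaper than h264:

zm_camera:
  ffmpeg:
    inputs:
      # Hypothetical ZoneMinder MJPEG stream; substitute your own host,
      # monitor ID and credentials.
      - path: http://ZM_HOST/zm/cgi-bin/nph-zms?mode=jpeg&monitor=1&maxfps=5&user=USER&pass=PASS
        roles:
          - detect
  width: 1280
  height: 720
  fps: 5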
Sorry for asking this a 2nd time… Two questions:
First, is it at all possible for Frigate to save a clip when it detects movement (above the adjusted trigger) and only then attempt object detection, so the movement clip gets saved either way?
Second, I'd like to be able to see the bounding box and/or motion box saved into the clip as well (as an option).
I've found a Kingston SSD which has a whole watt less of peak power, so it should bring me under the max load. I want to avoid hubs and anything hanging off the case as much as possible. Amazingly, I've managed to fit the Coral inside the Argon M.2 case; I'm just waiting for a cable to connect it up. It will look nice and tidy as long as the Kingston SSD delivers!
I get this error on my Reolink stream:
frigate.video INFO : tuinhuis: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : tuinhuis: ffmpeg process is not running. exiting capture thread...
my config:
cameras:
  # Name of your camera
  garage: ##### HIKVISION #####
    ffmpeg:
      inputs:
        - path: rtsp://****:***@***********:******/Streaming/Channels/1
          roles:
            - detect
            - rtmp
    width: 1280
    height: 720
    fps: 5
  # Name of your camera
  tuinhuis: ##### REOLINK 520 ######
    ffmpeg:
      inputs:
        - path: rtsp://********:*******@*******:***/h264Preview_01_main
          roles:
            - detect
            - rtmp
      input_args:
        - -avoid_negative_ts
        - make_zero
        - -fflags
        - nobuffer
        - -flags
        - low_delay
        - -strict
        - experimental
        - -fflags
        - +genpts+discardcorrupt
        - -rw_timeout
        - '5000000'
        - -use_wallclock_as_timestamps
        - '1'
    width: 1280
    height: 720
    fps: 5
Without the input args I get a broken video, but it is streaming; with the args it's just a green screen.
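For what it's worth, a sketch of one workaround other Reolink owners have reported: using the camera's http-flv stream instead of RTSP. The URL parameters assume Reolink's default FLV service, and the IP and credentials are placeholders:

tuinhuis:
  ffmpeg:
    inputs:
      # Hypothetical http-flv input; some Reolink models stream this more
      # reliably than RTSP. Substitute your own IP and credentials.
      - path: http://CAMERA_IP/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=USER&password=PASS
        roles:
          - detect
          - rtmp
  width: 1280
  height: 720
  fps: 5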