You have to define the mask under filters as well, for each object. See the post here.
There are two types of masks available:

- Motion masks: Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the video feed with Motion Boxes enabled to see what may be regularly detected as motion. For example, you want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. Over-masking will make it more difficult for objects to be tracked. To see this effect, create a mask, and then watch the video feed with Motion Boxes enabled again.
- Object filter masks: Object filter masks are used to filter out false positives for a given object type. These should be used to filter any areas where it is not possible for an object of that type to be. The bottom center of the detected object's bounding box is evaluated against the mask. If it is in a masked area, it is assumed to be a false positive. For example, you may want to mask out rooftops, walls, the sky, and treetops for people. For cars, masking locations other than the street or your driveway will tell Frigate that anything in your yard is a false positive.
These are separate options because the technical implementation of the two masks is completely different: one blacks out the image for motion detection, and the other is used to evaluate whether or not a given point is within the polygon.
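For concreteness, here is a minimal sketch of how the two masks might sit side by side in the config. The camera name and coordinates below are placeholders, but the keys follow the docs: motion masks live under motion, object filter masks under objects.filters per label.

cameras:
  front:                 # hypothetical camera name
    motion:
      mask:
        # pixels inside this polygon are blacked out before motion detection
        # (e.g. a timestamp overlay)
        - '0,0,640,0,640,30,0,30'
    objects:
      filters:
        person:
          mask:
            # a person whose bounding-box bottom center lands in this polygon
            # is treated as a false positive (e.g. sky and rooftops)
            - '0,0,640,0,640,100,0,100'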
@ebendl I also have these small live streams in HA. The HA stream component is not an alternative for me; it needs ~5 seconds to start the live stream in Lovelace, which is not acceptable.
I found a workaround for this. You need to edit custom_components/frigate/camera.py at line 51:
from
self._latest_url = urllib.parse.urljoin(self._host, f"/api/{self._name}/latest.jpg?h=277")
to:
self._latest_url = urllib.parse.urljoin(self._host, f"/api/{self._name}/latest.jpg")
Then you get the original RTMP resolution in HA, the same as if you viewed the stream from Frigate.
Does it work for you?
I tried this, and it works when run manually. But with Frigate, if I add -c:v h264_qsv to the input, I see the following errors:
ffmpeg.garage.detect ERROR : Input #0, rtsp, from 'rtsp://192.168.0.129:554/cam/ch0_0.h264':
ffmpeg.garage.detect ERROR : Metadata:
ffmpeg.garage.detect ERROR : title : Session streamed by "ISD RTSP Server"
ffmpeg.garage.detect ERROR : Duration: N/A, start: 1611327783.865822, bitrate: N/A
ffmpeg.garage.detect ERROR : Stream #0:0: Video: h264, yuvj420p(pc, bt709, progressive), 2560x1440 [SAR 1:1 DAR 16:9], 10 fps, 9.67 tbr, 90k tbn, 20 tbc
ffmpeg.garage.detect ERROR : Stream #0:1: Audio: aac, 44100 Hz, mono, fltp
ffmpeg.garage.detect ERROR : Stream mapping:
ffmpeg.garage.detect ERROR : Stream #0:0 -> #0:0 (h264 (h264_qsv) -> rawvideo (native))
ffmpeg.garage.detect ERROR : Press [q] to stop, [?] for help
ffmpeg.garage.detect ERROR : Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'
ffmpeg.garage.detect ERROR : Error reinitializing filters!
ffmpeg.garage.detect ERROR : Failed to inject frame into filter network: Function not implemented
ffmpeg.garage.detect ERROR : Error while processing the decoded data for stream #0:0
ffmpeg.garage.detect ERROR : Conversion failed!
Then I added -c:v h264_qsv to the output:
ffmpeg.garage.detect ERROR : Input #0, rtsp, from 'rtsp://192.168.0.129:554/cam/ch0_0.h264':
ffmpeg.garage.detect ERROR : Metadata:
ffmpeg.garage.detect ERROR : title : Session streamed by "ISD RTSP Server"
ffmpeg.garage.detect ERROR : Duration: N/A, start: 1611328556.271633, bitrate: N/A
ffmpeg.garage.detect ERROR : Stream #0:0: Video: h264, yuvj420p(pc, bt709, progressive), 2560x1440 [SAR 1:1 DAR 16:9], 10 fps, 10 tbr, 90k tbn, 20 tbc
ffmpeg.garage.detect ERROR : Stream #0:1: Audio: aac, 44100 Hz, mono, fltp
ffmpeg.garage.detect ERROR : Stream mapping:
ffmpeg.garage.detect ERROR : Stream #0:0 -> #0:0 (h264 (h264_qsv) -> h264 (h264_qsv))
ffmpeg.garage.detect ERROR : Press [q] to stop, [?] for help
ffmpeg.garage.detect ERROR : Incompatible pixel format 'yuv420p' for codec 'h264_qsv', auto-selecting format 'nv12'
ffmpeg.garage.detect ERROR : Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'
ffmpeg.garage.detect ERROR : Error reinitializing filters!
ffmpeg.garage.detect ERROR : Failed to inject frame into filter network: Function not implemented
ffmpeg.garage.detect ERROR : Error while processing the decoded data for stream #0:0
ffmpeg.garage.detect ERROR : Conversion failed!
It looks like there are some problems caused by the pixel format change. We probably need to add -vf hwdownload,format=nv12 or something like it (see https://trac.ffmpeg.org/wiki/Hardware/QuickSync, Full Examples).
I also found a similar issue for the HEVC (H.265) codec, but I'm not sure how to select the proper parameters for my case: https://trac.ffmpeg.org/ticket/7691
My current ffmpeg config:
ffmpeg:
  global_args: -hide_banner -loglevel info
  hwaccel_args: -hwaccel qsv -qsv_device /dev/dri/renderD128
  input_args: -c:v h264_qsv -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1
  output_args:
    detect: -c:v h264_qsv -f rawvideo -pix_fmt yuv420p
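If the hwdownload route is right, I would guess the detect output args end up looking something like this (untested on my side, adapted from the QuickSync wiki's full examples; note that the encoder-side -c:v h264_qsv is dropped, since the target is rawvideo, not an encode):

output_args:
  # download frames from GPU memory as nv12, then let ffmpeg convert to yuv420p
  detect: -vf hwdownload,format=nv12 -f rawvideo -pix_fmt yuv420p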
Hi @ekos2001
I have a different issue; have a look at the GitHub issue I opened.
My issue is probably different because in my case I have cut off the detect portion.
I am currently focusing on just creating a clip.
Awesome! I'm glad the docs worked. I'll PM you about getting search set up, if you haven't looked into it already.
@blakeblackshear
Can you please share the correct hwaccel arguments for an Atomic Pi? I can't find out what generation it is.
Is there also a way to verify that the correct driver is in use and that the GPU is actually used? I think it should work with intel_gpu_top?
Right now I am using the following arguments, but I have mixed luck… the Atomic Pi sometimes just hangs. Not sure if it is related, but I want to make sure my Frigate setup is correct.
hwaccel_args:
  - -hwaccel
  - qsv
  - -hwaccel_device
  - /dev/dri/renderD128
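For the driver/GPU check, these are the commands I have been poking at it with (assuming intel-gpu-tools and libva-utils are installed on the host; not sure this is the recommended way):

# per-engine busy %; the Video engine should be non-zero while Frigate decodes
sudo intel_gpu_top
# prints the loaded VA-API driver (iHD vs i965) and supported profiles
vainfo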
Thanks
Is it possible to run the Docker container on another machine and still use the HASS integration?
Yes, that's how I am doing it.
Run the Docker container wherever you want, and use HACS to integrate Frigate into HASS.
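In case it helps, here is a stripped-down sketch of the kind of compose file I mean (image tag, ports, and paths are from my memory of the 0.8 docs, so double-check them against your setup):

version: '3'
services:
  frigate:
    image: blakeblackshear/frigate:stable-amd64
    restart: unless-stopped
    devices:
      - /dev/bus/usb:/dev/bus/usb   # USB Coral passthrough, if you have one
    volumes:
      - ./config.yml:/config/config.yml:ro
      - ./media:/media/frigate
    ports:
      - '5000:5000'   # web UI / API; point the HA integration at this host:port
      - '1935:1935'   # RTMP restream used for the camera entities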
I used to run it locally on my NUC, but with no Coral it uses about 80% of my CPU.
I've installed Frigate on my old i7 laptop, but the HASS integration is not able to find Frigate anymore. Does this mean that I have to create all of the sensors and cameras manually?
Edit: typo…
You'll need to re-configure the integration in HA to point to the IP of your other machine.
Alright… Now let's find where…
All help is appreciated.
In Home Assistant: Configuration > Integrations, then remove Frigate and re-add it
Of course… That did the trick… Big thanks!
RC4 is published. Hopefully this wraps up the release.
Any way to integrate the new UI into a Lovelace dashboard view?
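Would something like HA's built-in iframe card be the way, e.g. (host and port are whatever your Frigate instance uses):

type: iframe
url: 'http://frigate-host:5000/'
aspect_ratio: 56%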
Is the documentation up to date? I tried an automation in HA, but the trigger does not work, and with MQTT Explorer I didn't see any 'frigate/events' topic on object detection. Setting logging to:
logger:
  logs:
    frigate.mqtt: debug
also does not show any information.
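One more way I can check, independent of HA, is subscribing straight from the command line (mosquitto_sub from the mosquitto-clients package; substitute real credentials):

mosquitto_sub -h core-mosquitto.local.hass.io -u USER -P PASSWORD -t 'frigate/events' -v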
It is up to date. Do you have detection enabled?
Yes, I think so. My config:
detectors:
  coral:
    type: edgetpu
    device: 'usb:0'
mqtt:
  host: core-mosquitto.local.hass.io
  user: *****
  password: ****
ffmpeg:
  hwaccel_args:
    - '-hwaccel'
    - vaapi
    - '-hwaccel_device'
    - /dev/dri/renderD128
    - '-hwaccel_output_format'
    - yuv420p
cameras:
  hik1:
    ffmpeg:
      inputs:
        - path: 'rtsp://user:[email protected]:554/Streaming/Channels/102'
          roles:
            - detect
            - rtmp
        - path: 'rtsp://user:[email protected]:554/Streaming/Channels/101'
          roles:
            - clips
            - record
    height: 360
    width: 640
    fps: 6
    zones:
      trepp:
        coordinates: '208,200,368,139,321,66,206,63'
      parkimine:
        coordinates: '185,360,577,360,394,137,123,258'
    clips:
      enabled: true
    record:
      enabled: true
      retain_days: 2
    snapshots:
      enabled: true
    detect:
      enabled: true
      max_disappeared: 10
clips:
  tmpfs_cache_size: 256m
  retain:
    default: 5
    objects:
      person: 10
snapshots:
  retain:
    default: 5
    objects:
      person: 10
logger:
  logs:
    frigate.mqtt: debug
    frigate.edgetpu: info