Works with a NUC without problems.
Can someone please explain how RTMP is used?
In particular, I am trying to find a way to have the least possible lag in the video. When I open the camera from the Frigate integration, I have about 1 second of delay. But when I open it from the picture element that I placed in my Lovelace, I have around 10 seconds of delay.
Can RTMP help with that?
I have it all messed up in my head!
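For context, this is my (possibly wrong) understanding of how the rtmp role is wired up; the camera name, credentials and paths below are placeholders. Giving an input the rtmp role makes Frigate re-stream that camera, and the restream is then available at rtmp://<frigate_host>/live/<camera_name>:

cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@camera-ip:554/stream
          roles:
            - detect
            - rtmp
    width: 1920
    height: 1080
    fps: 5
    # The restream can then be consumed at rtmp://<frigate_host>/live/front_door

Whether that actually lowers the lag in Lovelace depends on how the card consumes the stream, so I am not sure it helps.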
Did you happen to figure out if this would work? I'm curious too since I have a board that has a similar slot. I'm guessing it won't since I don't think it has the right PCI-E lanes, but may be worth a try.
You could try the AlexxIT WebRTC card.
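Something like this, once the AlexxIT WebRTC integration is installed (untested sketch; the URL is a placeholder for your camera's RTSP stream):

type: custom:webrtc-camera
url: rtsp://user:pass@camera-ip:554/stream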
@blakeblackshear A very big thank you for your work. Well done... it's amazing realtime person detection.
I have a Docker Home Assistant installation on a NAS (OMV), and I installed Frigate on a different server than Home Assistant.
I installed the blakeblackshear/frigate-hass-integration and it created many sensors, but most of them, all the object sensors, are always unavailable.
What did I do wrong? (no evident error in the logs)
Thank You
my card is on the way
Home Assistant and Frigate must be connected to the same MQTT server.
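For example, if the broker lives at 192.168.1.10 (a placeholder), Frigate's config has to point at the same host that Home Assistant's MQTT integration uses:

mqtt:
  host: 192.168.1.10
  port: 1883
  user: mqtt_user
  password: mqtt_password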
Wow, great, thank you... it was just that.
I'm migrating my MQTT server and using the new one with a proxy gateway to the old one.
Now I changed the Frigate one and it works!
Thank you!
Where is more documentation available for this section:
# Optional: Zone level object filters.
# NOTE: The global and camera filters are applied upstream.
filters:
  person:
    min_area: 5000
    max_area: 100000
    threshold: 0.7
in here:
What is the correct syntax in config.yml to apply a mask to an object?
E.g. I want to mask a parking place only for the object car, and not for persons etc...
There is a mask maker in the Frigate UI >> cameras >> select camera live view >> show options >> mask and zone creator.
You can create a mask/zone and then copy/paste it into your config. Just correct the spacing.
Sorry, that is not an answer to my question. That part I already had... I asked how to set the mask for a specific type of object, and then asked for an example with correct spacing.
example for 'person' in documentation for 'objects'
Yes, I have seen that, but there is no information I can find for the 'filters:' part
(see my other question Local realtime person detection for RTSP cameras - #4686 by sender).
And this example is not clear to me regarding this part of my question:
So if this is my camera config:
schuurachter:
  ffmpeg:
    inputs:
      - path: rtsp://admin:[email protected]:554/Streaming/Channels/101/
        roles:
          - detect
          - clips
          # - rtmp
  width: 2048
  height: 1536
  fps: 4
  snapshots:
    enabled: true
    crop: false
    bounding_box: True
  objects:
    track:
      - person
      - car
      - dog
      - bicycle
      - cat
  motion:
    mask:
      - '443,106,775,108,679,365,294,371'
      - '2048,371,2048,585,1136,401,1090,130'
  clips:
    # Required: enables clips for the camera (default: shown below)
    # This value can be set via MQTT and will be updated in startup based on retained value
    enabled: True
    # Optional: Number of seconds before the event to include in the clips (default: shown below)
    pre_capture: 2
    # Optional: Number of seconds after the event to include in the clips (default: shown below)
    post_capture: 2
    # Optional: Objects to save clips for. (default: all tracked objects)
    objects:
      - person
      - car
      - dog
      - bicycle
      - cat
    # Optional: Restrict clips to objects that entered any of the listed zones (default: no required zones)
    required_zones: []
    # Optional: Camera override for retention settings (default: global values)
    retain:
      # Required: Default retention days (default: shown below)
      default: 10
      # Optional: Per object retention days
      objects:
        person: 100
        car: 50
        dog: 50
        cat: 10
        bicycle: 50
What would that look like if I want cars NOT detected in the masked zone, but still have person, dog, bicycle and cat detected in it?
filters is where object-specific config parameters are placed.
In this case I don't think there is further detail to be provided, since those items are documented in the camera-level documentation:
objects:
  track:
    - person
    - car
  # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
  # Checks based on the bottom center of the bounding box of the object.
  # NOTE: This mask is COMBINED with the object type specific mask below
  mask: 0,0,1000,0,1000,200,0,200
  filters:
    car:
      min_area: 5000
      max_area: 100000
      min_score: 0.5
      threshold: 0.7
      # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object
      mask: 0,0,1000,0,1000,200,0,200
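And for the zone-level filters from the original question, the same keys just nest under the zone instead (sketch; the zone name and coordinates are made up):

zones:
  parking_space:
    coordinates: 0,0,1000,0,1000,200,0,200
    filters:
      person:
        min_area: 5000
        max_area: 100000
        threshold: 0.7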
I do not get that documentation. I can't get it working, because after reading it I do not know where, in what order, and with what indentation to place things. How do I split the
objects:
  track:
    - person
    - car
    - dog
    - bicycle
    - cat
so that only car matches the mask?
A mask prevents either motion or an object from being detected in a masked area. It is not a 'match only this in the masked area' mask.
There is no 'inverted mask' as far as I'm aware; you'd have to mask all other objects all around the area you don't want them to be detected in.
So, to NOT match a person in 95% of the image, you'd have to mask person in 95% of the image.
I understand. But what do I practically need to do:
to prevent detection of cars in a parking space
to not prevent any other object detection in that parking space
Currently, on every 'motion' (wind, light, clouds) I get lists full of cars that are parked.
And please help me with the YAML layout.
To prevent the car from being detected in the parking space, create a car object mask so it covers the bottom-middle of the bounding box. Detection is based on the bottom-middle point of the bounding box, so that is the point you need to mask.
In my case, I mask about the bottom 10% of my garage with a car mask... this means the car can be detected as it approaches the garage, but not when it is parked (because when it is parked, the bottom-middle of the bounding box is in the bottom 10% of the image).
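For your config that would look roughly like this (untested; the polygon is a placeholder, copy the real coordinates for your parking space from the mask and zone creator):

schuurachter:
  # ... ffmpeg, width, height, fps etc. as in your config above ...
  objects:
    track:
      - person
      - car
      - dog
      - bicycle
      - cat
    filters:
      car:
        # Only the car object type is masked here, so person, dog,
        # bicycle and cat are still detected in the parking space.
        mask: '100,400,400,400,400,600,100,600'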
Sorry, I don't get that... do you have a picture of it?