I said trigger, but I mean snapshot. I’m interested in having Frigate take a snapshot only when a car drives along the road. So are zones still needed for this, or can a motion and object mask accomplish this? This isn’t HA related; I just need the snapshot saved to a folder. While I’m thinking of this, can the snapshot folder be changed from /media/frigate/clips?
Just for clarification…
Why have motion if there’s no use of it?
If Frigate is only for detecting objects all the time, why use motion to trigger an event?
I mean, if motion triggers an event, why should it continuously search for objects even when no motion is detected?
Why not use motion to trigger object detection and then when no motion is detected stop searching?
Sorry for the stupid questions… Just want to get a grip on what Frigate is.
The clips folder cannot be customized, but you can mount whatever path you want from your host OS to that location inside the container with Docker.
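For example, with docker-compose that remapping might look like this (the host path and image tag are placeholders for your own setup):

```yaml
# Sketch only: bind-mount a host folder over Frigate's fixed clips path.
# The container-side path must stay /media/frigate/clips.
services:
  frigate:
    image: blakeblackshear/frigate:0.8.0-amd64   # example tag
    volumes:
      - /srv/frigate/clips:/media/frigate/clips  # host side is up to you
```

Anything Frigate writes to its clips folder then lands in the host folder you chose.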
You can use an object mask to prevent cars from being detected in areas other than the road. Zones don’t prevent clips or snapshots from being created; they just track whether or not the object enters the zone.
Frigate most definitely uses the motion. The best way to see this in action is to use the live view in the web ui.
First, turn on motion boxes. This will create a red bounding box around any areas where motion is detected.
Next, turn on regions. You will see green boxes that indicate where Frigate determines object detection should be used to look for objects. This is calculated based on the bounding box of the known object in the previous frame and how it intersects with detected motion. This helps it follow objects as they move frame to frame.
Last, turn on bounding boxes to see the actual detected objects.
Frigate tries to maintain the same object id for a tracked object. If it were to stop looking when motion stopped, the same object would later be picked up with a new id if it stopped moving briefly. I need to allow objects to stop moving and assume they are still there to maintain a consistent object id for tracking.
Does the integration-provided stream work for others in HA?
Using 0.8.0 and the latest integration, when I click the frigate camera entity to bring up the streaming window it will not stream for me. If I close the window and try a few more times I may be able to get a 6s segment to play. I’ve tried through my normal path through traefik, direct to home assistant, android 11 (chrome and ha app), pc, chromebook, etc.
I’m still testing to see if it’s a frigate integration camera thing or an HA 2021.2 thing. Just seeing if I’m alone.
Oh, I just tried the rtmp stream directly in vlc and it seems to only play a 6s segment as well once it loads (10s). (still testing)
edit: well, step 1 - restart. I noticed I wasn’t getting events, so something must have been going on. I restarted the frigate container and streams and events work again.
Thanks for the explanation Blake!
One last issue I have. It’s morning again and I’m getting the false triggers with long shadows that I mentioned yesterday. Here is the current mask, and you can see where the person was detected with the bounding box. Again, this only happens from approx 7am to 9am during sunrise. So does the mask need to be made a bit larger? Is it detecting the shadow first and then jumping to the people inside the mask?
That mask is for motion. The shadows are being detected as motion and telling frigate to look for objects. You want to use an object mask to mask out object detection in that area.
Perfect. I ended up creating two identical masks: one for motion and the other for objects. They cover the same area of the sidewalk since they are identical. I take it that is correct and OK to do?
My goal is that I don’t want pedestrians on the sidewalk to cause motion alerts. I only want triggers if they actually come into my driveway. It’s a somewhat unique scenario, since those shadows cause movement during a certain time period in an area that is outside the motion mask.
Now I understand the difference between motion and object masks. Motion (shadows) outside the mask causes Frigate to look everywhere for the object type being detected (person in my config). It just so happens that those shadows are of the object person, so Frigate sees the motion shadows outside the mask, then sees the person object moving across the FOV and begins tracking. By using an object mask, I’m telling Frigate to ignore that person in the masked area entirely. Man, it took me a while to wrap my head around that.
Here is a snippet of the dual masks I set up.
motion:
  mask: 0,129,0,0,703,0,704,282,578,234
clips:
  enabled: True
  pre_capture: 5
objects:
  track:
    - person
  filters:
    person:
      mask: 0,129,0,0,703,0,704,282,578,234
snapshots:
  enabled: True
  bounding_box: True
  retain:
    default: 10
I’m seeing some ffmpeg errors in the log every once in a while and could use some help deciphering them.
watchdog.frigate_driveway INFO : No frames received from frigate_driveway in 20 seconds. Exiting ffmpeg...
watchdog.frigate_driveway INFO : Waiting for ffmpeg to exit gracefully...
frigate.video INFO : frigate_driveway: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : frigate_driveway: ffmpeg process is not running. exiting capture thread...
watchdog.frigate_frontyard INFO : No frames received from frigate_frontyard in 20 seconds. Exiting ffmpeg...
watchdog.frigate_frontyard INFO : Waiting for ffmpeg to exit gracefully...
frigate.video INFO : frigate_frontyard: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : frigate_frontyard: ffmpeg sent a broken frame. read of closed file
frigate.video INFO : frigate_frontyard: ffmpeg process is not running. exiting capture thread...
Running the 0.8.0 RC6 addon in a VM running HassOS 2021.1.5, and if I go to the Frigate webpage and look at the debug page it seems to still be running fine. I even went and walked around my driveway to test, and it saw me and I got my notification, so it still works.
In my config I have this as I have a gen 10 intel cpu:
ffmpeg:
  hwaccel_args:
    - '-hwaccel'
    - qsv
    - '-qsv_device'
    - /dev/dri/renderD128
Nothing else for ffmpeg that isn’t default.
I don’t use clips or recordings as I have an NVR doing 24/7 recording already. I just use this for notifications for a person or car.
Is there anything else I should look into?
Thanks
No harm in making them the same.
With only a motion mask, frigate will ignore motion that originates in masked areas, but when object detection runs objects may still be picked up in those masked areas AND tracked objects will still be followed into masked areas.
With only an object mask, frigate will detect motion in those areas and run object detection. Objects will be detected, but they will be ignored as false positives.
Your use case is actually what zones were designed for. I would recommend creating a zone for the areas where you want to be notified and using a condition to only send notifications if the object has entered a zone. This allows frigate to identify and start tracking objects anywhere in the frame and notify you the instant they cross into the zone. It takes a few frames of consecutive detections for an object to reach the threshold value. With this approach, when a person is walking by on the sidewalk, frigate will identify and follow them. The moment they take a single step up your driveway, you will be notified. The downside is that you will get clips and snapshots regardless of whether they enter a zone (until I add the feature to limit creation).
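A minimal sketch of that zone setup (the coordinate list here is a placeholder; you would draw your own polygon over just the driveway):

```yaml
cameras:
  driveway:
    zones:
      # placeholder polygon covering only the driveway, not the sidewalk
      driveway_zone:
        coordinates: 0,480,100,300,400,280,704,480
```

The MQTT event payload then reports the zones the object has entered, so a notification automation can require driveway_zone before alerting.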
The downside of your current config is that when a person steps on your driveway, only their foot will be detected as motion and frigate will run detection on that area only (with some padding). That may not be recognized as a person. Only once they enter enough of the unmasked area will frigate see a person, and then it will take at least 3 (probably more) frames for it to be a confirmed positive.
You can try increasing the log level for ffmpeg to see what it outputs around the time the camera stops responding. You may need to tweak some params for ffmpeg. The next version (0.8.1) will include improved log output from ffmpeg any time the process exits.
That is what I thought, and why I decided not to use the zone feature. I only want those snapshots saved when they cross into the driveway. For this particular use case I don’t need MQTT based alerts. I just want the snapshot dumped into the folder, and then I’ll use a bash script to move it out and into a SAMBA folder for viewing. So it’s important that the snapshot created is actually the one I want.
I’ll use the double mask for now and see how it goes.
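Such a mover script could be sketched like this (the paths and the one-minute settle time are assumptions for illustration, not anything Frigate mandates):

```shell
#!/bin/sh
# move_snapshots SRC DST: move snapshot JPEGs out of SRC into DST,
# skipping files modified in the last minute so we never grab a
# snapshot that Frigate is still writing.
move_snapshots() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  find "$src" -maxdepth 1 -name '*.jpg' -mmin +1 -exec mv {} "$dst" \;
}
```

Run it from cron every few minutes, e.g. `move_snapshots /media/frigate/clips /mnt/samba/snapshots` (both paths hypothetical).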
At least with my cams, where the audio streams are μ-law, just removing ‘-an’ didn’t work. I had to add ‘-c:a aac’ to the recording streams instead.
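In 0.8-style config that would mean overriding the per-role output args, roughly like this (the surrounding segment options shown are my recollection of the stock defaults and may differ by version):

```yaml
ffmpeg:
  output_args:
    # transcode the μ-law audio to AAC instead of dropping it with -an
    record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -c:a aac
    clips: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -c:a aac
```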
This project is nothing short of breathtaking; looking forward to switching to the new version soon.
Is there any way to pull the snapshots from a high resolution stream? I mean, if I’m using detect with a lower resolution substream, can snapshots then be written from a higher resolution substream like the one you would use for writing clips? I guess I can detect from the high resolution stream, but I thought I remembered reading that detection quality is actually reduced when pulling from a high resolution stream.
It’s not possible. The stream has to be decoded to create an image, and only the stream with the detect role is actually decoded by Frigate. Everything else is a direct pass through. If you have the extra CPU, you can use the higher resolution stream for detect. Detection quality won’t be reduced, but it won’t be improved unless you are detecting objects smaller than the model size.
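In 0.8 config terms, that role split looks roughly like this (stream URLs are placeholders):

```yaml
cameras:
  driveway:
    ffmpeg:
      inputs:
        # only the detect stream is decoded, so snapshots come from here
        - path: rtsp://camera_ip/low_res_substream
          roles:
            - detect
        # passed through untouched for recordings and the RTMP relay
        - path: rtsp://camera_ip/high_res_mainstream
          roles:
            - clips
            - rtmp
```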
Ok, I understand. Thanks for that! I’ll play with detecting from a higher quality stream and see how that affects CPU usage. Last question: is this the proper way to run dual identical masks?
motion:
  mask: 492,0,704,0,704,67,533,77,21,126,27,194,319,171,704,480,0,480,0,207,0,0
objects:
  track:
    - car
    - person
  filters:
    car:
      mask: 492,0,704,0,704,67,533,77,21,126,27,194,319,171,704,480,0,480,0,207,0,0
    person:
      mask: 492,0,704,0,704,67,533,77,21,126,27,194,319,171,704,480,0,480,0,207,0,0
I didn’t see an example anywhere and it seems to work but wanted to make sure.
That looks correct.
Is the behavior of snapshots to take a different snapshot every time Frigate thinks a new event happened, or does it also take a new snapshot when it updates its confidence level? Here is an example of a couple walking down the street. They didn’t stop and kept walking through the entire FOV. The snapshot bounding boxes show Frigate locked onto the woman 3 times and the man 2 times, so I got 5 snapshots of essentially the same thing. It seems to happen less for cars, I guess because they are moving through the FOV quicker?
I guess there is no way to just get the snapshot from the highest confidence level, OR the first confirmed object it sees no matter the confidence level, instead of a dump of multiple snapshots?
I have the same issue too… At the moment I’m using a delay in my notification automation to minimize this.
However, I still get notifications from the same object after a while…
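One alternative to a fixed delay is deduplicating on the event id in Frigate’s MQTT payload, since a continuously tracked object keeps the same id. A minimal sketch, assuming the standard `frigate/events` message shape with an `after.id` field:

```python
# Track which Frigate event ids we have already alerted on, so each
# tracked object produces at most one notification.
seen_ids: set[str] = set()

def should_notify(payload: dict) -> bool:
    """Return True only the first time a given event id is seen."""
    event_id = payload["after"]["id"]
    if event_id in seen_ids:
        return False
    seen_ids.add(event_id)
    return True
```

Note this won’t help when Frigate genuinely loses track and starts a new event with a new id (as in the walking-couple example above); only a zone check or a time-window dedup would catch that case.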