I do a lot of this with my setup. I use ffmpeg to record 1 minute segments of my high resolution feed and some bash scripting to move those into YYYY/MM/DD/HH folders. Then I use a bash script to clear out mp4 files older than 2 weeks. Then I make that folder available through nginx so I can view the clips. The save_clips feature was designed to capture videos as frigate sees them so I can collect false/true positive examples for testing. However, I did design it in a way that I can have frigate output events that can then be used to assemble mp4 "playlists" of the video clips I store with ffmpeg. As is, you could use ffmpeg to write full resolution videos to the cache directory and frigate will slice and assemble them for you. I plan to bring these things together so there is a "frigate-nvr" project for storing and capturing camera feeds that works alongside frigate itself.
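For anyone wanting to copy the recording and cleanup side, here is a minimal sketch in bash. The camera URL, the /media/nvr storage root, and the exact options are illustrative assumptions, not the author's actual scripts:

# Record 1 minute segments into YYYY/MM/DD/HH folders (placeholder
# rtsp://cam/stream source). ffmpeg's segment muxer will not create
# directories, so something like a cron job has to mkdir each hour's
# folder ahead of time.
mkdir -p "/media/nvr/$(date +%Y/%m/%d/%H)"
ffmpeg -rtsp_transport tcp -i rtsp://cam/stream \
  -c copy -f segment -segment_time 60 -segment_atclocktime 1 \
  -strftime 1 "/media/nvr/%Y/%m/%d/%H/%M.mp4"

# Cleanup: delete mp4 files older than 2 weeks, then prune empty folders.
find /media/nvr -type f -name '*.mp4' -mtime +14 -delete
find /media/nvr -mindepth 1 -type d -empty -delete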
Hey @blakeblackshear. Lately I've been playing with the FFmpeg camera component in hass and I've been noticing a number of the problems that used to plague me and others in frigate happening (streams turning into a smear, streams shutting down and starting up again).
It seems you have done a really effective job of finding workarounds or a setup for whatever has been causing all the stream issues that people used to have (at least you've solved all the ones I had)…
I wonder if you would consider adding some of your accumulated knowledge into the ffmpeg camera component in homeassistant? I would love to see it work as well as frigate does so that I can stop using the stream component and generic camera in homeassistant, which adds a 10 second delay to the stream.
You are really at the forefront of making all this stuff work. Thanks again for a rock solid component.
What are you trying to accomplish by switching to the ffmpeg camera component?
I have never used the ffmpeg camera because I don't want to decode the video feed an extra time, but I am guessing some of the input parameters just need to be utilized to get similar results. The stream component will always have a delay because it is creating an HLS stream to reduce CPU load, and devices like the Chromecast only work with HLS streams. I force my cameras to have an I-frame every second so my delay is minimized.
I run a single instance of ffmpeg to generate an HLS stream and store 1 minute segments of my cameras for long term storage (sketched below). Then I set up my cameras like this:
- name: Backyard Camera
  platform: generic
  still_image_url: http://default-frigate:5000/back/latest.jpg
  stream_source: http://nvr-nginx/stream/back.m3u8
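The ffmpeg side of that setup might look something like the sketch below: one process per camera writing both a short HLS playlist for nginx and the 1 minute archive segments. The paths and the assumption that the stream can be copied without re-encoding are mine, not from the thread.

# One ffmpeg instance, two outputs: an HLS playlist that nginx serves
# (placeholder path /srv/stream) and 1 minute mp4 segments for storage.
ffmpeg -rtsp_transport tcp -i rtsp://cam/back \
  -c copy -f hls -hls_time 2 -hls_list_size 5 \
    -hls_flags delete_segments /srv/stream/back.m3u8 \
  -c copy -f segment -segment_time 60 -strftime 1 \
    "/media/nvr/%Y/%m/%d/%H/%M.mp4"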
It is a bit wasteful that HA creates an HLS stream from an HLS stream, but it works well enough. My HLS stream has a 3-4s delay and HA adds another 1s delay (probably because my I-frame rate is set to the same value as my FPS) to recreate the stream again. Not too bad with a 5s delay, but I don't view the stream often.
Thanks for sharing your expertise on this, I've been fighting with it for more than a year.
So my goal is trying to get rid of that 10-20 second delay. I also have an I-frame every second, but the delay is still 10-20 seconds. I don't actually need any storage or chromecast or any of that… All I need is cameras that show up properly in the UI without delay so that I can use them for video doorbell duty, monitoring the kids while they are outside, and the like. When I use the generic component without stream turned on, I get near real time feeds, but they don't work well on my tablets or iOS devices. The feed often turns into a broken image.
The ffmpeg ones seem much more reliable, but I just noticed they are eating all my CPU, so I can't keep all the streams running properly. This is probably where my problems are actually coming from, rather than the component itself; I just didn't expect my CPU to jump from 20% to 100% just by changing the camera component.
Your idea is a very interesting one… So let me understand: how do you run a single instance of ffmpeg? Do you have just one camera set as ffmpeg and the rest set as generic? Do you run the stream component?
I haven't found any way to run the stream component without the lag, so I'm now trying to find a solution where I don't use it but still get near real time cameras that will display properly in the UI and not use all my CPU…
I run ffmpeg independent of HASS in a separate container, then use the generic camera to point at the HLS stream from that camera. The stream component then makes an HLS stream from my HLS stream. I didn't see any difference in latency with and without the stream component enabled.
Either way, it's not going to be as low latency as you want. I am eventually trying to achieve the same thing by using jsmpeg, but it will require a custom lovelace card and an addon with ingress at least. I also plan to add it as a capability in frigate so you can avoid decoding the video stream more than once.
At the moment, I don't know if there is really a way to get low latency video feeds in HASS. It is just a fundamentally difficult problem with HTML5.
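For what it's worth, the "separate container" part can be as simple as wrapping such an ffmpeg command in a stock ffmpeg image. The image, mounts, and paths below are illustrative guesses, not the actual setup:

# Run the long-lived ffmpeg outside HASS; the generic camera then just
# points at the resulting HLS playlist. jrottenberg/ffmpeg uses ffmpeg
# as its entrypoint, so the arguments are passed straight through.
docker run -d --name nvr-ffmpeg --restart unless-stopped \
  -v /srv/stream:/srv/stream \
  jrottenberg/ffmpeg:4.1 \
  -rtsp_transport tcp -i rtsp://cam/back \
  -c copy -f hls -hls_time 2 -hls_list_size 5 \
  -hls_flags delete_segments /srv/stream/back.m3u8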
If I turn off the stream component, I am able to get what I consider to be low enough latency streams (~2 seconds) from hass using either the generic or ffmpeg component, when they are displayed as (I believe) MJPEG in the frontend rather than HLS. Framerate suffers a bit, but even 0.5-1 FPS is okay with me. But without the stream component, I have the problem of them not displaying reliably on some devices (generic camera) or using all my CPU (ffmpeg camera component).
It's only when I turn the stream component on that the lag goes up to 10-20 seconds, but the CPU goes way down and they display nicely on all devices.
So, let me ask you: when you run your own ffmpeg container and the generic camera component, do you get lower latency or lower CPU usage than you would running the ffmpeg camera in hass against the same cameras? Roughly how much latency do you get? I'm assuming you still need to use the stream component in hass to deal with the HLS stream, right?
ATM it seems that my only way out is to get a stronger hass machine to run lots of ffmpeg on.
Hi, I have my cameras added with the ONVIF integration. I added this in the camera options and now don't have delay:
-fflags nobuffer -flags low_delay
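For context, those are input options, so on a raw ffmpeg command line they sit before -i: nobuffer reduces the buffering ffmpeg does while probing the input, and low_delay asks the decoder not to queue frames. A sketch with a placeholder URL:

# Placeholder camera URL; the point is where the low-latency options go.
ffmpeg -fflags nobuffer -flags low_delay -rtsp_transport tcp \
  -i rtsp://cam/stream -f null -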
Nice work on the latest RC @blakeblackshear! I'm playing around with the clips and they seem to be working great so far. Does the cache directory clean itself up? Or should I be throwing up a crontab to clean this up periodically?
Decided to give 0.6.0-rc1 a spin, @blakeblackshear. Looks good so far with my 3 cameras. Would it be possible to set objects/filtering in the zone config rather than in the camera config? For example, I'd love to be able to have 2 zones set from the camera on my driveway and only detect a visitor's car in one of them (since my car is in the other "zone") but still detect a person in both zones. I used a mask on the whole camera in 0.5.x, but that meant I couldn't detect a person on the other side of my car.
You can set them in the zone config like this. I think you should be able to accomplish what you want, but maybe I am missing something. You may not need to use filters, since you can use the new MQTT topics to determine when specific object types are in each zone. Also, zones can overlap any way you want. This feature really adds some new possibilities, and you don't need to be so restrictive with masks and filters anymore. Frigate will pick up and track all the object types in the entire camera frame and detect as soon as they cross into any zone while passing the zone specific filters.
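For the MQTT route, watching a zone's per-object topic could look like the line below. The broker host is a placeholder, and the exact topic layout is an assumption based on this thread, so check the docs for your version:

# Hypothetical topic inferred from this thread: frigate publishing
# presence of a person in the front_steps zone.
mosquitto_sub -h mqtt.local -t 'frigate/front_steps/person' -v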
I noticed that in the example config, but I'm having a little trouble configuring zones… I tried configuring my own, then decided to cut/paste the example config and just change the camera name to match one of mine:
zones:
  # driveway:
  #   cameras:
  #     drivewaycam:
  #       coordinates: 1,129,317,66,375,156,640,276,640,360,1,360,4,3
  #     frontcam:
  #       coordinates: 1,357,346,246,3,206
  front_steps:
    cameras:
      drivewaycam:
        coordinates:
          - 545,1077
          - 747,939
          - 788,805
But I get the following in the log:
Camera_process started for drivewaycam: 54
Starting process for drivewaycam: 54
Camera_process started for frontcam: 55
Camera_process started for doorbellcam: 57
Starting process for frontcam: 55
Starting process for doorbellcam: 57
Traceback (most recent call last):
  File "detect_objects.py", line 436, in <module>
    main()
  File "detect_objects.py", line 276, in main
    object_processor = TrackedObjectProcessor(CONFIG['cameras'], CONFIG.get('zones', {}), client, MQTT_TOPIC_PREFIX, tracked_objects_queue, event_queue,stop_event)
  File "/opt/frigate/frigate/object_processing.py", line 80, in __init__
    coordinates = camera_zone_config['coordinates']
KeyError: 'coordinates'
Am I missing something?
The example config is wrong.
Update incoming.
Note that the cameras: key was unnecessary and I removed it. Your config should be:
zones:
  # driveway:
  #   drivewaycam:
  #     coordinates: 1,129,317,66,375,156,640,276,640,360,1,360,4,3
  #   frontcam:
  #     coordinates: 1,357,346,246,3,206
  front_steps:
    drivewaycam:
      coordinates:
        - 545,1077
        - 747,939
        - 788,805
Great. Removed it and restarted, all is good. Now to wait until an object passes into the camera's frame…
Thanks @blakeblackshear!
Wow. I LOVE the flexibility with zones. Super awesome feature!!! Thanks Blake!
What does the "On Connect Called" message in the log mean? All of a sudden I'm getting a ton of them, and some of the MQTT detection topics seem "stuck", i.e. they don't reset back to off even after the object has disappeared from view.
Usually that means you have multiple instances of frigate connected to MQTT with the same client ID.
Indeed it was. Old machine's docker container restarted without me knowing.
Thanks Blake!
Does Frigate publish a jpeg on frigate/<zone_name>/<object_name>/snapshot, or is the snapshot only published at the camera level?