I think I might go back to CPU version for now… Which would be the best commit to pull to do that?
I’m not sure what would be different about the capture process. You are running it on different hardware right? I would checkout the v0.0.1 tag.
Originally I had tried the Coral version on the same laptop where I had the CPU version running successfully, in an Ubuntu Server instance under VMware. I made a clean install in exactly the same way and ran the Coral version, but had problems similar to those I am having now on my Synology, which I moved to this week: it would work for some time before it stopped sending the streams with bounding boxes or best_person…
I will try a few more tests, like running only a single landscape feed with no mask off the low-res sub-feed, and see how long it can run like that on the Synology. If that doesn’t work, I’ll run the same test again on the VMware instance before giving up and going back to the CPU version for now.
What type of cameras are you running? I’m running on Synology and seeing the same behavior, but only with my Hikvision cameras; my Axis cameras are running stable and reliable. The frigate camera endpoints stay up for the Axis, but the Hikvision goes offline after ~2 days. I’m thinking I may set something up to restart the Docker container nightly to prevent this from happening. I’m also not running a mask.
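A nightly restart like the one described above could be scheduled with a simple cron entry. This is just a sketch; the container name `frigate` and the docker binary path are assumptions you would adjust for your setup (on Synology, the binary typically lives at /usr/local/bin/docker):

```shell
# Restart the Frigate container every night at 03:00.
# "frigate" is a placeholder for your actual container name.
0 3 * * * /usr/local/bin/docker restart frigate
```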
Okay, so it’s not only me. Good to have confirmation of that. I am also on Synology + Hikvision. The Hikvision model is DS-2CD2620F; the Synology is a DS918+. I have tried both streams from the camera as well as the stream from the Synology, and they all fail after 12-24 hours.
The CPU version was able to run for weeks without problem… So that’s maybe an answer for now. Maybe the new graphics libraries will fix it in the Coral version.
I wonder if we would have better luck with a different codec or other stream settings?
Do you think there would be any way I could send you a day of recorded raw camera feed and you could stream it back into your dev environment and get some idea what might be happening here? Or maybe we should wait for the new video libraries before doing further analysis?
I’m running a DS1819+ with a Hikvision DS-2CD2142FWD and an Axis M3104 camera.
Before the Hikvision went out I saw a bunch of these in the logs:
last frame is more than 2 seconds old, restarting camera capture...
Terminating the existing capture process...
Creating a new capture process...
Starting a new capture process...
Opening the RTSP Url...
and
[h264 @ 0x1bed8e0] error while decoding MB 34 11, bytestream -16
[h264 @ 0x1cdf860] error while decoding MB 34 11, bytestream -8
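The “last frame is more than 2 seconds old” messages above come from a frame-age watchdog that restarts the capture process when frames stop arriving. A minimal sketch of that pattern in Python, where the class name, threshold, and methods are assumptions for illustration rather than frigate’s actual implementation:

```python
import time

STALE_SECONDS = 2  # assumed threshold, matching the log message above


class CaptureWatchdog:
    """Restart the capture process when frames stop arriving."""

    def __init__(self):
        self.last_frame_time = time.monotonic()
        self.restarts = 0

    def frame_received(self):
        # Called by the capture loop whenever a new frame arrives.
        self.last_frame_time = time.monotonic()

    def check(self):
        # Called periodically; triggers a restart if the feed went stale.
        if time.monotonic() - self.last_frame_time > STALE_SECONDS:
            self.restart_capture()
            return True
        return False

    def restart_capture(self):
        # Placeholder: terminate the old process and reopen the RTSP URL.
        self.restarts += 1
```

A stalled decoder (as suggested by the h264 errors above) would stop calling frame_received, so check() would keep firing restarts, which matches the repeating log lines.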
My cameras are old, so I wouldn’t be shocked if I have a hardware issue with them. I was planning to replace them all soon with Axis or Dahua.
Here is the stream setting for the Hikvision. I used the sub stream instead of the 4MP main stream and was able to get CPU under 10% (it was previously 30%):
I think it would be unlikely to generate the same errors. Let’s see how the ffmpeg version works when I am done. That will give way more options for customizing the parameters. I have been swamped at work for the past month, but hoping to finish out the next release soon.
I tried with this:
and this:
and this (using the share stream path capability of Synology):
Same results on both machines I tried it on, very similar to what you report, except without the “error while decoding” errors. My cameras are less than 2 years old and directly connected with Ethernet cables, so I don’t think it’s a hardware problem; I think it’s just a compatibility issue.
@blakeblackshear, okay, will hold tight for ffmpeg. Thanks. In the meantime, do you suppose it would be happier with a fixed bit rate, or with something other than h.264?
I use a fixed bitrate h264 stream on my Dahua cameras.
Running another test with fixed-bitrate h.264.
@blakeblackshear Is there a way to use this with a Coral attached to an RPi and use the RPi’s own camera? I want to place multiple Pis around the house, but don’t want video to ever leave the Pi or be on the network for privacy (just an MQTT signal of presence).
//Tomi B.
I just pushed a new beta release. I marked this as a beta because it may not work for all cameras and/or hardware. The new image is available with docker pull blakeblackshear/frigate:0.2.0-beta, or you can build it yourself.
- Video decoding is now done in an FFMPEG sub process which enables hardware accelerated decoding of video streams. For me, this reduced CPU usage for decoding by 60-70%.
- New take_frame option to reduce framerates within frigate when the camera doesn’t support it
- Added the area of the object to the label to help determine min_person_area values
- Greatly reduced Docker image size, from ~2GB to 450MB
- Added support for custom Odroid-XU4 build (unfortunately, I wasn’t able to get the Coral performance to be good enough for me with this board)
- Latest Coral drivers from Google
- Added a benchmarking script to test inference times
- Added some comments to better document config options
That’s possible. You would need to alter the ffmpeg parameters in the latest version to capture the video feed from the camera and convert it to rawvideo rgb24.
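As a rough illustration of what that conversion looks like, the following builds an ffmpeg invocation that decodes a feed and writes raw rgb24 frames to stdout. The URL and resolution are placeholders, and this is a sketch of the general technique, not frigate’s actual command:

```python
def rawvideo_command(url, width, height):
    """Build an ffmpeg command that outputs raw rgb24 frames to stdout."""
    return [
        "ffmpeg",
        "-i", url,                    # input (RTSP URL or camera device)
        "-f", "rawvideo",             # no container, just raw frames
        "-pix_fmt", "rgb24",          # 3 bytes per pixel
        "-s", f"{width}x{height}",    # scale output to a known frame size
        "pipe:",                      # write to stdout
    ]


def frame_size(width, height):
    # One rgb24 frame is width * height pixels at 3 bytes each,
    # which tells you how many bytes to read per frame from the pipe.
    return width * height * 3


# Hypothetical usage with subprocess:
#   import subprocess
#   cmd = rawvideo_command("rtsp://camera.local/stream", 640, 480)
#   proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
#   frame = proc.stdout.read(frame_size(640, 480))
```

Reading fixed-size chunks from the pipe is what makes rawvideo convenient here: every frame is exactly the same number of bytes, so no parsing is needed.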
Hi, thanks a lot for your hard work. I have tried the FFMPEG image with all the documented parameters on an Intel i5-4200U and can see only a little change in CPU usage. Do you have any commands handy to find the supported CPU parameters?
Frigate outputs the ffmpeg command to the logs on startup. You can start with that. This will only reduce CPU usage for decoding the video stream from the camera. If you are using a low fps and low resolution stream with a single camera the impact won’t be that much. If you are decoding many 1080p streams at higher FPS, it should make a big difference.
Working great for me so far. I’ve done some testing on all of my cameras and it is flawless!
Works great - thanks for posting. Seeing a slight reduction in resources.
Started work on 0.2.1 this morning. I just pushed a new branch up that sends snapshots over MQTT. This should allow much faster notifications when you want to send an image, as you no longer have to wait for the camera to poll frigate for a new image. I also included an example for an automation to send a telegram notification. https://github.com/blakeblackshear/frigate/tree/mqtt_camera
Realized that I was overcomplicating it. There is no need to save a snapshot with homeassistant. I can just pull best_person.jpg directly from frigate. I updated the automation example.