Hey everyone.
I’ve been thinking about this for a couple of weeks now.
The problem is that, natively, Frigate can’t use RTSP streams from wifi and battery-powered Eufy cameras. The way Eufy makes them work, the camera or homebase only sends the feed when motion is detected. It’s been well documented that there is no out-of-the-box way to integrate those streams with Frigate (I remember reading dozens of questions, articles, and Stack Overflow threads about it, with the same conclusion every time: Frigate is not meant for that).
Now, what I’ve been researching is a way to have something ingest the streams from the cameras, “fill in the gaps” when no motion is detected, and constantly stream something to Frigate.
I do have something working, but it’s not ready for immediate consumption yet (it probably needs some packaging).
Here is how it works, at a high level: Eufy → ffmpeg → Nginx w/ RTMP module → Frigate
Basically, Frigate reads from nginx, which acts as an RTMP proxy, and ffmpeg is used to send either a blank stream or the camera’s stream when it’s available. The switching is done by a simple (Ruby) script that checks the camera feed every second.
Detailed steps to reproduce:
Run nginx: docker run -d -p 1935:1935 --name nginx-rtmp tiangolo/nginx-rtmp
Run the Ruby script (for testing purposes it alternates between two static color videos and tries to connect to the camera’s feed every second; a simplified sketch of it follows)
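To give an idea of what the script does, here is a stripped-down, untested sketch of the switching loop. The camera and RTMP URLs are placeholders, and on older ffmpeg builds the RTSP timeout flag is called -stimeout instead of -timeout:

```ruby
CAMERA_URL = "rtsp://192.168.1.50/live0"        # hypothetical Eufy feed URL
RTMP_URL   = "rtmp://localhost:1935/live/eufy"  # nginx-rtmp endpoint

# ffprobe exits 0 only if it can open the stream within the timeout.
def camera_live?
  system("ffprobe -v quiet -rtsp_transport tcp -timeout 1000000 '#{CAMERA_URL}'")
end

# Publish either the real feed or a solid-color filler, re-encoded to the
# same format either way so the RTMP stream never changes shape.
def spawn_ffmpeg(live)
  input = live ? "-rtsp_transport tcp -i '#{CAMERA_URL}'" :
                 "-f lavfi -i color=c=black:s=1280x720:r=15"
  Process.spawn("ffmpeg -re #{input} -c:v libx264 -preset ultrafast " \
                "-tune zerolatency -an -f flv '#{RTMP_URL}'")
end

state, pid = nil, nil
loop do
  live = camera_live?
  if live != state                        # the feed appeared or disappeared
    if pid
      begin
        Process.kill("TERM", pid)
        Process.wait(pid)
      rescue Errno::ESRCH, Errno::ECHILD  # ffmpeg already exited
      end
    end
    pid = spawn_ffmpeg(live)
    state = live
  end
  sleep 1
end
```

The key point is that something is always being published to the RTMP endpoint, so Frigate’s own ffmpeg never sees the stream drop.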
How responsive is this when watching the livestream? We could incorporate this idea into my eufy security integration.
Let’s say you are watching the live stream in Frigate at second t and motion is triggered at t+1; how long does it take until the RTSP stream reaches Frigate?
Based on my observations, I’d say around 1 sec (take it with a grain of salt for now). It doesn’t take much to replicate, but I could try to record something tomorrow if you want.
(As one of the Frigate contributors) one thing that needs to be kept in mind is that Frigate keeps an averaged background for motion detection, so it knows where to look for objects. In Frigate 0.13, when there is a major change in the background (for example, when a camera switches between color and IR mode), Frigate goes into calibration mode, where it waits until motion settles before looking for objects.
In this case, if by “blank frame” you mean a single color, then any time the camera sees motion and the feed switches to the camera stream, Frigate will not run detection for at least a second or two, which may cause unexpected issues.
There is a similar project for aarlo cameras which sends the last frame in between motion events, so Frigate keeps running with a background that looks like the camera frame.
Nice idea @crzynik, I have access to the latest event image, so we can use it for idle frames.
On the other hand, this might not be an issue for eufy camera users, given that the cameras usually have multiple types of sensors to catch motion, cars, people, dogs, etc.
Thanks for sharing this @crzynik , that’s exactly the kind of feedback I was expecting.
I could definitely look in there and see how the repo owner sends the latest image with ffmpeg, or, as @anon63427907 said, we could also take it from the events.
I’ll record something so people can see how reactive it is, and see if I can make the changes to send the still image directly (a rough sketch of what I have in mind is below).
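Roughly what I have in mind for the idle frames, building on the script above (untested; the snapshot path, Frigate host, and camera name are placeholders — Frigate serves the most recent image of a camera at /api/&lt;camera&gt;/latest.jpg):

```ruby
require "open-uri"

SNAPSHOT = "/tmp/eufy_last_frame.jpg"   # hypothetical path

# Pull the latest image for the camera from Frigate's API; the host and
# camera name ("eufy") are made up.
def refresh_snapshot
  URI.open("http://frigate.local:5000/api/eufy/latest.jpg") do |img|
    File.binwrite(SNAPSHOT, img.read)
  end
rescue StandardError
  # keep the previous snapshot if Frigate is unreachable
end

# Loop the snapshot as the idle input when we have one, otherwise fall
# back to the blank color source.
def idle_input
  if File.exist?(SNAPSHOT)
    "-loop 1 -framerate 15 -i '#{SNAPSHOT}'"
  else
    "-f lavfi -i color=c=black:s=1280x720:r=15"
  end
end
```

idle_input would replace the lavfi color source in the earlier script, so the background Frigate averages between events looks like the actual scene instead of a flat color.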
Here is the recording. I’d say it takes roughly 1 sec from when the ffmpeg command is sent to seeing the real-time update in Frigate’s UI. And when I switch the camera feed on and off in the app (not visible in the recording), maybe up to 2 secs (some of that is Eufy propagating the update from the app to the camera itself or the homebase, I’m guessing).
Thanks for sharing. Is it possible to have a recording of the moment the camera is triggered?
I am personally a bit worried about the moment when the camera is being prepared for streaming but isn’t ready yet; ffmpeg might hang for a couple of seconds before it starts reading the stream or moves on to the next frame.
One more thing: what would the CPU consumption be if you ran this on a Pi or some other lightweight hardware?
I could send the recording to you privately, as I don’t want to show the outside of my home or spend a lot of time moving the camera around, etc. How can I reach you?
CPU consumption
No idea, for now I’m running it on my 2016 MacBook Pro. I think someone’s going to have to try it on a Pi to see how it performs.
To improve reactivity, I could also explore keeping a continuous ffmpeg process running and making the switch via a pipe. I think it’d be faster because it wouldn’t have the overhead of starting ffmpeg each time the script detects the camera is live. Maybe I can try that next!
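An untested sketch of that idea: a single long-lived ffmpeg reads raw frames from stdin and publishes to RTMP, so “switching” is just a matter of writing different bytes into the pipe. The resolution, frame rate, and URL are assumptions, and next_camera_frame is a stub standing in for a real decoder:

```ruby
W, H, FPS = 1280, 720, 15
FRAME_BYTES = W * H * 3            # rgb24: 3 bytes per pixel
BLANK = "\x00" * FRAME_BYTES       # one black frame

# Stub: the real version would return one decoded rgb24 frame while the
# camera is live, and nil while it is idle.
def next_camera_frame
  nil
end

# One ffmpeg for the lifetime of the script: raw frames in, RTMP out.
pipe = IO.popen(
  "ffmpeg -f rawvideo -pix_fmt rgb24 -s #{W}x#{H} -r #{FPS} -i - " \
  "-c:v libx264 -preset ultrafast -tune zerolatency -an " \
  "-f flv 'rtmp://localhost:1935/live/eufy'", "w")

loop do
  pipe.write(next_camera_frame || BLANK)  # switching = writing different bytes
  sleep 1.0 / FPS
end
```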
I’m still working on this. I had to reimplement part of the RTSP protocol (yay…) and something to receive RTP packets. However, I’m not receiving any data from the camera after going through the whole RTSP handshake (DESCRIBE/SETUP/PLAY). Any thoughts?
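For reference, this is the kind of SETUP exchange involved, plus the one thing I still need to rule out: if SETUP requests UDP transport, the RTP packets go to separate UDP ports, and NAT or a firewall can silently drop them even though every RTSP reply is a 200 OK. Requesting TCP-interleaved transport keeps RTP on the RTSP socket itself. The IP and track path below are made up:

```ruby
require "socket"

sock = TCPSocket.new("192.168.1.50", 554)  # hypothetical camera address

# Ask for TCP-interleaved transport so RTP shares the RTSP connection
# instead of going to separate UDP ports (in a real session this comes
# after OPTIONS/DESCRIBE).
sock.write(
  "SETUP rtsp://192.168.1.50/live0/track1 RTSP/1.0\r\n" \
  "CSeq: 3\r\n" \
  "Transport: RTP/AVP/TCP;unicast;interleaved=0-1\r\n" \
  "\r\n")
puts sock.readpartial(4096)

# After PLAY, each RTP packet arrives on this same socket framed as:
#   0x24 ('$') | 1-byte channel id | 2-byte big-endian length | RTP payload
```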