It is awesome that we can stream video via the camera component now, but let’s face it, HLS is a poor fit for livestreaming video (security cameras). HLS splits the video into segments that each start on a keyframe, and the player buffers roughly three segments before it starts playback. On my Unifi Protect cameras, that means I have a ~20 second delay when I load up a camera. If I manually jump ahead, I can get it down to ~5 seconds, but that is still a noticeable delay.
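To put rough numbers on where that delay comes from (a back-of-the-envelope sketch, not a measurement from any specific camera; the segment length depends on the camera's keyframe interval):

```python
# Rough illustration of HLS startup latency. All numbers are examples.
keyframe_interval_s = 6   # camera emits a keyframe every ~6 s (assumed)
segment_duration_s = keyframe_interval_s  # HLS segments start on a keyframe
buffered_segments = 3     # players typically preload ~3 segments

startup_delay_s = segment_duration_s * buffered_segments
print(f"~{startup_delay_s} s behind live before playback even starts")
# With these assumed numbers that's ~18 s, which is right around the
# ~20 s I see on Unifi Protect.
```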
The issue was even mentioned in the original RFC for adding streaming capabilities to the camera component. It would be really great if we could get something new integrated into core that RTSP / custom integrations can use to livestream in real time instead of on a delay.
Some nice-to-haves would be:
- The ability to use a playback method with lower latency than HLS, such as Low Latency HLS or WebRTC.
- The ability to transcode video and/or audio on the fly when necessary so that video and audio playback is supported by the HA frontend (Unifi Protect cameras output H.264 video with AAC audio, which is great for HLS, but the audio does not work for WebRTC).
- Stream pooling. Because transcoding is expensive, a stream connection should be reused when possible to reduce load on the server (see the sketch after this list).
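To make the transcoding and pooling points concrete, here is a purely illustrative sketch of what I mean (none of this is existing Home Assistant code; the ffmpeg flags, the restream URL, and the codec list are my assumptions): one ffmpeg process per camera source, shared by every viewer, with the audio re-encoded to Opus only when the source codec is not something WebRTC can play.

```python
import subprocess
from dataclasses import dataclass

# Hypothetical sketch of stream pooling with optional audio transcoding.
# The ffmpeg flags, restream URL, and codec list are illustrative
# assumptions, not Home Assistant APIs.

WEBRTC_AUDIO_CODECS = {"opus", "pcmu", "pcma"}  # codecs WebRTC can play back


@dataclass
class PooledStream:
    process: subprocess.Popen
    consumers: int = 0


class StreamPool:
    """Reuse one ffmpeg process per camera source instead of one per viewer."""

    def __init__(self) -> None:
        self._streams: dict[str, PooledStream] = {}

    def acquire(self, rtsp_url: str, audio_codec: str) -> PooledStream:
        stream = self._streams.get(rtsp_url)
        if stream is None:
            # Only pay the transcoding cost when the source audio
            # (e.g. AAC from Unifi Protect) is not WebRTC-compatible.
            audio_args = (
                ["-c:a", "copy"]
                if audio_codec in WEBRTC_AUDIO_CODECS
                else ["-c:a", "libopus"]
            )
            cmd = [
                "ffmpeg", "-rtsp_transport", "tcp", "-i", rtsp_url,
                "-c:v", "copy", *audio_args,
                "-f", "rtsp", "rtsp://localhost:8554/restreamed",  # made-up output
            ]
            stream = PooledStream(process=subprocess.Popen(cmd))
            self._streams[rtsp_url] = stream
        stream.consumers += 1
        return stream

    def release(self, rtsp_url: str) -> None:
        stream = self._streams.get(rtsp_url)
        if stream is None:
            return
        stream.consumers -= 1
        if stream.consumers <= 0:
            stream.process.terminate()
            del self._streams[rtsp_url]
```

The important part is the reference counting: a second viewer attaches to the already-running process instead of spawning a second transcode.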
I know that a very good point was made in the original RFC for streaming in the camera component:
> We are a home automation platform, not a video management platform.
And so, to that end, this might be better off as an official add-on instead of inside core. If the add-on is installed, the camera component would simply be able to use it. Maybe that means we could find an existing video management solution to integrate as an add-on, which core could then use to offload all of the streaming.
Existing third-party integration that “kind of” works:
I am not a huge fan of how it currently works, though: it downloads random executables from the Internet, runs them on the system, and has a nasty CPU issue that can bring down your whole Home Assistant instance. Perhaps the correct “solution” here is to turn that integration into a web service add-on that starts up the streams and/or transcodes video/audio when necessary to feed them to the WebRTC player.
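As a rough idea of what such an add-on could look like (a hypothetical sketch only; the endpoint, payload, and port are made up, and it reuses the StreamPool class from the pooling sketch above), it would just be a small web service that starts or reuses a stream on demand and hands something back for the WebRTC player to connect to:

```python
from aiohttp import web

# Hypothetical "stream manager" add-on sketch. Endpoint names, payload
# fields, and the port are assumptions, not an existing add-on API.

pool = StreamPool()  # the pooling sketch from the earlier example

routes = web.RouteTableDef()


@routes.post("/api/streams")
async def start_stream(request: web.Request) -> web.Response:
    body = await request.json()
    stream = pool.acquire(body["rtsp_url"], body.get("audio_codec", "aac"))
    # A real add-on would return a WebRTC offer/answer or a signaling URL
    # here; this is just a placeholder response.
    return web.json_response({"consumers": stream.consumers})


app = web.Application()
app.add_routes(routes)

if __name__ == "__main__":
    web.run_app(app, port=8555)
```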
EDIT: beta LL-HLS support was added in 2021.10. I am still going to leave the feature request open, but for WebRTC only, since LL-HLS still has a few seconds of latency.