I tried all the camera platforms so you don't have to

Synology, I guess: Surveillance Station.

No, I don’t have a need for snapshots currently. The only snapshots I need are the ones Frigate sends via MQTT to show me who it found outside.
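For anyone curious, here's a minimal sketch of grabbing those Frigate snapshots off MQTT. The broker address, camera name, and output path are placeholders; the topic layout follows Frigate's documented `frigate/<camera>/<object>/snapshot` convention.

```python
import paho.mqtt.client as mqtt

# Hedged sketch: save the JPEG snapshots Frigate publishes over MQTT.
# Broker address, camera name, and file path below are placeholders.
def on_message(client, userdata, msg):
    with open("last_person.jpg", "wb") as f:
        f.write(msg.payload)  # the payload is the raw JPEG bytes

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("192.168.1.2")
client.subscribe("frigate/front_door/person/snapshot")
client.loop_forever()
```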

I can’t promise that my results will apply equally to everyone’s situation, but hopefully they at least give you some idea of the best things to try first.

I do have the stream integration activated, but yes, I don’t use it for this since it works great without it. I use the camera platform here and, if needed, use the stream integration to send video to other devices from HA (for example, pushing a camera stream to a TV on an event trigger).

Thanks for the clarification.

Interesting, I wonder what the difference is. Probably something on the Synology side. Are your streams h.264? Anyhow, the substream-only nature of it unfortunately limits its usability.

@scstraus, really interesting writeup. I did my fair share of testing various camera integrations myself (I also have multiple Hikvisions, although most are 4K, which makes the problem worse), but nothing nearly as exhaustive as what you did. Great read.

I agree, camera integration in HA is highly problematic, and the lag introduced by the stream component is a major issue, going as far as making the entire camera integration useless. But you can’t really blame the component for that. The problem is really inherent to how HLS works; it just wasn’t designed as a real-time, low-latency protocol to begin with. Interesting find about Low Latency HLS, though. That would really help in the future.

Another idea would be to forward the RTSP stream directly, instead of remuxing it to HLS first. But as far as I know, there’s no native browser support for decoding that on the client side, and you would need to install some third-party decoding plugin (like that really sketchy exe the Hikvision web UI always prompts you to install when you try to view the stream). It may be workable to open a fullscreen VLC session when tapping the camera preview card, which would pull the forwarded RTSP from the HA websocket connection. That would probably bring the latency way down, but it might be awkward to integrate.
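A rough sketch of that idea, as hypothetical glue rather than anything HA ships today (the camera URL is a placeholder using Hikvision's usual path layout):

```python
import subprocess

# Hypothetical glue for the fullscreen-VLC idea -- not an existing HA
# feature. VLC pulls the RTSP feed directly from the camera, so the HLS
# remuxing step and its latency are skipped entirely.
subprocess.Popen([
    "vlc", "--fullscreen",
    "--network-caching=200",  # jitter buffer in ms; VLC's default is larger
    "rtsp://192.168.1.64:554/Streaming/Channels/101",  # placeholder URL
])
```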

Another important thing is h.265. A lot of the higher-end camera setups are shifting heavily towards it, as it reduces bandwidth and storage requirements significantly. Unfortunately, as far as I know, there is no support for it in HA yet.

Edit: oh, and there’s another thing: Fully Kiosk has a built-in fullscreen RTSP viewer. Could that be leveraged to pull an RTSP stream forwarded directly by HA, bypassing the stream component / HLS remuxing entirely?


Great writeup. I might finally look at moving my ZoneMinder setup over to something natively using HA, in combination with blakeblackshear’s real-time object detection.

I’m also using Synology to record 24/7 from the cameras. Have you looked at using substreams in HA while recording the main stream in Synology? I’ve been toying with ways to use the substream for analysis and then grab a high-resolution video clip from the Synology recordings.

Yes, it’s h.264.

Love the idea of fullscreen RTSP in Fully Kiosk, didn’t know about that! Going to start investigating!


Yes, I’ve tried lots of combinations of substream and main stream between Frigate, Synology, and HA. TBH none of them had a major effect, other than substreams coming up a bit quicker and using less CPU in the cases where I had to transcode them. With h.264, the camera doesn’t seem to mind much which stream you take. It’s only MJPEG from the camera that causes problems.

These days I just use whatever stream has the best resolution and aspect ratio for what I want to do with it.

Hi, forgive my noobness to HA here.
I’m planning my setup, so I find this thread very interesting.
I’ll most probably host HA as a VM on a QNAP NAS, which should also record the camera streams.
Now, I haven’t decided yet whether I will go with UniFi or generic Chinese cams (those clunky, plasticky big boxes honestly look awful in a newly renovated apartment, imho).

Now, regarding the tech, I’ve got some questions:
Is it possible to open iframes to the cameras directly? That would open a socket to the camera page itself, bypassing any additional component.

Another interesting thing to do would be to stream the ZoneMinder/Synology/QNAP/Shinobi/iSpy/what-have-you overview page straight to HA.
Is that doable/worth it?

Incoming

Go Hikvision turret cameras. They are nice and small and quite well priced.

Yes, but iframes will only work with the same encryption you view HA with. I.e., if you view HA over HTTPS, then the cam feed needs to be served over HTTPS too; or they both need to be plain HTTP.


We’re all in it together.

Thanks for doing all this work! Really appreciated.

I think part of the delay you see when the stream component is enabled is a deliberate feature, not completely a bug. I notice that when popping up the “live” view of a camera, you can drag the scrub control at the bottom of the window all the way to the right and end up with a shorter delay, maybe 5 seconds? I’m not certain, and it might vary with the I-frame interval as well.

It would be great if you could skip this if you didn’t care about seeing recent history. I believe the buffer is there so that upon some event (motion detection, etc.), recording can effectively start 10 seconds in the past. That’s a really great feature, but perhaps not what’s expected in a live camera view.
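For what it’s worth, that buffer is exposed through the lookback field of HA’s camera.record service. A hedged example of calling it over the REST API; the host, token, entity, and file path are all placeholders:

```python
import requests

# Hedged example: trigger HA's camera.record service over the REST API.
# Host, token, entity, and path below are placeholders for your setup.
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"
requests.post(
    "http://homeassistant.local:8123/api/services/camera/record",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "entity_id": "camera.front_door",
        "filename": "/media/front_door.mp4",
        "duration": 30,   # seconds to record after the call
        "lookback": 10,   # seconds of already-buffered video to prepend
    },
    timeout=10,
)
```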

I opened a “WTH does a streaming camera display live video 15 seconds delayed by default” post to this effect as well.


As @HeyImAlex points out, HLS will have latency by its nature. We are taking a stream and packaging it up into segments which get served over HTTP. Since segments are discrete, there will be a latency of at least one segment length. Then the player itself might buffer another few segments. Our implementation of HLS already uses a non-standard fragment duration (Apple recommends 6 seconds; we use 2 seconds). It will probably be hard for us to get to less than ~4-5 seconds of latency using HLS. That might be reasonable if you’re viewing the stream remotely. If you’re local and you want to reduce latency, you can probably use other methods which may have less latency but use more bandwidth (MJPEG) or require other TCP/UDP ports (just use the original RTSP stream).
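To make the segment-latency point concrete, here is a minimal sketch (not the actual stream component code) of what any HLS packager has to do with a demuxed RTSP feed, using PyAV; the camera URL is a placeholder:

```python
import av

SEGMENT_SECONDS = 2  # the non-standard duration mentioned above

# Placeholder URL; an illustration, not the stream component itself.
container = av.open("rtsp://192.168.1.64:554/Streaming/Channels/101",
                    options={"rtsp_transport": "tcp"})
video = container.streams.video[0]

segment, segment_start = [], None
for packet in container.demux(video):
    if packet.dts is None:  # skip flush packets some streams emit
        continue
    t = float(packet.dts * video.time_base)
    if (packet.is_keyframe and segment_start is not None
            and t - segment_start >= SEGMENT_SECONDS):
        # Only now can the finished segment be written out and advertised
        # in the playlist -- that alone is ~2 s of latency, before the
        # player's own buffering (a few more segments) is added on top.
        segment, segment_start = [], None
    if segment_start is None:
        segment_start = t
    segment.append(packet)
```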

The Low Latency HLS link you’ve shared is not just about tweaking/tuning. It’s an extension to HLS which involves a different encoding/HTTP pipeline (it needs to produce encoded chunks before the whole segment is done muxing/encoding, and it uses HTTP/2 to send these chunks as soon as they are available). Edit: the PyAV package we use to access the ffmpeg libraries doesn’t make it easy to do the chunked encoding part, but we may be able to work around that. A bigger issue is actually HTTP/2, since it doesn’t look like aiohttp will support that anytime soon, although the HTTP/2 requirement may have been relaxed. Anyway, as @hunterjm pointed out, LL-HLS player/browser support is not really there either. We may revisit this sometime in the future, but probably not anytime soon.

Note: one significant source of lag can come from the use of H264+/H265+. These non-standard encoding modes reduce bandwidth by significantly reducing the number of keyframes sent. This ends up increasing the segment duration, which increases latency. Also, since they cause segment durations to vary and differ from the target segment duration, this will affect things like the lookback period in stream recordings. We should probably recommend against using these if latency is of any concern.
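If you want to check what your camera is actually doing, a hedged sketch like this can estimate the keyframe interval with ffprobe (placeholder URL; newer ffprobe builds expose the field as pts_time, older ones as pkt_pts_time):

```python
import json
import subprocess

# Estimate the keyframe (GOP) interval of a stream, to see whether a mode
# like H264+ has stretched it. URL is a placeholder for your camera.
out = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_frames", "-show_entries", "frame=pict_type,pts_time",
     "-read_intervals", "%+30",  # sample the first 30 seconds
     "-of", "json", "rtsp://192.168.1.64:554/Streaming/Channels/101"],
    capture_output=True, text=True, check=True)

times = [float(f["pts_time"]) for f in json.loads(out.stdout)["frames"]
         if f.get("pict_type") == "I" and "pts_time" in f]
gaps = [b - a for a, b in zip(times, times[1:])]
if gaps:
    print(f"~{sum(gaps) / len(gaps):.1f} s between keyframes")
else:
    print("fewer than two keyframes seen; try a longer sample interval")
```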


Thanks, Justin. It’s always really valuable to hear the real technical reasons behind the issues; it makes it much easier to understand what’s actually happening and respond to it in the right way in our configs.

When you say “just use the original RTSP stream”, is there a way to use the stream component while still pointing at the original RTSP stream? The only alternative I know to the stream component is to repackage as MJPEG, and that takes a lot of CPU for an h.264 stream.

I meant maybe look for another way to display the RTSP locally, such as what @HeyImAlex suggested with Fully Kiosk.
The problem with latency comes from trying to get something from a packetized stream (RTSP) into a format which we can serve over HTTP (HLS). The latency comes because we have to batch everything into segments before we send it. We can get around this by using MJPEG since each picture can be like its own segment, but as you noticed, this takes a lot of CPU and bandwidth (it essentially sends a stream of pictures instead of a compressed video - you lose all the bandwidth benefits of compression across time and you also have to use CPU to convert from video to pictures). Going straight to the RTSP source gets around the latency limitations imposed by trying to batch everything to send over HTTP - you have one connection continuously open so you don’t have to send everything in chunks.
Low latency HLS is probably the right path for us, but that will take quite some time.
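As an illustration of that MJPEG trade-off, here is roughly what an ffmpeg-based MJPEG proxy has to do (a sketch, not HA’s exact internals; the URL and output path are placeholders):

```python
import subprocess

# Every h.264 frame has to be decoded and re-encoded as a standalone JPEG,
# which is where the CPU and bandwidth cost of MJPEG comes from.
subprocess.run([
    "ffmpeg", "-rtsp_transport", "tcp",
    "-i", "rtsp://192.168.1.64:554/Streaming/Channels/102",  # placeholder
    "-t", "10",                   # stop after 10 s so the sketch terminates
    "-c:v", "mjpeg", "-q:v", "5", # full transcode: decode h.264, encode JPEGs
    "-f", "mpjpeg", "out.mjpeg",  # multipart stream, one JPEG per frame
], check=True)
```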

Great writeup. I also have a couple of Hikvision cameras, most of them 2 MP, and I’ve pretty much stuck with the generic camera serving the latest snapshot once every second.

It works for a couple of seconds at full resolution and then goes blank.

The ffmpeg camera never worked for some reason or other on my Hikvision. Will give ONVIF a try.

I also use Frigate for person detection but still rely on normal snapshots to take pics of other events (such as the gates outside opening).

Also @scstraus, I did some further research into displaying RTSP natively on Fully. Sadly, it doesn’t work too well. Fully uses the WebView RTSP decoder supplied with Android, which is apparently very picky about what kind of streams it accepts, and it doesn’t seem to like my Hikvision streams; it just flat-out refuses to open them. With an old D-Link DCS-2330-something I still had lying around, it works perfectly. Go figure. Then again, ffmpeg also seems to have a lot of issues with Hikvision streams; I often get image corruption when decoding recorded footage pulled as RTSP from the NVR. Not too sure who is to blame here.

Anyway, I got in touch with Alexey from Fully, and this is what he had to say (hope he doesn’t mind my quoting him here):

Basically, opening an RTSP URL in browser should do the job if the Play Videos in Fully is enabled. However the RTSP stream format may be incompatible with those that are supported by Android. For my experience some camera streams are working, others not.

Supported media formats | Android media | Android Developers


The only web-native, true real-time streaming system at the moment is WebRTC.

In theory it would be possible to have Home Assistant extract the raw H264 data from the RTSP stream and repackage it as WebRTC for a browser to consume. That would get very low latency and fairly low server-side CPU (with the mandatory encryption accounting for most of it). The big problem is that WebRTC is a peer-to-peer-style protocol over UDP, and it will likely need at least a STUN server in remote-access cases.
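A very rough sketch of that idea using the third-party aiortc library (not something HA ships). Note that aiortc decodes and re-encodes tracks by default, so a true raw-H264 passthrough would need more work, and the signaling/STUN side is omitted; the camera URL is a placeholder:

```python
import asyncio

from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer

async def offer_camera():
    pc = RTCPeerConnection()
    # MediaPlayer uses ffmpeg under the hood to pull the RTSP feed.
    player = MediaPlayer("rtsp://192.168.1.64:554/Streaming/Channels/101",
                         options={"rtsp_transport": "tcp"})
    pc.addTrack(player.video)
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # pc.localDescription.sdp now has to reach the browser through some
    # signaling channel (e.g. a websocket), and the browser's answer fed
    # back via pc.setRemoteDescription() -- the part that makes this complex.
    return pc

asyncio.run(offer_camera())
```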

The complexity of setting all that up would be pretty remarkable.
