I tried all the camera platforms so you don't have to

Hi Pergola,

I just configured my HA to do RTSP camera feeds and even got it to show up on my Nest Hub Max, except the problem I am now facing is that the interface on the Nest Hub Max is showing this message:

“error: unable to connect to the home assistant websocket api”

Do you have any thoughts on why this could be the case?

Thanks!

No idea, mine just works, using Nabu Casa here.
Maybe something with your internal/external URL?
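If it is the URL side of things, the part I’d double-check is the network settings in configuration.yaml; a minimal sketch, with placeholder addresses you’d swap for your own setup:

homeassistant:
  # URL remote clients (and, I believe, cast devices) use to reach the websocket API
  external_url: "https://example.ui.nabu.casa"   # placeholder
  # URL used on the local network
  internal_url: "http://192.168.1.10:8123"       # placeholder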

I do try to contribute to the docs where I can. I haven’t found a case where they didn’t have the configuration example for cameras (because I usually fix those when I see them); if you have a specific one, let me know. As for many of the other things on this thread, I’m not sure they belong in the docs because honestly I don’t understand what’s happening behind the scenes well enough to make a definitive statement as to how these things work. I am just extrapolating based on the testing I’ve done and sharing my hypotheses here. I’d love to get confirmation from the devs on some of these suspicions, and then I’d feel comfortable adding them to the docs.

Oh man, now I’m really getting jealous. Can we just drop the code in from openHAB?

Yes it can, but unfortunately most of us are running Supervised in Docker, so even if we do go through the considerable work of compiling for our hardware, it will be overwritten every time we upgrade. I would like to see some way we could enable this as an option within the Supervisor and have it handle that part.

Maybe an add-on could provide a custom-compiled ffmpeg?


In terms of actual video streaming (i.e. not MJPEG with high refresh rates), on basically everything other than iPhones (and older pre-iPadOS iPads), it is not hard to do better than normal HLS. The trick is the concept that LL-HLS calls partial segments and CMAF calls chunks. These are basically much smaller fragments (moof+mdat) that don’t necessarily start with or contain a keyframe. Chunks can be as small as 1 frame’s worth of samples. For LL-HLS you want them bigger, since one HTTP request is required per LL-HLS “partial segment” (unless doing fancy things with byte ranges). Officially, chunks/“partial segments” are required to consist of a subset of the video frames from a full-sized segment.

I’ve done some experimentation using a slightly modified version of the stream component, set up to generate chunks of 100ms in duration (or at least one frame of video, for streams slower than 10 fps), and send them to the browser over websockets. The browser uses Media Source Extensions to pass this into a video tag (which is why iPhones and older iPads won’t work, since Apple deliberately disabled MSE on those devices). Using that, I was able to get lower latency in a browser than when using VLC to watch the RTSP stream with default settings (by default, VLC uses a 1 second buffer). Under this technique, latency is also independent of the keyframe interval, which only influences how long it takes to load the stream.

My experimentation was only with 1 camera at a time, and the code I used is not really suitable for merging into the stream component, since I took the easy path of breaking HLS support while testing.
To avoid breaking HLS support, I would need to create both chunks and segments, which is needed for LL-HLS anyway. Per the new HLS spec, it is legal for a segment to consist of concatenated chunks [0], so this is not particularly difficult.

To do this right we would just need to render to smaller segments (which we label as chunks) based only on time (ignoring keyframes). On top of that, track when we would want a new full segment to begin. At that point, as soon as the next keyframe is seen, force a new chunk to start, even if it is “too soon” per the 100ms timeframe. Keep track of which chunks belong to a complete segment.

When requesting a full segment via HLS, just serve up the concatenation of the chunks that occurred in that segment (without the “initialization section” of each chunk of course). Later when LL-HLS support is added, the chunks would become the “partial segments”.

For a low latency websocket connection, we would simply push chunks as they are generated. The first chunk pushed would include the initialization segment, all others would omit it.

Footnotes:
[0] If one is using sidx boxes, a top level index for the whole segment really ought to be made that points to the chunk level indexes, even though it is not strictly required.


Any chance to try it as a custom component?

Not currently. It is really hacky code at the moment, and I was using a custom web page as a front-end, since Lovelace does not have support for this. This was more a test to see if low latency without LL-HLS was even possible, at least on non-iPhones, and it seems like the answer is yes.

If I find myself with some extra time soon, I might try to make a slightly more polished version that could be testable as a custom component. The front-end part is trickier though.


A million thumbs up for the ONVIF integration. It’s the only platform which reliably integrates HA with my Lorex NVR and cameras (unfortunately purchased before I got into HA, or I probably would have taken a different path). My current limitations lie entirely with my lowly RPi 3.


Yes, that was my conclusion when I was trying out Low Latency HLS and DASH: creating the streams is the easy part. The players (Google/Chromecast and browsers) still want to buffer the stream, and this adds on the same amount of lag/delay, so really all you achieve is the same result, but with a lot more HTTP traffic and worse compatibility.
It was probably more than a year ago that I tested it, so it may have changed.

No, but it is not difficult to implement (if it is not already possible). You can also set up a video server like Blue Iris to do the job for you. I really don’t like the idea of using a Pi that is expected to run your automations and give a snappy UI to also generate multiple MJPEG streams, unless it is for only occasional use and only a single stream.

Thanks for getting them added; I have visited a few times over the years and it seems to be a lot better now.
How do I get PTZ working? I do not see a single example in the ONVIF platform docs that shows what to put in configuration.yaml, or the steps to do it via the UI. Sure, I see it being mentioned, but zero info on where to head to learn how to get it working. I’m slowly working HA out in general, but it would be nice to have a ‘how to get a camera working’ page that goes over all the platforms: when to use X over Y, how to get PTZ, how to cast, and a few other common tasks. This is why your thread was great timing, as it covered the different platforms, some of which appear to be overlapping in design.
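The closest I have pieced together so far is that the integration seems to expose an onvif.ptz service, so my best untested guess at driving it from a script would be something like this (the entity name is made up and the fields are only what I’ve seen mentioned in posts, so check Developer Tools → Services):

script:
  front_door_pan_left:
    sequence:
      - service: onvif.ptz
        data:
          entity_id: camera.front_door   # hypothetical camera entity
          move_mode: ContinuousMove      # guessing at the mode name
          pan: LEFT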
I suspect some of my issues come from not understanding the basics, and there being a gap between the beginner’s guide and being left to set up YAML-based platforms.

Anybody able to get the high quality stream to show in Apple HomeKit? I am able to see the ‘main’ stream nicely in HA within Lovelace and through the Home Assistant iOS app. However, when the camera is sent to HomeKit, only the ‘sub’ stream plays.

What’s also interesting is that the camera shows as ONVIF in HomeKit, even though I am using the generic camera platform.

@moto2000 forgive me if this is obvious, but have you checked for disabled entities? When I was using ONVIF with my cameras it would install with only one feed enabled. FYI, this wasn’t HomeKit, but just a thought because I missed it :slight_smile:

I’m experiencing that camera images are missing once in a while… and the Lovelace UI also seems to hang on some requests…

Looking in the Chrome dev console I can see that requests end up as cancelled… the reason being “stalled”.
http://192.168.1.252:8123/api/camera_proxy/camera.haveost?authSig=xxxx

I believe this happens because all 6 connections between client and server are in use serving the cam feeds.
From Chrome event/debug:
SOCKET_IN_USE [dt=66597+] [ --> source_dependency = 151784 (HTTP_STREAM_JOB)]

I’m currently using the generic camera platform with
still_image_url: http://192.168.1.202/ISAPI/Streaming/channels/301/picture

I have 7 cams…

Any recommendation on which camera: platform/configuration to use instead, one that will not end up locking up my entire Lovelace UI?

I’m fine with updating only cam pics every 2 secs… no live stream needed.

Thanks.


Any of the MJPEG options such as MJPEG, FFMPEG, Proxy, etc. seem to work reasonably well with many streams; otherwise try the generic camera (or better yet the ONVIF camera) with stream: enabled in your config.
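As a rough sketch of what I mean (the URLs, credentials, and stream paths below are placeholders; yours will differ per camera/NVR):

stream:

camera:
  # MJPEG option: just polls an MJPEG endpoint
  - platform: mjpeg
    name: Cam 1 MJPEG
    mjpeg_url: http://192.168.1.201/video.mjpg             # placeholder path
  # Generic camera with the stream integration enabled:
  # still image for thumbnails, RTSP for the live view
  - platform: generic
    name: Cam 1
    still_image_url: http://192.168.1.201/snapshot.jpg     # placeholder path
    stream_source: rtsp://user:[email protected]:554/ch1/sub   # placeholder sub-stream
    framerate: 1   # I believe this throttles how often the still image is polled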

thanks scstraus…

I have an NVR with 7 cams… First of all, should each cam go through the NVR’s IP, or through each cam’s separate IP? (Proxy, for example?)

Can you give an example of your config?

I found that using the ONVIF integration you can actually just link to the NVR and access all the cameras; however, in my case I didn’t get the additional features (such as motion detection signals) that ONVIF provides when connecting to the cameras directly. So I just use the direct integrations to each camera.

There are lots of different ways of setting it up like sparkydave mentions, and if you read my original post, you know they all have advantages and disadvantages. I learned that through trial and error over a couple years of messing with it.

In your case, I don’t think that we know what the issue is yet. There are lots of reasons why 1 camera out of 7 doesn’t show up.

  • General unreliability of the generic camera platform without stream (would likely be different cameras exhibiting the behavior at different times)
  • A limit on max number of streams from NVR (probably would be same camera most of the time)
  • One poorly configured camera (would be same camera every time)
  • A limit on max number of streams from camera
  • A limit on how many streams the client can handle
  • CPU usage on server
  • CPU usage on NVR
  • CPU usage on camera

Each of these has different solutions. So the best thing you can do is try some different ways of setting it up and see what your results are. Try the live555 proxy server (I give my config further up in the thread). Try connecting directly to the cameras. Try different camera platforms like ONVIF which is quite good in my experience. You will get different results and eventually you will be able to triangulate the issue.

My cameras.yaml is here, but cameras aren’t that interesting in their config; my comments there are probably the most useful thing, but my comments in the original post are far more fleshed out and up to date. Everything useful to be said about my config is already in the original post and the thread.

Has anyone played around with cameras in HomeKit? Some of my notes:

  • When you use the HLS stream from Blue Iris, it loads instantly in HomeKit but experiences quite a bit of smearing. Also loads at full resolution, full quality, super high bitrate, and there’s no way to independently turn it down in Blue Iris (typical BI). There’s also about 7 to 8 seconds of lag.
  • When you use the RTMP stream directly from the camera, it takes 7 to 8 seconds to load in HomeKit, but there’s zero lag. And you can use the substream, and adjust it to whatever quality you like from the camera.

I wish I could get the RTMP to HLS conversion to happen all the time so it would load instantly. It seems the Preload Stream option doesn’t start the conversion in the background; it seems like it just loads the RTMP stream.

Does anyone know how to start that RTMP → HLS conversion when HA starts and keep it running, so HomeKit can just access it without any starting delay?

I have 3 Xiaomi Dafang hacked-firmware cameras (RTSP) and 1 Foscam camera (ONVIF). I’ve tried multiple ways to integrate them into HA as suggested in this post, but I’m still unable to find the best option. My Lovelace UI glance card (live/auto) sometimes can’t show the image (greyed out), and the stream is badly delayed (minutes to hours) even with preload stream.

So far I have tried the generic camera with/without stream, and ONVIF with/without stream. I have yet to try FFMPEG, as I think it’s kinda heavy for an RPi4.

I’m not sure whether it is because the hardware (the 4GB RPi4) is not powerful enough to handle the stream, or whether it is because of HA.

With a Pi it can be tricky, because it’s quite heavy on the slow bus and processor to work without stream and you might bring the whole thing to its knees, but with stream you will have a delay. You just have to test what’s possible. Try low resolution, low FPS streams if you want to turn stream: off.
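For example, point the generic camera at the camera’s low-resolution sub-stream instead of the main stream; something like this (the snapshot and RTSP paths are just guesses at the Dafang hack defaults, so double-check what your firmware actually exposes):

camera:
  - platform: generic
    name: Dafang Low Res
    still_image_url: http://192.168.1.60/cgi-bin/currentpic.cgi   # placeholder snapshot URL
    stream_source: rtsp://192.168.1.60:8554/unicast               # placeholder low-res RTSP URL
    framerate: 1   # keep the polling rate low if you run with stream: off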

@scstraus have you tried the new Frigate release, particularly the feature @blakeblackshear added that streams the camera feeds as RTMP feeds?

He primarily did this to reduce the number of connections to the cameras, so everything pipes through Frigate.