I tried all the camera platforms so you don't have to

Hi, I tried the WebRTC addon. It works really well; now we just need someone to make a Lovelace card for it!

For now, would it be possible to embed it using an iframe as described here?

1 Like

The support for surveillance cameras is really a mess in HA… :frowning:

I believe some of it is legacy, some of it internal limitations, and, if I may say so, some less-than-ideal architecture decisions. Let me go through some of these points:

  • choosing MJPEG as the internal format of choice.
    While this was a standard format and easy to support on the front end, most recent cameras either limit its use (like not supporting it at full resolution) or don't handle it properly due to CPU or bandwidth constraints. RTSP is definitely the cornerstone of camera imaging.

  • choosing HLS for streaming
    HLS (or even Low-Latency HLS) is great for streaming at massive scale; a huge part of its design is to be CDN friendly. But it is very intensive not only in terms of backend processing but, and I think more importantly for HA, in terms of the number of requests constantly generated to stay "up to date" with the stream. Try running HA in debug mode and you will see a deluge of "Serving /api/hls/..." requests. That puts a lot of pressure on the connection pool, which HA, being in Python, is not the best at handling. And moving to Low-Latency HLS would make things even worse.
    WebRTC would IMHO be a much better protocol to focus on than MJPEG, with or without HLS.
    For a more detailed argument for WebRTC support, you might be interested in RTC from RTSP.
    Good news: there is already a fairly mature async Python WebRTC library, so there might not even be a need for an add-on.

  • not having a video component
    Not sure why the picture entity got this "dual" personality. I would strongly prefer limiting the picture entity to "static" images, potentially with a settable refresh rate, and having a separate "video" component, even if that one only supports MJPEG for now.
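To put rough numbers on the request pressure mentioned in the HLS point above (a back-of-the-envelope sketch; the 2-second segment length and the one-playlist-plus-one-segment fetch per interval are assumptions, real players vary):

```python
def hls_requests_per_hour(cameras: int, viewers: int, segment_seconds: float = 2.0) -> int:
    """Rough estimate: each viewer re-fetches the playlist and pulls
    one new segment roughly every segment_seconds."""
    requests_per_stream = 2 * (3600 / segment_seconds)  # playlist GET + segment GET
    return int(cameras * viewers * requests_per_stream)

# 4 cameras, each shown in 2 browser sessions:
print(hls_requests_per_hour(4, 2))  # 28800 requests/hour against HA's connection pool
```

A persistent WebRTC session, by contrast, costs one connection per viewer after setup, which is the crux of the argument.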

Bottom line, I think it's time for a large refactoring of the camera code…

5 Likes

I guess HLS was a straightforward choice at the time, due to widespread client side support and the developer of the stream component was probably more familiar with it.

Having a component that serves WebRTC streams from HA in the same way the stream component currently proxies the HLS through a websocket would be awesome. If I didn't suck so much at Python I would try this myself. I wish you could write HA components in NodeJS or C++ :slightly_smiling_face:

Oh and not sure if I misunderstood what you were saying in your point 2 above, but MJPEG has nothing to do with HLS. The HLS proxy in HA simply repackages the RTSP stream directly, MJPEG is not involved in that specific pipeline. MJPEG is an alternative to HLS.

1 Like

I guess HLS was a straightforward choice at the time, due to widespread client side support and the developer of the stream component was probably more familiar with it.

I agree. This was NOT a bad choice, and I would NOT have been able to get even close to implementing it, especially in Python. I am very grateful for all the work done by many contributors; my point was to explain why, for the case being made in this thread, it is fundamentally, IMHO, not the best architecture.

If I didn't suck so much at Python I would try this myself. I wish you could write HA components in NodeJS or C++ :slightly_smiling_face:

Amen to that! Having spent more than 2 months putting together a fairly simple integration, I hear you! First Python project in 25+ years; I feel your pain :slight_smile:
That said, many of the things we are trying to address here are general code principles and architecture, so you are more than welcome to participate.

Oh and not sure if I misunderstood what you were saying in your point 2 above, but MJPEG has nothing to do with HLS. The HLS proxy in HA simply repackages the RTSP stream directly, MJPEG is not involved in that specific pipeline. MJPEG is an alternative to HLS.

You're right, I was a bit fast (and likely confused) in mixing up HLS and MJPEG. My apologies…
The thing is, though (and not to excuse myself), the pipeline is VERY confusing.
I saw a couple of comments in that thread asking for "working examples", and despite the work done by scstraus it is very hard to simply summarize the tradeoffs inherent to each configuration.

I believe this thread is a wake-up call that we badly need a good solution for real-time cameras in HA. I know I've seen "HA is NOT a replacement for a DVR" before. That's totally fine, but being able to display feeds from local cameras should not be such a hassle.

I'm still trying to familiarize myself with the community process on the dev side of things and don't want to hijack this thread… I do believe, though, that together, users as well as past and present developers, we can start a forward-looking process to make HA better in this particular area…

Suggestions and comments most welcome…

1 Like

Kind of a stupid question but how do you disable stream?
It's been part of default_config since mid 2019…
Do you also get rid of default_config and if so, would you mind sharing what you replace it with?

Oh, and one more thingā€¦
Realized today that an FFmpeg camera will still use HLS if stream is loaded :frowning:

Was thinking about proposing a change to that component so the SUPPORT_STREAM property could be turned off (an additional config property on FFmpeg, support_stream, with a default of true for backward compatibility), allowing MJPEG streams even in the presence of the stream component.

Opinions? Comments?

1 Like

Correct, I don't use default_config: I just have the relevant individual options listed in my config file.

Just list whichever of the options in here that you need. In general, most of them will be included simply by creating the appropriate item elsewhere in your configuration, e.g. input_boolean.
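For example, a pared-down replacement might look like this (an illustrative, partial sketch; check the default_config documentation for the full current list of integrations it loads, and note that stream: is simply left out):

```yaml
# Instead of default_config:, load the pieces individually.
# Partial, illustrative list only -- NOT the complete set.
automation: !include automations.yaml
config:
frontend:
history:
logbook:
mobile_app:
person:
script: !include scripts.yaml
sun:
system_health:
# stream: deliberately omitted
```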

For me I use almost all of them except for zeroconf I think. stream: is not listed there so are you sure it is included? Perhaps the docs are not up to date?

Yes, it's not in the doc… Not sure why, but the code for default_config loads the stream component if the av library is present.

```python
async def async_setup(hass, config):
    """Initialize default configuration."""
    if av is None:
        return True

    return await async_setup_component(hass, "stream", config)
```

I hear you regarding loading the various components by hand but I noticed the core team keeps adding new things to it to support the new features.


It was kind of nice to get these updates automatically…

1 Like

Really, stream: should be an option that we can toggle at the individual camera level. It's silly to have it as a system-wide option.

Or even on a per-view basis…
The funny thing is that there is already some code for it. I have not dug into the frontend code yet, but the picture entity makes a request for a camera/stream URL. The server then calls def request_stream(hass, stream_source, *, fmt="hls", keepalive=False, options=None) (notice the fmt option there, so there could be room for WebRTC…). If that call fails (it will if stream wasn't loaded) the view reverts to the /api/camera_proxy_stream/{entity_id} URL, which delivers an MJPEG "stream".
So, unless I'm grossly mistaken, it shouldn't be that hard to give the view 2 "live" options, one for HLS and one for MJPEG.
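That fallback can be sketched as follows (a toy illustration; pick_live_url is a made-up name and the HLS playlist path is an assumed shape, only the /api/camera_proxy_stream/ URL comes from the actual code):

```python
def pick_live_url(entity_id: str, stream_loaded: bool) -> str:
    """Illustrative only: mirror the frontend fallback described above."""
    if stream_loaded:
        # roughly what a successful request_stream(..., fmt="hls") hands back
        return f"/api/hls/{entity_id}/playlist.m3u8"  # hypothetical path shape
    # stream not loaded: the view reverts to the MJPEG "stream" proxy
    return f"/api/camera_proxy_stream/{entity_id}"

print(pick_live_url("camera.front_door", stream_loaded=False))
```

Adding a second "live" option would essentially mean letting the frontend choose the fallback branch deliberately instead of only on failure.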
On another, but related, topic: there must be something fishy in the HLS/av stack that stops the stream after a while (still trying to nail down the threshold, but it's on the order of a few hours).

1 Like

I may have some good news for you (and potentially others)

After digging way too long into the HA code I found a decent place to change some code.
Unfortunately it's in the haffmpeg library, so I'm not sure how easy it will be to submit a patch and/or how long it will take to get it into a release.

But, with these changes to camera.py I believe I have killed 2 nasty birds with one stone.

  1. only ONE ffmpeg process per camera (assuming the ffmpeg options are the same) no matter how many views or sessions.
  2. no more "shearing" effects

I still want to do some testing, documentation and cleanup, but if you want to test an "early access" build I'll be happy to walk you through the install

3 Likes

Looks nice !

A HUGE HUGE warning for those tempted (that certainly includes me) to disable 'stream'…

I found out the hard way that Apple's WebKit has a bug that will kill you (or rather your server/cameras). It affects not only Safari but also the Home Assistant app on iOS. The bug was reported in 2006 (!!!) so I doubt it will ever get fixed.

Because of this bug, any action (like changing tabs) that hides and then redisplays a camera feed will open a new connection to the server WITHOUT closing the existing ones. With the default Home Assistant code, each of those connections forks yet another ffmpeg process (I ended up with 54 pretty quickly…).
Even with my changes, the combined output bandwidth eventually becomes unbearable for the server.

Bottom line, AVOID MJPEG if you are an Apple user!!!

So I am in a bit of a conundrum here…

Though I am quite confident my new code for CameraMjpeg works (no issues after days on my dev machine), being all on Apple/iOS at home, I cannot really test and validate it in my "production" environment :frowning:

I see 3 options:

  1. do nothing… MJPEG seems to work well enough for some of you, even with the frequent image corruption
  2. still try to get my changes into the Home Assistant core… not sure if/when this will happen…
  3. make an ffmpeg2 custom camera component with the exact same configuration as the ffmpeg camera and bring the new code to it. All you'd have to do to try it (beyond installing it through HACS of course) is change ffmpeg to ffmpeg2 in your config file…
    That's a bit more work for me but also the safer route.

Please let me know

2 Likes

I've been noticing similar things lately too. I used to open many tabs with Home Assistant in them to compare things, or to fire services while watching the results, etc. But I realized that recent versions of hass, since a lot of the UI changes, didn't like this and would freeze up when I did it, so I stopped. Since I stopped doing that, my cameras have been very quick to load and quite reliable compared to previously. I suspect I was also causing this to happen sometimes.

Anyhow, to answer your question, ffmpeg cameras without stream are working reasonably well for me, but I will always take something better, so would be happy to test an mjpeg2 custom component (would be great if it was in HACS). That seems like the logical first step, but there's no reason not to do that and try to get it into core if it works better.

I would like to see how things work with only 1 ffmpeg instance, because I still suspect there's funny business happening with zombie ffmpeg processes, etc… I would definitely feel more secure if I knew there was just one instance.

Not sure I provided you much insight there, as I kind of said yes to all 3 options, but that's where I am.

Anyhow, to answer your question, ffmpeg cameras without stream are working reasonably well for me, but I will always take something better, so would be happy to test an mjpeg2 custom component (would be great if it was in HACS). That seems like the logical first step, but there's no reason not to do that and try to get it into core if it works better.

Sounds good. Won't be able to work on it for the next few days, but I'll get on it soon.

I like the idea of a custom integration.

I don't think I'm gonna bother with the whole ConfigFlow for this, but I might be able to add a few sensors:

  • number of ffmpeg processes
  • number of open streams
  • maybe: average size of a frame
  • maybe: average frame rate across all clients

How does that sound? Anything else I should think about?

1 Like

Sensors would be nice, but the feature I've always wanted is to know exactly what went wrong when it failed to display the stream. Make verbose mode quite verbose so that we can actually figure out what went wrong when it doesn't work, and potentially fix it.

I hear you, but based on what I've seen, all the failures to display (apart from image corruption/"smearing") are due to a combination of the browser and the frontend code. Firefox seems to behave the best, Safari is, as I previously described, a catastrophe, and Chrome is generally OK though you sometimes have to reload the page.
Keep in mind also that browsers limit the number of open connections to a given domain to 6. You are at the limit with 4 cameras displayed; add a fifth one, for example, and you'll start seeing problems.
I'll definitely put as much tracing as possible on the server side, but be aware that it might not always explain what the UI ends up rendering.
Hope this makes sense.

1 Like

Yes, that's useful information for sure. Explains some problems. I am a Firefox user. Generally, since I stopped opening multiple pages I'm doing quite well. It seems like I still have the problem even when I'm not viewing the tab with the camera, but I will experiment with how many tabs I can open if I don't open the camera tabs. It might help me when I'm testing.

Recently got a cheap Tapo C200 camera: no lag in the native Android app, but terrible when viewed in HA, which was a shame. Found that disabling "stream" makes it actually usable, but on my Pi4 that pops the CPU usage up from about 13% to 50-60% when viewing, so not really a proper solution. Which led me to this thread!

I couldn't see anything in regards to this, but is there a performance gain to feeding cameras into something like "motion" (just for general NVR capabilities anyway) and then using the "motion" integration in HA?

I don't understand what you mean by that. Is "motion" some software you want to use, or what?