I tried all the camera platforms so you don't have to

This idea has been rattling around in my head for a while now: https://github.com/blakeblackshear/frigate/issues/338

2 Likes

I like the recap idea; I see you assigned yourself an issue a year ago to do something like this :slight_smile:

You ever mess with cameras that depend on CloudEdge? I use a bunch of Zumimall wireless IP battery cameras and while they’re great on their own I’d love to integrate them with HA if at all possible.

I’ve tried the new Frigate custom component with RTMP, but I also get ~10 sec of lag in HA… I’m using the stream: component as well.

EDIT: @scstraus maybe something that could help with the camera lag in general: Realtime camera streaming without any delay - RTSP2WebRTC

1 Like

Yes, I don’t think there’s any way to avoid the delay when using the stream: component. It’s simply a byproduct of the protocol.

The WebRTC component looks extremely promising! I’m going to give it a try. Thank you for that!

@scstraus you can ask me any questions about WebRTC. Your research is very good. I also hate lag and have been looking for a solution for a very long time :slight_smile:

2 Likes

Hi, I tried the WebRTC add-on. It works really well; now we just need someone to make a Lovelace card for it!

For now, would it be possible to embed it using an iframe as described here?

1 Like

The support for surveillance cameras is really a mess in HA… :frowning:

I believe some of it is legacy, some of it internal limitations and, if I may say so, less than ideal architecture decisions. Let me go through some of these points:

  • choosing MJPEG as the internal format of choice
    While MJPEG was a standard format and easy to support on the front end, most recent cameras either limit its use (e.g. not offering it at full resolution) or don’t handle it properly because of the CPU or bandwidth it requires. RTSP is definitely the cornerstone of camera imaging.

  • choosing HLS for streaming
    HLS (or even HLS Low Latency) is great for streaming at massive scale; a huge part of its design is to be CDN friendly. But it is very intensive, not only in terms of processing on the backend but, I think more importantly for HA, in terms of the number of requests constantly generated to stay “up to date” with the stream. Try running HA in debug mode and you will see a deluge of "Serving /api/hls/..." requests. That puts a lot of pressure on the connection pool, which HA, being in Python, is not the best at handling. And moving to HLS/LL will make things even worse.
    WebRTC would IMHO be a much, much better protocol to focus on rather than MJPEG with or without HLS.
    For a more detailed argument for WebRTC support, you might be interested in RTC from RTSP.
    The good news is that there is already a fairly mature async Python WebRTC library, so there might not be a need for an add-on (see the sketch after this list).

  • not having a video component
    I’m not sure why the picture entity got this “dual” personality. I would strongly prefer limiting the picture entity to “static” images, potentially with a settable refresh rate, and having a separate “video” component, even if that one only supports MJPEG for now.
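
For reference, here is a minimal sketch of what answering a browser’s WebRTC offer with aiortc (a fairly mature async Python WebRTC library) could look like. This is an illustration, not HA code; handle_offer, the signalling transport and rtsp_url are all assumptions:

```python
from aiortc import RTCPeerConnection, RTCSessionDescription
from aiortc.contrib.media import MediaPlayer

async def handle_offer(offer_sdp: str, rtsp_url: str) -> str:
    """Answer a browser's WebRTC offer by relaying the camera's RTSP stream."""
    pc = RTCPeerConnection()
    player = MediaPlayer(rtsp_url)  # reads and decodes the RTSP source
    if player.video:
        pc.addTrack(player.video)   # relay the camera's video track
    await pc.setRemoteDescription(
        RTCSessionDescription(sdp=offer_sdp, type="offer")
    )
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)  # waits for ICE gathering
    return pc.localDescription.sdp        # send this back to the browser
```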

Bottom line, I think it’s time for a large refactoring of the camera code…

5 Likes

I guess HLS was a straightforward choice at the time, due to widespread client side support and the developer of the stream component was probably more familiar with it.

Having a component that serves WebRTC streams from HA in the same way the stream component currently proxies the HLS through a websocket would be awesome. If I didn’t suck so much at Python I would try this myself. I wish you could write HA components in NodeJS or C++ :slightly_smiling_face:

Oh and not sure if I misunderstood what you were saying in your point 2 above, but MJPEG has nothing to do with HLS. The HLS proxy in HA simply repackages the RTSP stream directly, MJPEG is not involved in that specific pipeline. MJPEG is an alternative to HLS.
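
To make “repackages” concrete: it is a remux, i.e. the encoded bitstream is copied from the RTSP transport into HLS segments without any transcoding. Roughly equivalent to the following (an illustration only; the actual stream component does this in-process with PyAV, and the camera URL is made up):

```python
import subprocess

# Copy (remux) the camera's encoded streams from RTSP into ~2 s HLS
# segments; "-c copy" means no re-encoding happens.
subprocess.run([
    "ffmpeg",
    "-i", "rtsp://camera.local/stream",  # hypothetical source
    "-c", "copy",
    "-f", "hls",
    "-hls_time", "2",
    "/tmp/stream/playlist.m3u8",
])
```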

1 Like

I guess HLS was a straightforward choice at the time, due to widespread client side support and the developer of the stream component was probably more familiar with it.

I agree. This was NOT a bad choice and I would NOT have been able to get even close to implementing it, especially in Python. I am very grateful for all the work done by many contributors; my point was to explain why, for the case being made in this thread, it is IMHO fundamentally not the best architecture.

If I didn’t suck so much at Python I would try this myself. I wish you could write HA components in NodeJS or C++ :slightly_smiling_face:

Amen to that! Having spent more than 2 months putting together a fairly simple integration, I hear you! First Python project in 25+ years, I feel your pain :slight_smile:
That said, many of the things we are trying to address here are more general code principles and architecture, so you are more than welcome to participate.

Oh and not sure if I misunderstood what you were saying in your point 2 above, but MJPEG has nothing to do with HLS. The HLS proxy in HA simply repackages the RTSP stream directly, MJPEG is not involved in that specific pipeline. MJPEG is an alternative to HLS.

You’re right, I was a bit fast (and likely confused) in mixing up HLS and MJPEG. My apologies…
The thing is, though (and not to excuse myself from anything), the pipeline is VERY confusing.
I saw a couple of comments in that thread asking for “working examples”, and despite the work done by scstraus it is very hard to simply summarize the tradeoffs inherent to each configuration.

I believe this thread is a wake-up call that we badly need a good solution for real-time cameras in HA. I know I’ve seen “HA is NOT a replacement for a DVR” before. That’s totally fine, but being able to display feeds from local cameras should not be such a hassle.

I’m still trying to familiarize myself with the community process on the dev side of things and don’t want to hijack this thread… I do believe, though, that together, users as well as past and present developers, we can start a forward-looking process to make HA better in this particular area…

Suggestions and comments most welcome…

1 Like

Kind of a stupid question but how do you disable stream?
It’s been part of default_config since mid 2019…
Do you also get rid of default_config and if so, would you mind sharing what you replace it with?

Oh, and one more thing…
Realized today that an FFmpeg camera will still use HLS if stream is loaded :frowning:

Was thinking about proposing a change to that component so the SUPPORT_STREAM property could be turned off (an additional config property on the FFmpeg camera, support_stream, with a default of true for backward compatibility), allowing MJPEG streams even in the presence of the stream component. Something like the sketch below.
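
A minimal sketch of the idea, assuming HA’s existing SUPPORT_STREAM feature flag and Camera base class; the support_stream config key is the proposed addition:

```python
from homeassistant.components.camera import SUPPORT_STREAM, Camera

class FFmpegCamera(Camera):
    """Sketch: an ffmpeg camera with an opt-out for HLS streaming."""

    def __init__(self, config):
        super().__init__()
        # Proposed key, defaulting to True for backward compatibility.
        self._support_stream = config.get("support_stream", True)

    @property
    def supported_features(self):
        # When this returns 0, the frontend falls back to the MJPEG
        # /api/camera_proxy_stream/{entity_id} endpoint.
        return SUPPORT_STREAM if self._support_stream else 0
```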

Opinions? Comments?

1 Like

Correct, I don’t use default_config: I just have the relevant individual options listed in my config file.

Just list whichever of the options in here you need. In general, most of them will be enabled simply by creating the appropriate item elsewhere in your configuration, e.g. input_boolean etc.
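
For example, an abridged replacement for default_config: might look like this in configuration.yaml (an illustration only; check the default_config source for the full, current list):

```yaml
# instead of default_config:
config:
frontend:
history:
logbook:
mobile_app:
person:
sun:
system_health:
automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml
```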

For me, I use almost all of them except for zeroconf, I think. stream: is not listed there, so are you sure it is included? Perhaps the docs are not up to date?

Yes, it’s not in the docs… Not sure why, but the code for default_config loads the stream component if the av library is present:

```python
# Abridged from homeassistant/components/default_config/__init__.py;
# the guarded import reflects how the component checks for PyAV.
try:
    import av  # PyAV, the library backing the stream component
except ImportError:
    av = None

from homeassistant.setup import async_setup_component


async def async_setup(hass, config):
    """Initialize default configuration."""
    if av is None:
        return True

    return await async_setup_component(hass, "stream", config)
```

I hear you regarding loading the various components by hand, but I noticed the core team keeps adding new things to default_config to support new features.


It was kind of nice to get these updates automatically…

1 Like

Really, stream: should be an option that we can toggle at the individual camera level. It’s silly to have it as a system-wide option.

Or even on a per-view basis…
The funny thing is that there is already some code for it. I have not dug into the front-end code yet, but the picture entity makes a request for a camera/stream URL. The server then calls def request_stream(hass, stream_source, *, fmt="hls", keepalive=False, options=None) (notice the fmt option there, so there could be room for webrtc…). If that call fails (it will if stream wasn’t loaded), the view reverts to using the /api/camera_proxy_stream/{entity_id} URL, which delivers an MJPEG “stream”.
So, unless I’m grossly mistaken, it shouldn’t be that hard to have the view offer 2 “live” options, one for HLS and one for MJPEG. Roughly the fallback sketched below.
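
A hedged sketch of that fallback, using the request_stream signature quoted above (get_live_url and the camera attributes are illustrative, not actual HA code):

```python
from homeassistant.components.stream import request_stream

def get_live_url(hass, camera):
    """Illustrative: prefer an HLS stream, fall back to the MJPEG proxy."""
    try:
        # Returns an /api/hls/... playlist URL when stream is loaded.
        return request_stream(hass, camera.stream_source, fmt="hls")
    except Exception:
        # stream not loaded (or setup failed): use the MJPEG proxy.
        return f"/api/camera_proxy_stream/{camera.entity_id}"
```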
On another, but related, topic: there must be something fishy in the HLS/av stack that stops the stream after a while (still trying to nail down the threshold, but it’s on the order of a few hours).

1 Like

I may have some good news for you (and potentially others)

After digging way too long into the HA code, I found a decent place to change some code.
Unfortunately it’s in the haffmpeg library, so I’m not sure how easy it will be to submit a patch and/or how long it will take to get it into a release.

But with these changes to camera.py, I believe I have killed 2 nasty birds with one stone:

  1. only ONE ffmpeg process per camera (assuming the ffmpeg options are the same), no matter how many views or sessions (roughly the fan-out sketched below)
  2. no more “shearing” effects
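
A hedged sketch of the single-process idea (not the actual patch): one reader task consumes the lone ffmpeg process per camera and fans its output out to every connected session.

```python
import asyncio

class SharedStream:
    """Illustrative fan-out: one ffmpeg reader feeding many viewers."""

    def __init__(self):
        self._subscribers: set = set()

    def subscribe(self) -> asyncio.Queue:
        queue = asyncio.Queue(maxsize=10)
        self._subscribers.add(queue)
        return queue

    def unsubscribe(self, queue: asyncio.Queue) -> None:
        self._subscribers.discard(queue)

    def publish(self, chunk: bytes) -> None:
        # Called by the single reader task; a slow client drops its
        # oldest chunk instead of stalling everyone else.
        for queue in list(self._subscribers):
            if queue.full():
                queue.get_nowait()
            queue.put_nowait(chunk)
```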

I still want to do some testing, documentation and clean-up, but if you want to test an “early access” version I’ll be happy to walk you through the install.

3 Likes

Looks nice !

A HUGE HUGE warning for those tempted (that certainly includes me) to disable ‘stream’…

I found out the hard way that Apple’s WebKit has a bug that will kill you (or rather your server/cameras). It affects not only Safari but also the Home Assistant app on iOS. The bug was reported in 2006 (!!!), so I doubt it will ever get fixed.

Because of this bug, any action (like changing tabs) that hides and then redisplays a camera feed will open a new connection to the server WITHOUT closing the existing ones. With default Home Assistant, each of those connections forks yet another ffmpeg process (I ended up with 54 pretty quickly…).
Even with my changes, the combined output bandwidth eventually becomes unbearable for the server.

Bottom line: AVOID MJPEG if you are an Apple user!!!

So I am in a bit of a conundrum here…

Though I am quite confident my new code for CameraMjpeg works (no issues after days on my dev machine), being all on Apple/iOS I cannot really test and validate it in my “production” environment :frowning:

I see 3 options:

  1. do nothing… MJPEG seems to work well enough for some of you… even with the frequent image corruption
  2. still try to get my changes into the Home Assistant core… not sure when/if this will happen…
  3. make an ffmpeg2 custom camera component with the exact same configuration as the ffmpeg camera and bring the new code to it. All you’d have to do to try it (beyond installing it through HACS, of course) would be to change ffmpeg to ffmpeg2 in your config file…
    That is a bit more work for me but also the safer route.
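
For option 3, the change in configuration.yaml would be as small as this (the ffmpeg2 platform name being the hypothetical new component):

```yaml
camera:
  - platform: ffmpeg2   # was: ffmpeg
    input: rtsp://camera.local/stream
```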

please let me know

2 Likes

I’ve been noticing similar things lately too. I used to open many tabs with Home Assistant in them to compare things, or to fire services while watching the results, etc. But I realized that recent versions of HA, since a lot of the UI changes, didn’t like this and would freeze up when I did it, so I stopped. Since I stopped doing that, my cameras have been very quick to load and quite reliable compared to previously. I suspect I was also causing this to happen sometimes.

Anyhow, to answer your question: ffmpeg cameras without stream are working reasonably well for me, but I will always take something better, so I’d be happy to test an ffmpeg2 custom component (it would be great if it was in HACS). That seems like the logical first step, and there’s no reason not to do that and also try to get it into core if it works better.

I would like to see how things work with only 1 ffmpeg instance, because I still suspect there’s funny business happening with zombie ffmpeg processes, etc. I would definitely feel more secure if I knew there was just one instance.

Not sure I provided you much insight there, as I kind of said yes to all 3 options, but that’s where I am.