I tried all the camera platforms so you don't have to

No, despite being one of the first Frigate users, I have not yet had a chance to upgrade. Previous releases did not really encourage you to use the streams they created as full-time cameras… It looks like that might have changed, so it might be a candidate for replacing my RTSP proxy.

I would test the hell out of this :grinning:. What’s the latency like on RTMP now? Does it also proxy the stream so that it’s just keeping one connection open to the camera even if I have 5 devices streaming the camera from frigate in the frontend?

The latency for RTMP from Frigate should be similar to pulling it directly from the camera. Frigate takes the camera stream and immediately rebroadcasts it over RTMP. FFmpeg makes one connection to the camera and can pipe the output to multiple locations, so a single connection can be used for 24/7 recording, clips, detection, and RTMP. From there you can connect multiple clients to Frigate’s RTMP endpoint. Since it is using nginx’s RTMP module, I expect it can handle many simultaneous connections. All of that can be done with a single connection to the camera itself.
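To illustrate the "one connection, many outputs" idea, here is a small sketch that builds an ffmpeg command line using the tee muxer. The URLs, paths, and helper name are made-up placeholders for illustration; this is not Frigate's actual command line.

```python
# Hypothetical sketch: one camera connection fanned out to several
# outputs via ffmpeg's tee muxer (all names/URLs are placeholders).

def build_ffmpeg_fanout(camera_url: str, rtmp_url: str, record_path: str) -> list:
    """Build an ffmpeg argument list that reads the camera once and
    fans the stream out to an RTMP endpoint and a recording file."""
    tee_targets = "|".join([
        f"[f=flv]{rtmp_url}",          # rebroadcast over RTMP
        f"[f=segment]{record_path}",   # 24/7 recording segments
    ])
    return [
        "ffmpeg",
        "-i", camera_url,   # the single connection to the camera
        "-c", "copy",       # no re-encoding, just remuxing
        "-f", "tee", tee_targets,
    ]

cmd = build_ffmpeg_fanout(
    "rtsp://camera.local:554/stream",
    "rtmp://frigate.local/live/front_door",
    "/media/front_door_%Y%m%d.mp4",
)
```

Clients then connect to the RTMP endpoint rather than the camera, so the camera only ever sees one consumer.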

1 Like

This idea has been rattling around in my head for a while now: https://github.com/blakeblackshear/frigate/issues/338

2 Likes

I like that, going by the recap, you had already assigned yourself to do something like this a year ago :slight_smile:

You ever mess with cameras that depend on CloudEdge? I use a bunch of Zumimall wireless IP battery cameras and while they’re great on their own I’d love to integrate them with HA if at all possible.

I’ve tried the new Frigate custom component with RTMP, but I also have ~10 sec of lag in HA… I’m using the stream: component as well.

EDIT: @scstraus maybe something that could help with the camera lag in general: Realtime camera streaming without any delay - RTSP2WebRTC

1 Like

Yes, I don’t think there’s any way to avoid the delay when using the stream: component. It’s simply a byproduct of the protocol.

The WebRTC component looks extremely promising! I’m going to give it a try. Thank you for that!

@scstraus feel free to ask me any questions about WebRTC. Your research is very good. I also hate lag and have been looking for a solution for a very long time :slight_smile:

1 Like

Hi, I tried the WebRTC add-on. It works really well; now we just need someone to make a Lovelace card for it!

For now, would it be possible to embed it using an iframe as described here?

1 Like

The support for surveillance cameras is really a mess in HA… :frowning:

I believe some of it is legacy, some of it internal limitations and, if I may say so, less than ideal architecture decisions. Let me go through some of these points:

  • choosing MJPEG as the internal format of choice.
    While this was a standard format and easy to support on the frontend, most recent cameras either limit its use (e.g. not supporting it at full resolution) or don’t handle it well due to CPU or bandwidth constraints. RTSP is definitely the cornerstone of camera streaming.

  • choosing HLS for streaming
    HLS (or even HLS Low Latency) is great for streaming at massive scale; a huge part of its design is to be CDN friendly. But it is very intensive, not only in terms of processing on the backend but, I think more importantly for HA, in terms of the number of requests constantly generated to stay “up to date” with the stream. Try running HA in debug mode and you will see a deluge of “Serving /api/hls/...” requests. That puts a lot of pressure on the connection pool, which HA, being in Python, is not the best at handling. And moving to HLS-LL will make things even worse.
    WebRTC would be IMHO a much much better protocol to focus on rather than MJPEG with or without HLS.
    For a more detailed argument for WebRTC support, you might be interested in RTC from RTSP.
    Good news is that there is already a fairly mature async Python WebRTC library, so there might not be a need for an add-on.

  • not having a video component
    not sure why the picture entity got this “dual” personality. I would strongly prefer limiting the picture entity to “static” images, potentially with a settable refresh rate, and having a separate “video” component, even if that one only supports MJPEG for now.
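To make the request-pressure point in the HLS bullet concrete, here is a back-of-the-envelope sketch. The numbers and the 2-requests-per-interval model are assumptions for illustration, not measurements of HA's actual stream component.

```python
# Back-of-the-envelope illustration of HLS request pressure
# (numbers and model are assumptions, not measurements).

def hls_requests_per_minute(cameras: int, viewers_per_camera: int,
                            segment_seconds: float = 2.0) -> float:
    """Roughly, each viewer re-fetches the playlist and one segment
    every segment interval, i.e. ~2 requests per interval per stream."""
    streams = cameras * viewers_per_camera
    intervals_per_minute = 60.0 / segment_seconds
    return streams * 2 * intervals_per_minute

# e.g. 5 cameras with 2 viewers each and 2 s segments
# -> 600 requests per minute hitting HA's connection pool
```

Even modest setups quickly generate hundreds of requests per minute, which is the "deluge" visible in the debug log.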

Bottom line, I think it’s time for a large refactoring of the camera code…

4 Likes

I guess HLS was a straightforward choice at the time, due to widespread client side support and the developer of the stream component was probably more familiar with it.

Having a component that serves WebRTC streams from HA in the same way the stream component currently proxies the HLS through a websocket would be awesome. If I didn’t suck so much at Python I would try this myself. I wish you could write HA components in NodeJS or C++ :slightly_smiling_face:

Oh, and not sure if I misunderstood what you were saying in your point 2 above, but MJPEG has nothing to do with HLS. The HLS proxy in HA simply repackages the RTSP stream directly; MJPEG is not involved in that specific pipeline. MJPEG is an alternative to HLS.
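For anyone following along, the difference is easy to see in code: an MJPEG "stream" is just an HTTP response of type multipart/x-mixed-replace where each part is a complete JPEG. This is an illustrative sketch, not Home Assistant's actual camera proxy implementation.

```python
# Minimal sketch of how an MJPEG stream is framed on the wire
# (illustrative only, not HA's camera_proxy_stream code).

BOUNDARY = "frame"

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG image as a multipart/x-mixed-replace chunk."""
    header = (
        f"--{BOUNDARY}\r\n"
        f"Content-Type: image/jpeg\r\n"
        f"Content-Length: {len(jpeg_bytes)}\r\n\r\n"
    ).encode()
    return header + jpeg_bytes + b"\r\n"

def mjpeg_stream(frames):
    """Yield a sequence of JPEG frames as an MJPEG response body."""
    for frame in frames:
        yield mjpeg_part(frame)
```

Every frame is a full image, which is why MJPEG has effectively zero protocol latency but much higher bandwidth than an H.264-based pipeline like HLS.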

1 Like

I guess HLS was a straightforward choice at the time, due to widespread client side support and the developer of the stream component was probably more familiar with it.

I agree. This was NOT a bad choice and I would NOT have been able to get even close to implementing it, especially in python. I am very grateful for all the work done by many contributors, my point was trying to explain why, in the case being made in this thread, it is fundamentally, IMHO, not the best architecture.

If I didn’t suck so much at Python I would try this myself. I wish you could write HA components in NodeJS or C++ :slightly_smiling_face:

Amen to that! Having spent more than 2 months putting together a fairly simple integration, I hear you! First Python project in 25+ years, I feel your pain :slight_smile:
This said, many of the things we are trying to address here are more general code principles and architecture, so you are more than welcome to participate.

Oh and not sure if I misunderstood what you were saying in your point 2 above, but MJPEG has nothing to do with HLS. The HLS proxy in HA simply repackages the RTSP stream directly, MJPEG is not involved in that specific pipeline. MJPEG is an alternative to HLS.

You’re right, I was a bit fast (and likely confused) in mixing up HLS and MJPEG. My apologies…
The thing is, though (and not to excuse myself from anything), the pipeline is VERY confusing.
I saw a couple of comments in that thread asking for “working examples”, and despite the work done by scstraus it is very hard to summarize the tradeoffs inherent to each configuration.

I believe this thread is a wake-up call that we badly need a good solution for real-time cameras in HA. I know I’ve seen “HA is NOT a replacement for a DVR” before. That’s totally fine, but being able to display feeds from local cameras should not be such a hassle.

I’m still trying to familiarize myself with the community process on the dev side of things and don’t want to hijack this thread… I do believe, though, that together, users as well as past and present developers, we can start a forward-looking process to make HA better in this particular area…

Suggestions and comments most welcome…

1 Like

Kind of a stupid question but how do you disable stream?
It’s been part of default_config since mid 2019…
Do you also get rid of default_config and if so, would you mind sharing what you replace it with?

Oh, and one more thing…
Realized today that an FFmpeg Camera will still use HLS if stream is loaded :frowning:

Was thinking about proposing a change to that component so the SUPPORT_STREAM property could be turned off (an additional config property on the FFmpeg camera, support_stream, with a default of true for backward compatibility), allowing MJPEG streams even in the presence of the stream component.
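A rough sketch of what I have in mind, heavily simplified: the class shape and the constant value here are illustrative, not the real homeassistant.components.ffmpeg.camera code.

```python
# Hypothetical sketch of the proposed support_stream option
# (simplified; not the actual HA FFmpeg camera implementation).

SUPPORT_STREAM = 2  # bit flag, as used by the camera platform

class FFmpegCamera:
    def __init__(self, support_stream: bool = True):
        # default of True keeps backward compatibility
        self._support_stream = support_stream

    @property
    def supported_features(self) -> int:
        # with support_stream: false, the frontend would fall back to
        # the MJPEG proxy instead of requesting an HLS stream
        return SUPPORT_STREAM if self._support_stream else 0
```

The frontend already keys off supported_features to decide whether to request an HLS stream, so clearing the flag per camera should be enough to force the MJPEG path.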

Opinions? Comments?

1 Like

Correct, I don’t use default_config: I just have the relevant individual options listed in my config file.

Just list whichever of the options in here you need. In general, most of them will be included simply by creating the appropriate item elsewhere in your configuration, e.g. input_boolean etc.

For me, I use almost all of them except zeroconf, I think. stream: is not listed there, so are you sure it is included? Perhaps the docs are not up to date?

Yes, it’s not in the docs… Not sure why, but the code for default_config loads the stream component if the av library is present.

```python
async def async_setup(hass, config):
    """Initialize default configuration."""
    if av is None:
        return True

    return await async_setup_component(hass, "stream", config)
```

I hear you regarding loading the various components by hand but I noticed the core team keeps adding new things to it to support the new features.


It was kind of nice to get these updates automatically…

1 Like

Really stream: should be an option that we should be able to toggle on the individual camera level. It’s silly to have it as a system-wide option.

Or even on a per-view basis…
The funny thing is that there is already some code for it. I have not dug into the frontend code yet, but the picture entity makes a request for a camera/stream URL. The server then calls def request_stream(hass, stream_source, *, fmt="hls", keepalive=False, options=None) (notice the fmt option there, so there could be room for WebRTC…). If that call fails (it will if stream wasn’t loaded) the view reverts to using the /api/camera_proxy_stream/{entity_id} URL, which delivers an MJPEG “stream”.
So, unless I’m grossly mistaken, it shouldn’t be that hard to have the view offer 2 “live” options, one for HLS and one for MJPEG.
On another, but related, topic: there must be something fishy in the HLS/av stack that stops the stream after a while (still trying to nail down the threshold, but it’s on the order of a few hours).
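The fallback behaviour described above can be sketched in a few lines. The function name and URL patterns here are illustrative stand-ins mirroring the thread, not the actual frontend code.

```python
# Illustrative sketch of the HLS -> MJPEG fallback described above
# (function name and URL shapes are hypothetical, not HA's real code).

def pick_live_url(entity_id: str, stream_loaded: bool, fmt: str = "hls") -> str:
    """Return the URL a view would use for the live feed."""
    if stream_loaded:
        # request_stream(hass, source, fmt=fmt, ...) would succeed here
        return f"/api/{fmt}/{entity_id}/playlist.m3u8"
    # stream: not loaded -> revert to the MJPEG proxy endpoint
    return f"/api/camera_proxy_stream/{entity_id}"
```

Since the fmt parameter already exists, a view could in principle ask for a different format (or the MJPEG proxy) explicitly instead of relying on the failure path.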

1 Like

I may have some good news for you (and potentially others)

After digging way too long into the HA code I found a decent place to change some code.
Unfortunately it’s in the haffmpeg library, so I’m not sure how easy it will be to submit a patch and/or how long it will take to get it into a release.

But, with these changes to camera.py I believe I have killed 2 nasty birds with one stone.

  1. only ONE ffmpeg process per camera (assuming the ffmpeg options are the same) no matter how many views or sessions.
  2. no more “shearing” effects
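The process-sharing idea in point 1 boils down to a registry keyed by camera source and options. This is an illustrative sketch of the concept, not the actual haffmpeg patch.

```python
# Sketch of point 1: one ffmpeg process per (source, options) pair,
# shared by all views/sessions (illustrative, not the haffmpeg patch).

_processes = {}  # (source, options) -> process handle

def get_or_spawn(source: str, options: str, spawn):
    """Return the existing ffmpeg process for this camera/options
    combination, spawning one only on first use."""
    key = (source, options)
    if key not in _processes:
        _processes[key] = spawn(source, options)
    return _processes[key]
```

Every additional view with the same options then reuses the running process instead of opening another connection to the camera.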

I still want to do some testing, documentation and clean-up, but if you want to test an “early access” version I’ll be happy to walk you through the install.

3 Likes

Looks nice!