Hi, I tried the WebRTC addon. It works really well, now we just need someone to make a lovelace card for it!
For now, would it be possible to embed it using an iframe as described here?
The support for surveillance cameras is really a mess in HA…
I believe some of it is legacy, some of it internal limitations and, if I may say so, less-than-ideal architecture decisions. Let me go thru some of these points:
choosing MJPEG as the internal format of choice.
While this was a standard format and easy to support on the front end, most recent cameras either limit its use (like not supporting it at full resolution) or do not handle it properly due to CPU or bandwidth usage. RTSP is definitely the cornerstone of camera imaging.
choosing HLS for streaming
HLS (or even Low-Latency HLS) is great for streaming at a massive scale. A huge part of its design is to be CDN friendly. But it is very intensive, not only in terms of processing on the backend but, and I think more importantly for HA, in terms of the number of requests constantly generated to stay "up to date" with the stream. Try running HA in debug mode and you will see a deluge of "Serving /api/hls/... requests. It puts a lot of pressure on the connection pool, which HA, being in Python, is not the best at handling. And moving to Low-Latency HLS will make things even worse.
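To put rough numbers on that request deluge, here is a back-of-the-envelope sketch (the segment duration and camera count are illustrative assumptions, not measured HA values):

```python
# Illustrative sketch of HLS polling pressure; numbers are assumptions.
SEGMENT_DURATION = 2.0  # seconds per HLS segment (a typical short value)
CAMERAS = 5             # cameras shown on one dashboard

# Each player re-fetches the playlist roughly once per segment and then
# fetches the segment itself: ~2 requests per segment, per camera.
requests_per_second = CAMERAS * 2 / SEGMENT_DURATION
print(f"{requests_per_second:.1f} requests/sec, continuously, per viewer")
```

Even with these modest assumptions that is hundreds of connections per minute for a single open dashboard, which is the pressure on the connection pool described above.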
WebRTC would be IMHO a much much better protocol to focus on rather than MJPEG with or without HLS.
For a more detailed argument for WebRTC support, you might be interested in RTC from RTSP.
The good news is that there is already a fairly mature async Python WebRTC library, so there might not be a need for an add-on.
not having a video component
Not sure why the picture entity got this "dual" personality. I would strongly prefer limiting the picture entity to "static" images, potentially with a settable refresh rate, and having a separate "video" component, even if that one only supports MJPEG for now.
Bottom line, I think it's time for a large refactoring of the camera code…
I guess HLS was a straightforward choice at the time, due to widespread client side support and the developer of the stream component was probably more familiar with it.
Having a component that serves WebRTC streams from HA in the same way the stream component currently proxies the HLS through a websocket would be awesome. If I didn't suck so much at Python I would try this myself. I wish you could write HA components in NodeJS or C++
Oh and not sure if I misunderstood what you were saying in your point 2 above, but MJPEG has nothing to do with HLS. The HLS proxy in HA simply repackages the RTSP stream directly, MJPEG is not involved in that specific pipeline. MJPEG is an alternative to HLS.
I guess HLS was a straightforward choice at the time, due to widespread client side support and the developer of the stream component was probably more familiar with it.
I agree. This was NOT a bad choice and I would NOT have been able to get even close to implementing it, especially in python. I am very grateful for all the work done by many contributors, my point was trying to explain why, in the case being made in this thread, it is fundamentally, IMHO, not the best architecture.
If I didn't suck so much at Python I would try this myself. I wish you could write HA components in NodeJS or C++
Amen to that! Having spent more than 2 months putting together a fairly simple integration, I hear you! First Python project in 25+ years, I feel your pain.
This said, many of the things we are trying to address here are more general code principles and architecture, so you are more than welcome to participate.
Oh and not sure if I misunderstood what you were saying in your point 2 above, but MJPEG has nothing to do with HLS. The HLS proxy in HA simply repackages the RTSP stream directly, MJPEG is not involved in that specific pipeline. MJPEG is an alternative to HLS.
You're right, I was a bit fast (and likely confused) on the mixing of HLS and MJPEG. My apologies…
The thing is though (and not to excuse myself from anything), the pipeline is VERY confusing.
I saw a couple of comments in that thread asking for "working examples" and despite the work done by scstraus it is very hard to simply summarize the tradeoffs inherent to each configuration.
I believe this thread is a wake-up call that we badly need a good solution for real-time cameras in HA. I know I've seen "HA is NOT a replacement for a DVR" before. That's totally fine, but being able to display feeds from local cameras should not be such a hassle.
I'm still trying to familiarize myself with the community process on the dev side of things and don't want to hijack this thread… I do believe though that together, users as well as past and present developers, we can start a forward-looking process to make HA better in this particular area…
Suggestions and comments most welcome…
Kind of a stupid question, but how do you disable `stream`? It's been part of `default_config` since mid-2019…
Do you also get rid of `default_config`, and if so, would you mind sharing what you replace it with?
Oh, and one more thing…
Realized today that an FFmpeg Camera will still use HLS if `stream` is loaded.
I was thinking about proposing a change to that component so the `SUPPORT_STREAM` property could be turned off (an additional config property on FFmpeg, `support_stream`, with a default of `true` for backward compatibility), allowing MJPEG streams even in the presence of the `stream` component.
Opinions? Comments?
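To make the idea concrete, the camera config could look something like this (a sketch only: `support_stream` is the proposed option, not something the ffmpeg platform currently accepts, and the name and RTSP URL are placeholders):

```yaml
camera:
  - platform: ffmpeg
    name: front_door
    input: rtsp://192.168.1.10:554/stream1
    # Proposed (hypothetical) option: keep this camera on MJPEG even
    # when the stream component is loaded. Would default to true.
    support_stream: false
```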
Correct, I don't use `default_config:`, I just have the relevant individual options listed in my config file.
Just list whichever of the options in here that you need. In general, most of them will be included by simply creating the appropriate item elsewhere in your configuration, i.e. `input_boolean` etc.
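For reference, a partial sketch of what replacing `default_config:` can look like (the component list here is illustrative and incomplete; check the current `default_config` source for the full set):

```yaml
# Components listed individually instead of default_config:,
# so stream: can simply be left out.
config:
frontend:
history:
logbook:
mobile_app:
sun:
system_health:
# stream:  <- deliberately omitted to disable HLS
```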
For me I use almost all of them except for zeroconf, I think. `stream:` is not listed there, so are you sure it is included? Perhaps the docs are not up to date?
Yes, it's not in the doc… Not sure why, but the code for `default_config` loads the stream component if the `av` library is present.
```python
async def async_setup(hass, config):
    """Initialize default configuration."""
    if av is None:
        return True
    return await async_setup_component(hass, "stream", config)
```
I hear you regarding loading the various components by hand, but I noticed the core team keeps adding new things to it to support new features.
Really, `stream:` should be an option that we can toggle at the individual camera level. It's silly to have it as a system-wide option.
Or even on a per-view basis…
The funny thing is that there is already some code for it. I have not dug into the front-end code yet, but the picture entity makes a request to the `camera/stream` URL. The server then calls `def request_stream(hass, stream_source, *, fmt="hls", keepalive=False, options=None)` (notice the `fmt` option there, so there could be room for WebRTC…). If that call fails (it will if `stream` wasn't loaded) the view reverts to using the `/api/camera_proxy_stream/{entity_id}` URL, which delivers an MJPEG "stream".
So, unless I'm grossly mistaken, it shouldn't be that hard to have the view offer 2 "live" options, one for HLS and one for MJPEG.
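The fallback described above can be sketched roughly like this (`pick_live_url` and its `request_stream` argument are illustrative stand-ins, not the actual HA or front-end function names):

```python
# Rough sketch of the "try HLS, fall back to MJPEG" behavior.
def pick_live_url(request_stream, entity_id):
    """Return the URL a picture view could use for live video.

    `request_stream` stands in for HA's stream request: it returns an
    HLS playlist URL, or raises if the stream component isn't loaded.
    """
    try:
        # The fmt parameter is where a "webrtc" variant could slot in.
        return request_stream(fmt="hls")
    except Exception:
        # Stream not loaded: revert to the MJPEG proxy endpoint.
        return f"/api/camera_proxy_stream/{entity_id}"

# With stream available, the HLS URL wins:
hls = pick_live_url(lambda fmt: "/api/hls/abc/playlist.m3u8", "camera.front")

# Without it, we fall back to the MJPEG proxy:
def no_stream(fmt):
    raise RuntimeError("stream component not loaded")

mjpeg = pick_live_url(no_stream, "camera.front")
```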
On another, but related, topic: there must be something fishy in the HLS/av stack that stops the stream after a while (still trying to nail down the threshold, but it's in the order of a few hours).
I may have some good news for you (and potentially others)
After digging way too long into the HA code I found a decent place to change some code.
Unfortunately it's in the haffmpeg library, so I'm not sure how easy it will be to submit a patch and/or how long it will take to get it in a release.
But, with these changes to `camera.py`, I believe I have killed 2 nasty birds with one stone.
I still want to do some testing, documentation and clean-up, but if you want to test an "early access" version I'll be happy to walk you thru the install.
Looks nice!
A HUGE HUGE warning for those tempted (that certainly includes me) to disable "stream"…
I found out the hard way that Apple's WebKit has a bug that will kill you (or rather your server/cameras). It affects not only Safari but also the Home Assistant app on iOS. The bug was reported in 2006 (!!!) so I doubt it will ever get fixed.
Because of this bug, any action (like changing tabs) that hides and then redisplays a camera feed will open a new connection to the server WITHOUT closing the existing ones. With the default Home Assistant you will fork yet another `ffmpeg` process (I ended up having 54 pretty quickly…)
Even with my changes, the combined output bandwidth becomes eventually unbearable for the server.
Bottom line, AVOID MJPEG if you are an Apple user!!!
So I am in a bit of a conundrum here…
Though I am quite confident my new code for `CameraMjpeg` works (no issues after days on my dev machine), being all on Apple/iOS, I cannot really test and validate it in my "production" environment.
I see 3 options:
- a `ffmpeg2` custom camera component with the exact same configuration as the `ffmpeg` camera, bringing the new code to it. All you'd have to do to try it (beyond installing it thru HACS of course) would be to change `ffmpeg` to `ffmpeg2` in your config file… please let me know
I've been noticing similar things to this lately too. I used to open many tabs with Home Assistant in them to compare things or fire services while watching the results, etc. But I realized that recent versions of hass, since a lot of the UI changes, didn't like this and would freeze up when I did it, so I stopped. Since I stopped doing that, my cameras have been very quick to load and quite reliable compared to previously. I suspect I was also causing this to happen sometimes.
Anyhow, to answer your question, ffmpeg cameras without stream are working reasonably well for me, but I will always take something better, so I would be happy to test an mjpeg2 custom component (would be great if it was in HACS). That seems like the logical first step, but there's no reason not to do that and try to get it into core if it works better.
I would like to see how things work with only 1 FFmpeg instance, because I still suspect there's funny business happening with zombie ffmpeg processes, etc… I would definitely feel more secure if I knew there was just one instance.
Not sure I provided you much insight there, as I kind of said yes to all 3 options, but that's where I am.
Anyhow, to answer your question, ffmpeg cameras without stream are working reasonably well for me, but I will always take something better, so I would be happy to test an mjpeg2 custom component (would be great if it was in HACS). That seems like the logical first step, but there's no reason not to do that and try to get it into core if it works better.
Sounds good. Won't be able to work on it for the next few days but I'll get on it soon.
I like the idea of a custom integration.
I don't think I'm gonna bother with the whole `ConfigFlow` for this, but I might be able to add a few sensors:
How does that sound? Anything else I should think about?
Sensors would be nice, but the feature I've always wanted is what exactly went wrong when it failed to display the stream. Make verbose mode quite verbose so that we can actually figure out what went wrong when it doesn't work and potentially fix it.
I hear you, but based on what I've seen, all the failures to display (apart from image corruption/"smearing") are due to a combination of the browser and the front-end code. Firefox seems to behave the best, Safari is, as I previously described, a catastrophe, and Chrome is in general OK though you sometimes have to reload the page.
Keep also in mind that browsers limit the number of open connections to a given domain to 6. You are at the limit with 4 cameras displayed. Add a fifth one, for example, and you'll start having behavior problems.
I'll definitely put as much tracing as possible on the server side, but be aware that might not always explain what the UI ends up rendering.
Hope this makes sense.
Yes, that's useful information for sure. Explains some problems. I am a Firefox user. Generally, since I stopped opening multiple pages I'm doing quite well. It seems like I still have the problem even when I'm not viewing the tab with the camera, but I will experiment with how many tabs I can open if I don't open the camera tabs. It might help me when I'm testing.
Recently got a cheap Tapo C200 camera; no lag in the native Android app but terrible viewed in HA, which was a shame. Found that disabling "stream" makes it actually usable, but on my Pi4 that pops the CPU usage up from about 13% to 50-60% when viewing, so not really a proper solution. Which led me to this thread!
I couldn't see anything in regards to this, but is there a performance gain to feeding cameras into something like "motion" (just for general NVR capabilities anyway) and then using the "motion" integration in HA?
I don't understand what you mean by that. Is "motion" some software you want to use, or what?