Why are IP cameras so slow in the frontend?!

To be clear, I’m running MPEG4 on old hardware and I do not have any delay issues.

EDIT: I just tried holding a phone in front of the camera showing a stream with a clock overlaid on it, whilst watching the stream on my laptop; the delay is slightly over a second whilst running two simultaneous streams. I tried the same thing with the IP Webcam web front end, and the delay is about 3 seconds, so HA is faster.

This always seems to have been an issue. I ended up setting up ffserver on a Linux server and restreaming my RTSP streams through ffserver/ffmpeg in MJPEG format. I had to seriously lower the size and resolution of the streams, as well as lower the FPS to 10 or less. This works great; however, it means that I'm always playing the video stream from each camera 24/7/365. I have over 20 cameras. This is a waste of resources, as it's highly CPU-intensive and it causes a lot of additional I/O on the DVR disks as well as the Linux disk drive. I really wish my DVR and cameras supported MJPEG natively. This would make life so much easier.
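For anyone who wants to point HA at a feed like that, here is a minimal sketch of the consuming side, assuming ffserver is already publishing an MJPEG feed (the host, port, path and camera name below are placeholders):

camera:
  - platform: mjpeg
    name: front_gate
    # URL of the MJPEG feed published by ffserver/ffmpeg
    mjpeg_url: http://ffserver-host:8090/front_gate.mjpeg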

Okay, holy crap, I finally figured it out and it was so damned simple.

I was playing with MJPEG again based on your latest suggestions, but I was only getting 1 frame every 10 seconds in my lovelace card, and it wouldn't open the detail view because the streaming component was enabled and that apparently only works with h.264… So I commented out

stream:

in my configuration.yaml, and when I restarted, all my h.264 cameras were near real time… The framerate in the frontend is only maybe 0.5-1 FPS, but there's almost no lag (maybe 1-2 seconds), which is much more useful than the higher framerate… That's all it was! Just turn off stream… Combine that with

camera_view: live

in the lovelace card, and everything works well.
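Putting it together, a minimal sketch of that combination, assuming a camera entity named camera.front_door (the entity name is a placeholder):

# configuration.yaml — leave stream: commented out or removed
# stream:

# lovelace card
type: picture-entity
entity: camera.front_door
camera_view: live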

Thanks @Troon, @rafale77, @Coolie1101 for bearing with me; you've been a great help in giving me hope that this was actually solvable. Now I feel confident to move forward with using hass for video doorbells and other things. @pergola.fabio, give it a try.

Argh, okay, now I just realized why I stopped doing things this way… On my Fire tablets and iOS, the cameras don't really show up properly… They show a few frames, then disappear, and then come back… Dammit… I think I went through this before and settled on the stream component because they would at least display that way… What a bummer. Well, I guess it was worth a try to see if that had been fixed.

stream: is needed for casting cameras :frowning:

For my case it would be very useful if I could have the stream component apply only to certain cameras in my config, while the others don't use it (to get rid of the delay). If you feel the same way, please vote for my feature request here

The stream component is really one of the parts of HA that has always been very problematic for me, due to the way it works. It converts the RTSP stream into HLS on the fly. Because it needs to support low-power embedded systems, it can't transcode the stream, so it has to cut at I-Frame boundaries. It will always buffer 3 HLS segments; this is hardcoded. If you set the camera to encode an I-Frame every second, you get a minimum delay of 3 seconds (3 segments cut at I-Frame boundaries, with an I-Frame every second). The HLS decoder used by the frontend will buffer again, which will likely add more delay.
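As a back-of-the-envelope check: minimum delay ≈ buffered segments × I-Frame interval. With an I-Frame every second that is 3 × 1 s = 3 s; with an I-Frame only every 4 seconds it becomes 3 × 4 s = 12 s. Shortening the camera's I-Frame (keyframe) interval is therefore the main knob you have on the camera side.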

I don't use Lovelace but an external UI (Tileboard), so I get the streams over the HA websocket API and decode them directly with HLS.js, where I can manually change the buffer size in the code. With all that, I get around 4-second delays on my Hikvisions. That's pretty much the minimum latency you'll realistically get with the unmodified HA stream component. That is OK for me, as realtime recording is done by an external NVR anyway.

I haven't tried changing the hardcoded 3-segment buffer in the stream component. I vaguely remember from the HLS spec that a minimum of 3 segments is required by the standard, but it may work with fewer. Native camera apps don't have this problem, as they don't go the RTSP->HLS->display route but decode and display the RTSP stream directly (or use a proprietary streaming format).

Hi all: as a thank-you to all of you who inspired me to keep dicking around with my camera config, I have done a long write-up about everything I learned about the strengths and limitations of the ~10 different camera configs I ran. At this point I think I have tried pretty much every combination of camera components, with and without stream. You can see my findings here:

https://community.home-assistant.io/t/i-tried-all-the-camera-platforms-so-you-dont-have-to

Also, vote for my month of WTH post if you are annoyed with your cameras not loading and playing correctly: Why the heck don't my cameras consistently load up when I load my lovelace dashboard?

This file has been truncated. [show original] …

Hi.

Could you share all the code again? I got a 404 when I tried "show original".

Thanks

Oops, I should have updated the link when I moved my repo:

thanks dude. you’re awesome.

You didn't give the repo URL; you gave the core URL.

Anyone know why HA chose to use HLS instead of RTSP?

If I convert the RTSP stream to HLS at a faster rate than HA is able to, would you expect that I would reduce the lag by that same rate?

Do the camera events that trigger HA sensors lag as well, or would they precede the feed by the time of the lag?

RTSP is not suitable for direct viewing in a web browser while HLS is. That’s why it’s converted first.

Yes and no. The problem is not the conversion; no actual transcoding of the video stream is done. The RTSP h264 packets are simply extracted and repackaged into a different container, and the CPU load to do that is insignificant, even in a slow language like Python. The lag problem is inherent to the HLS stream format itself and the buffering it incurs. So yes, if you create the HLS faster than HA can (say, using libav in C++), the time saved will reduce the lag, but don't expect much of an improvement, because that's not the bottleneck. You will also lose the advantage of having the data proxied through HA, unless you write your own integration.

Native camera events are completely separate and not part of the stream integration.

Very interesting information. Thanks a lot!

Just updated my repo above, jumping from 0.112.5 all the way to 0.118.3. Apparently there were no breaking changes, which is pretty amazing. It appears that the speed optimizations/improvements in 0.113 changed the way HA calculates scan_intervals for image processing: either the timing is more accurate now, or HA is now sucking up a lot more resources with cameras, as I am getting warnings that my inferences are taking longer than the intervals with the same settings as before. I used to get these when I had much shorter intervals, and I had optimized the different camera streams and intervals to prevent this. I think it is rather the latter, as I am seeing my CPU and GPU utilization both double.
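For context, this is the kind of setup I mean — a sketch, assuming the TensorFlow image_processing platform (the entity name, interval, and model path are placeholders); the idea is to pick a scan_interval long enough that inference finishes before the next scan fires:

image_processing:
  - platform: tensorflow
    # seconds between inferences (placeholder); raise it if inference
    # regularly takes longer than the interval
    scan_interval: 10
    source:
      - entity_id: camera.front_door
    model:
      graph: /config/tensorflow  # path to your model directory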

Did you try out - RTSP to WebRTC:
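For anyone curious, here is a minimal sketch of a Lovelace card using the community WebRTC Camera custom component (assuming it is installed, e.g. via HACS; the stream URL is a placeholder):

type: custom:webrtc-camera
url: rtsp://user:pass@CAMERA_IP:554/stream1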

Guys, I have about 30 seconds of delay in my IP cam video. What could it be?

Camera config:

  - platform: generic
    name: labcam
    stream_source: rtsp://login:pass@CAMERA_IP:554/onvif1
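    # NOTE: still_image_url normally expects an HTTP(S) JPEG snapshot URL;
    # pointing it at the RTSP stream will not produce a usable still image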
    still_image_url: rtsp://login:pass@CAMERA_IP:554/onvif1
    rtsp_transport: udp
    authentication: digest
    username: login
    password: pass
    verify_ssl: False
    framerate: 5

lovelace:

type: picture-elements
title: LAB PTZ CAMERA
camera_image: camera.labcam
elements:
  - type: state-icon
    tap_action:
      action: more-info
    entity: camera.labcam
    icon: mdi:arrow-expand-all
    style:
      top: 5%
      right: 5%
      color: white
      opacity: 0.5
      transform: scale(1.5, 1.5)
camera_view: live

configuration.yaml:

stream:
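  # LL-HLS serves sub-second "parts" instead of whole segments, which
  # trims the multi-segment startup delay described earlier in the thread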
  ll_hls: true
  part_duration: 0.75
  segment_duration: 6
ffmpeg:
cloud:
media_source:
ptz_camera:

I have roughly 5 seconds of delay (C210 camera)… and it sucks! :-((