Why are IP cameras so slow in the frontend?!

Did you fix this issue?
I have the same problem! I also have a very powerful HA server with Ubuntu and Docker.
Sometimes a camera takes 5-15 seconds to load, or never shows up until I reload the frontend.
I tried different resolutions and protocols, but nothing helps.

I also have a problem where the camera stream sometimes freezes in a 5-second loop. The cameras work well in other apps but get stuck in the frontend. Only restarting HA brings the cameras back to a live image.

Any ideas? Thanks a lot.

Yes, I have the same problem.

For me this is also an issue. I have a Foscam C1 camera using the built-in Foscam component and a Sonoff WiFi camera via RTSP, and it takes a long time to load the streams. The live stream is also not in sync. Both cams are in great WiFi range. I tested via multiple browsers, the iPhone app, etc. Via the official apps the cameras load up right away.

I have been digging into all these camera components for the purpose of implementing image processing, and I found out what the problem is:
The way the camera components work is to use FFmpeg to grab a single frame from the RTSP stream every 10 seconds and then, immediately after getting the frame, shut the stream down. This is meant to reduce the load on the HA Python instance, and it is the right thing to do if HA is to run on a Raspberry Pi or in a Docker container.
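Conceptually, each snapshot amounts to a one-shot FFmpeg invocation like the sketch below. This is not the component's actual code; `build_snapshot_cmd` and `grab_snapshot` are hypothetical names for illustration:

```python
import subprocess

def build_snapshot_cmd(rtsp_url):
    # Roughly what a snapshot-style camera platform does: connect,
    # decode exactly one frame to stdout as an image, then exit,
    # tearing the whole RTSP session down again.
    return [
        "ffmpeg",
        "-i", rtsp_url,
        "-frames:v", "1",   # grab exactly one frame
        "-f", "image2",
        "-",                # write the image to stdout
    ]

def grab_snapshot(rtsp_url):
    # Every call re-negotiates the RTSP session from scratch, which is
    # where the multi-second delay (or occasional timeout) comes from.
    return subprocess.run(
        build_snapshot_cmd(rtsp_url), capture_output=True, check=True
    ).stdout
```

Repeating that negotiation every 10 seconds is cheap on CPU but expensive in latency, which matches the symptoms described in this thread.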
The issue is that many cameras and NVRs take time to negotiate and establish the RTSP stream. In my case even an NVR on the same machine running HA has this problem, so it is neither network latency nor a CPU limitation. There are intermittent cases of the stream failing to get established at all, leading to a timeout and therefore a blank frame, or even to the HA component considering the camera offline.
If you have a more powerful CPU, want to avoid this problem, and don't mind keeping the stream open the way an NVR does, you can change this. I have modified my FFmpeg camera component to use OpenCV instead, and boy, what a difference. Out of my 16 cams, my WiFi doorbell is now the first one to display (instantly) and never disconnects, while all my wired cams are several seconds late and some get the occasional timeout. The downside, as I said, is that it does keep your CPU busier… so you can't really do this on an SBC.
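A minimal sketch of the keep-the-stream-open idea, assuming a hypothetical `PersistentCamera` wrapper (this is not the actual modified `camera.py`):

```python
import threading

try:
    import cv2  # only needed when passing a real RTSP URL
except ImportError:
    cv2 = None

class PersistentCamera:
    """Keep one RTSP session open and always serve the newest frame."""

    def __init__(self, source):
        # `source` may be an RTSP URL string, or any object exposing
        # read() -> (ok, frame), which makes the sketch easy to test.
        self._cap = cv2.VideoCapture(source) if isinstance(source, str) else source
        self._frame = None
        self._lock = threading.Lock()
        self._running = True
        # Background reader: a dashboard request never has to wait
        # for RTSP negotiation, it just takes the latest frame.
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while self._running:
            ok, frame = self._cap.read()
            if ok:
                with self._lock:
                    self._frame = frame  # keep only the latest frame

    def latest_frame(self):
        with self._lock:
            return self._frame

    def close(self):
        self._running = False
```

The trade-off described above is visible here: the reader thread decodes continuously, so the CPU stays busy even when nobody is watching.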


Could you explain how to do this? My camera stream in Home Assistant is 10 seconds behind the stream shown in my Synology Surveillance Station. I have set keyframes to 1 per second, but nothing reduces this gap.

Having a 10 min delay may be a related but different problem.
This is caused by already having a stream open that keeps frames in a buffer but does not flush them out fast enough, so they pile up.
I have completely rewritten my FFmpeg camera component to avoid this as well:
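The buffer-draining idea can be sketched like this. `read_newest` is a hypothetical helper, not the rewritten component itself; `grab()` and `retrieve()` are the standard `cv2.VideoCapture` methods:

```python
def read_newest(cap, drain=5):
    """Discard buffered frames so the decoded frame is (near) live.

    cap:   anything with grab()/retrieve(), e.g. a cv2.VideoCapture
    drain: how many stale frames to throw away; tune to your buffer depth
    """
    for _ in range(drain):
        cap.grab()          # pull a frame off the buffer without decoding it
    return cap.retrieve()   # decode only the last grabbed frame
```

`grab()` is cheap because it skips decoding; only the final `retrieve()` pays the decode cost, which is what keeps the live view from drifting seconds behind.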

Sorry, that should have read 10 seconds. How can I try out your FFmpeg camera component?

The file I posted is the only one you need to swap out, but first you need to know where your Home Assistant installation is and find the /homeassistant/components/ffmpeg/camera.py file. Rename it to camera2.py, drop the linked one in, and you should be set.
If you don't have OpenCV installed, you may need to install it, and unfortunately there are now too many different environments for me to keep up with. In my case, running in a "bare Ubuntu VM", I just needed to run "pip install opencv-python". If I know what type of installation you have, I may be able to help you find it.

Is this a problem caused by bandwidth or resolution? I have an old phone running the IP Webcam app, which is continually recording in 15-minute chunks and then FTPing them to the same Banana Pi M1 (~RPi3 performance) that I'm running HA on, and yet it still provides a very responsive live view in the frontend:

camera:
  - platform: mjpeg
    mjpeg_url: http://192.168.xxx.xxx:xxxx/video
    name: Drive

and the Lovelace card:

cards:
  - camera_view: live
    entity: camera.drive
    type: picture-entity

This is only 640×480 video though.

It is neither. The camera component simply was not designed to show a live view on the dashboard; it is only designed to get a snapshot every 10 seconds. It will open a stream if you open the camera entity, which does consume quite a bit of resources due to FFmpeg.


I'm on Home Assistant Supervised running under Ubuntu Server, rather than Home Assistant Core on a bare OS as it sounds like you are running. Correct me if I'm wrong, but I believe the right way to do this would be to put it in the custom_components directory under the same directory name as the core component, which should override it, no? I think all the core Python files sit inside the Docker container, meaning I'd have to replace the file again with every upgrade…

What do you think?
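For reference, the override layout being described would look something like this (a sketch; exactly which files you need to copy from the core integration may vary):

```shell
# Hypothetical layout: files under /config/custom_components shadow the
# copies baked into the container, so they survive container upgrades.
mkdir -p config/custom_components/ffmpeg
# config/custom_components/ffmpeg/
# ├── __init__.py      # copied from the core ffmpeg integration
# ├── camera.py        # the modified OpenCV version
# └── manifest.json    # custom integrations must add a "version" key here
```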

That is a rather safe bet indeed. I am a bit allergic to Docker, as I find it adds more complexity and multiplicity than I want to deal with, so… unfortunately it is the one installation type I am the least familiar with.

I threw it into custom_components/ffmpeg, let’s see what happens :wink:

The problem is that I'm using the generic camera component with an RTSP stream; will this do anything in that case?

No, they should not interfere. If you do not use a GPU, make sure to edit out the line that shifts OpenCV to the GPU decoder.

Ah, good to know, no GPU.

Is this the one?

os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "hwaccel;cuvid|video_codec;h264_cuvid|vsync;0"

Do you mean to just remove the hwaccel argument, or something more?

Yes. It is the only one.

So should I leave it like this, or should I really delete the whole line? There are other arguments there; I'd assume I still need to tell it h264, no?

os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "video_codec;h264_cuvid|vsync;0"

Just comment it out. OpenCV/FFmpeg will detect the formats automatically. That line is only there to force a specific decoder.
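Putting the last few posts together, the CPU-only version is simply the line commented out (a sketch of the relevant spot in the modified file):

```python
import os

# No NVIDIA GPU: leave the capture options unset and let OpenCV/FFmpeg
# auto-detect the codec and pick a software decoder.
# os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "hwaccel;cuvid|video_codec;h264_cuvid|vsync;0"
```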