I tried all the camera platforms so you don't have to

I am so confused by this. I'm just lost as to how to set this up the way you recommend. I currently have 4 RTSP/HTTP cameras running locally that I have integrated with FFmpeg. It's not great, and I'd seriously love to improve my feeds in HassOS. Having a server of similar age/ability to yours, I think I could gain a lot from what you're trying to teach here.

I guess to get started, how does one use the ONVIF integration? I tried to just muscle through the install, no dice; I'm clearly missing some info. Any help you can give would be great. Thanks for your time and effort in testing all these platforms.

First you have to know your cameras are ONVIF compliant, or it won't work. Then you add the cameras through the Integrations page of the UI.
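
As a quick compliance check before touching the UI, something like this sketch with the onvif-zeep package will usually tell you whether the camera speaks ONVIF at all. Host, port, and credentials here are placeholders:

```python
# Hedged sketch, not HA code: probe a camera with onvif-zeep
# (pip install onvif-zeep). Host/port/credentials are placeholders.
from onvif import ONVIFCamera

# A non-compliant camera will typically refuse the connection or time out.
cam = ONVIFCamera("192.168.1.10", 80, "admin", "password")
info = cam.devicemgmt.GetDeviceInformation()
print(info.Manufacturer, info.Model, info.FirmwareVersion)
```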

Ah, I kinda figured after looking into it further. I doubt this firmware is compliant; Wyze, highly doubt it. Stuck with this then, I suppose. Other than disabling the stream component, I'm at a loss, I believe. Not that I'm incredibly bothered by it; they are only $25 cameras.

I actually re-wrote my own camera component on my own fork of Home Assistant.
I asked the HA devs if they were interested in me sending some PRs, but they had a better suggestion than my workaround, one which requires a bit more coding touching the core of HA. I have not had the time to really poke at it, and in my view it requires a lot more work and expertise than I am willing to invest at the moment.

In a nutshell, two areas of improvement:

  • FFmpeg itself pre-encodes to some format by default, which causes overhead. I have observed this in the Watsor custom component as well, so it isn't HA-specific. My solution has been to move stream management to OpenCV in its own Python thread, which keeps frame extraction in a raw format unless you actually need to make use of it. This dropped my CPU/GPU utilization by 50% (see the sketch after this list).
  • Home Assistant's camera components are designed to display things on the UI. Because doing that 24/7 for every frame would be very heavy in processing load, the default is to fetch a frame every 10 s in Lovelace. The issue is that the display requires encoding to JPEG/MJPEG. For my own application, video processing, this becomes absurd, because I actually need the pictures as a numpy array, which is what OpenCV calls "raw". So instead of doing H.265/H.264 -> raw -> MJPEG -> numpy -> processing, I added a function in the HA core that lets me do H.265/H.264 -> raw -> processing.
    This cut my CPU/GPU load by another 60%, so I have reduced my CPU load for stream decoding by ~80% overall. What Watsor decodes at a cost of 20% CPU load per stream, I am now doing for about 3.5% per stream in Home Assistant.
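
A minimal sketch of that idea, assuming the opencv-python package and an illustrative RTSP URL; the class and method names are hypothetical, not the actual fork's code:

```python
# Hedged sketch: a dedicated thread pulls frames with OpenCV and keeps
# them as raw numpy arrays; JPEG encoding only happens if/when a frame
# is actually requested for display.
import threading
import cv2

class RawStream:
    def __init__(self, source: str):
        self._cap = cv2.VideoCapture(source)
        self._lock = threading.Lock()
        self._frame = None
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        # Keep draining the stream so we always hold the latest raw frame.
        while True:
            ok, frame = self._cap.read()
            if not ok:
                continue  # transient read failure; try again
            with self._lock:
                self._frame = frame  # raw BGR numpy array, no re-encode

    def raw_frame(self):
        # H.264/H.265 -> raw -> processing: hand the numpy array straight
        # to the video-processing code, skipping the MJPEG round trip.
        with self._lock:
            return None if self._frame is None else self._frame.copy()

    def jpeg_frame(self):
        # Encode only on demand, e.g. for a UI still every 10 s.
        frame = self.raw_frame()
        if frame is None:
            return None
        ok, buf = cv2.imencode(".jpg", frame)
        return buf.tobytes() if ok else None

stream = RawStream("rtsp://user:pass@192.168.1.10:554/stream1")  # placeholder URL
```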

@VarenDerpsAround Yeah, I don't think Wyze has it. If you have an old machine, the safest bet is to use the generic camera with the stream component and live with the lag. It will at least be low CPU. The other low-CPU option is the proxy camera component.
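
For reference, a minimal sketch of that setup in configuration.yaml, in the YAML form the generic camera used at the time of this thread; URLs and credentials are placeholders:

```yaml
# Hedged sketch: generic camera plus the stream component (HLS).
stream:

camera:
  - platform: generic
    name: driveway
    still_image_url: http://192.168.1.10/snapshot.jpg
    stream_source: rtsp://user:pass@192.168.1.10:554/stream1
```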

@rafale77, I didn’t understand much of it but it sounds very promising :smiley:

Just a quick side note here. It might be worthwhile to take a look at TileBoard, a third-party dashboard for HA written entirely in JS. I use it everywhere, because it's so much more flexible than Lovelace (and nicer, but that's subjective :slightly_smiling_face:).

The important thing about TileBoard with respect to this thread is the way cameras are handled. While streams still run through the HA stream component to get them to HLS (so all the caveats mentioned in this thread apply), it entirely bypasses the Lovelace camera rendering, which is pretty inefficient for some reason. Instead it directly uses hls.js (a very popular JavaScript HLS decoder, also used by HA internally), which you can tweak to your heart's content, including changing buffer sizes. That's how I managed to get my stream lag down to less than 5 seconds.
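
For the curious, a rough sketch of the kind of hls.js tuning involved; the option names are real hls.js config keys, but the stream URL is a placeholder and the exact way TileBoard exposes these settings is up to its own config:

```ts
// Hedged sketch of hls.js buffer tuning, independent of TileBoard itself.
import Hls from "hls.js";

const video = document.querySelector("video") as HTMLVideoElement;
const hls = new Hls({
  liveSyncDurationCount: 1, // chase the live edge (default is 3 segments)
  maxBufferLength: 5,       // keep only ~5 s of forward buffer
});
hls.loadSource("https://example.local/camera/playlist.m3u8"); // placeholder
hls.attachMedia(video);
```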

And if you use camera still images, TileBoard gives you complete flexibility to refresh your camera images at whatever frequency you like. You could, for example, update your camera shots every 5 seconds by default, but switch to 1-second updates when motion is detected. Or change the update frequency at night, according to your presence status, etc.

  • Why is the ONVIF integration so much more reliable than the other integrations using the same streams? Maybe there is some technology there that we could bring over to the others?

This is because the ONVIF integration has been more recently maintained (by me) than many of the other camera integrations. Axis is rock solid as well. ONVIF is a standards-based protocol that, if implemented correctly by manufacturers, makes it a whole lot easier for end users to configure, because the system is able to automatically discover information about the camera and its capabilities, and much of that can be hidden from the end user.


After a lot of testing myself, I have decided to go with ONVIF and stream: disabled. I have the full-resolution, full-frame-rate streams recorded by my NVR and the substreams linked to Lovelace. Unfortunately, trying to have multiple FHD streams in Lovelace at once is not great, but the substreams are good. This setup has full-frame-rate feeds with no lag.


Yes, but then it's not possible anymore to expose it as a camera to Google Assistant :frowning:

True, but I only view my cameras in the HA app anyway. Enabling the stream: component still works for me at full resolution and frame rate, but causes a ~12-second lag.

As for me, while I was migrating to Debian to stay in step with the latest Supervised requirements, I also migrated to a new server with a modern i5-7500T processor and fast SSD and HDD RAIDs, so I am going with the "high CPU" option of running FFmpeg cameras.

I tried the generic cameras again without stream, but they still weren't reliable. FFmpeg, on the other hand, is actually quite stable now that it has the CPU headroom it needs. Once I got as many streams as I needed running through FFmpeg, though, I bumped up against the maximum number of open streams my cameras can handle.
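
For anyone following along, an FFmpeg camera entry looks roughly like this; the name and input URL are placeholders:

```yaml
# Hedged sketch of an FFmpeg camera in configuration.yaml.
camera:
  - platform: ffmpeg
    name: front_door
    input: rtsp://user:pass@192.168.1.11:554/stream1
```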

To solve this, I am running an RTSP proxy to merge all the streams going to hass frontends into one stream coming from the camera. As the proxy camera component wasn't giving me the reliability I wanted, I have moved over to a 3rd-party proxy running in a Docker container, called live555.
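
A sketch of how that could run under docker-compose; the image name is hypothetical (several community images wrap live555ProxyServer) and the source URL is a placeholder:

```yaml
# Hedged sketch: a docker-compose service for the live555 RTSP proxy.
services:
  rtsp-proxy:
    image: example/live555-proxy   # hypothetical image name
    command: live555ProxyServer -p 8554 rtsp://user:pass@192.168.1.11:554/stream1
    ports:
      - "8554:8554"
    restart: unless-stopped
```

Per live555's defaults, the proxied copy of the first source is then served at rtsp://<host>:8554/proxyStream, and that single upstream connection to the camera fans out to every client.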

This new solution is working very, very well. I can open more streams than I need in full 1080p resolution at 4 fps on all devices. They open very reliably and display 2-3 FPS even on my Kindle tablets with the "live" option enabled on the Lovelace cards. On better machines they look like they are running at full frame rate. Lag is about 2 seconds; the proxy added 500-700 ms, but it's worth it for rock-solid reliability in getting the streams open.

It comes at a cost: that 7th-generation i5 is running at 50-70% CPU most of the time, so I'm a bit worried that I will have to upgrade it again if I add the 3-4 more tablet dashboards around the house that I have planned. At least I bought a server with swappable CPUs, so it won't be too painful.

The only issues are that sometimes the cameras won't connect correctly on hass startup and I have to restart to get them all working; at least 0.115 reduces the number of restarts needed. Also, the time to pull them up on the UI is 4-5 seconds, which is longer than the proxy component was, but this is fairly close to the same as any of the other platforms running 1080p cameras.

Anyhow, these are things I can live with. I finally feel like I have a fairly long-term solution here. I will run it for a little longer and then add an update to the original post about what I ended up doing.

I am running everything on ESXi Xeon processors, as is my Synology (Xpenology), so hardware is not the issue here :slight_smile:
I also tried the generic/foscam/onvif/mjpeg/motioneye platforms; for me, the best results came from the RTSP links exposed by my Synology Surveillance Station.
Those were the only ones that weren't buffering on my Google Hub devices; only the 12-15 second lag is my issue.


Yes, the Synology platform and Synology RTSP streams (I think they are basically the same thing) were also very reliable for me, but I got low frame rates and only the substream, which is 4:3 aspect ratio and doesn't look nice with my portrait-orientation streams. Some people seem to get different results, so it probably has something to do with how I have things configured on the Synology side.

One of the advantages of that platform is that I believe Synology is proxying the streams for you, so you don't hit the limits on the number of streams the cameras can handle. It's similar to what I ended up doing with the live555 proxy server, except with that I was able to proxy the main stream and also do it on the same host as my hass server, so I'm not running all the traffic over my LAN from the Synology to the hass server.

Well, the Synology platform could not be used together with stream:.
I am now using a custom component that will probably be part of 0.116; the synology_dsm platform will now have cameras too, and there will be no separate platforms anymore. With this new Synology platform I have the best results, not with the ones before; they were not using RTSP.


here is the PR


There is an issue where viewing the full stream of an ONVIF camera can cause excessive CPU usage even after you close it out. I'm using the ONVIF integration with FFmpeg. I only have 2 cameras. If I use the main stream at 1080p@15fps, I can watch one, one time, and the CPU will jump to 30-40% and stay there. View the second stream and it jumps up to 80% or so and stays there. Close it out and open it again and HA becomes nearly unresponsive with 100% CPU; I have to reboot the VM at that point. This is running hassio on Win 10 with an i7. There is no way only two 1080p streams should kill a system like that. I'm watching those same streams on my 3700X box using VLC and am literally using 0.2% CPU total for VLC. Windows Task Manager uses more CPU.

I’m at a loss at this point.

I'd bet if you reduced the frame rate it wouldn't be as bad, but yes, that does sound even worse than what I experienced.

Is there a way to reduce the frame rate in the integration? I'm not going to reduce it on the camera; I want that frame rate and resolution so that if/when I need to review footage, I can see what someone is doing.

The only option for that is the camera proxy. It should definitely help you a lot with CPU usage.
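
A minimal sketch of a proxy camera entry, with illustrative entity and values; the proxy re-encodes the MJPEG output, so smaller dimensions and lower quality mean less work per client:

```yaml
# Hedged sketch of the camera proxy platform in configuration.yaml.
camera:
  - platform: proxy
    entity_id: camera.front_door
    max_stream_width: 720
    stream_quality: 75
```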

Though I never understood the purpose of high-frame-rate security cameras. I will trade a high frame rate for high image quality and low CPU, network, and disk use any day. 4 FPS is plenty to capture anything that might happen, unless you are trying to get pictures of bullets.


More frames means more chances at capturing a clear image of some minute detail like a license plate.

At my house we've had issues with hit-and-runs involving our cars parked on the street. My truck has been hit twice in the same corner, and my wife's car was hit once, less than 3 months after she bought it; I think she only had about 2K miles on it. And then my mother's SUV had its door caved in by the neighbor's boyfriend. He couldn't run away from that one; the whole neighborhood heard it. Since then I have installed cameras on the driveway and on the front of the house to catch anyone who might smash into our cars again. I want a nice shot of the plate the next time it happens. I don't care about network or hard disk usage if it means that next time I can beat someone silly for running. Hard disk space is cheap and so is Cat6; multiple insurance claims, not so much.
