I tried all the camera platforms so you don't have to

Yeah, I think it heavily depends on your workflow as well. All of my notifications are based on the following workflow, which does not require any viewing of the camera in Home Assistant:

1. ONVIF event fires for motion/sound/field detection - real time
2. Trigger a TensorFlow scan for person/vehicle detection (and save the image in www if found) - adds ~1 second
3. Send a notification to the phone with detection results and applicable actionable notifications - adds <1 second
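
For illustration, here is a minimal sketch of steps 2-3 as a HA python_script; the scanner entity, notify service, and image path are all placeholders for whatever your setup uses:

```python
# python_scripts/person_scan_notify.py
# Runs inside HA's python_script sandbox, triggered by the ONVIF event.
scanner = data.get("scanner", "image_processing.tensorflow_front")  # placeholder

# Step 2: ask the TensorFlow integration to scan the current frame (~1 s).
hass.services.call("image_processing", "scan",
                   {"entity_id": scanner}, True)

# Read back what was detected.
summary = hass.states.get(scanner).attributes.get("summary", {})

# Step 3: notify the phone only if a person or vehicle was found.
if "person" in summary or "car" in summary:
    hass.services.call("notify", "mobile_app_phone", {  # placeholder service
        "title": "Motion verified",
        "message": "Detected: " + ", ".join(summary),
        "data": {"image": "/local/tensorflow/latest.jpg"},  # saved in www/
    })
```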

So all in all, the majority of my use cases where I need verification happen in less than 2 seconds, with picture (not video) confirmation and the ability for me to trigger additional things if necessary. I have taken video out of that workflow for the most part because there is really no need for it. A still is good enough.

I do something very similar for cases where there is a potential intruder: if the alarm is on or we are sleeping and a human is detected on the property, send us an emergency notification with a photo. That already gives me a pretty good idea whether there's a problem, but if someone is there, I'd definitely want to see what they are doing at that exact moment. I can tap the notification to get to my Home Assistant dashboard with the cameras, which is useful if they are reliable and real time, but currently for these cases I usually skip it, go find my Synology app, and open that instead, as it's more reliable and closer to real time, though it adds ~5 seconds to getting at what I need. Reliable real-time cameras in hass would save me those seconds in this case.

For the kids, when someone is detected outside and we are home, it just pulls the feed up on the tablets around the house so we can see what they are doing. For this there are advantages to real time (such as preventing disasters before they happen) and to delayed video (seeing what happened after something bad happens)… The ideal would be a real-time feed with an "instant replay" button. I think this could be managed with the recorder functionality, but I haven't tried it yet. Anyhow, again, it's easier to add delay than to take it away.

For package delivery it's a bit trickier to do with object detection, since I will have to monitor the street for objects, which means it will pick up passersby too, but I may give it a shot relatively soon and see if I can somehow tune it to be useful.

But I'd still like that combination Home Assistant dashboard and video intercom… I don't really want to add a dedicated video intercom if I can have both.

I did. It didn't work very well. The main problem was that I'm using two streams per camera: the lower-res substream is used for viewing, especially remotely, but when something happens, like an intrusion event firing, I want to record the main 4K stream. The problem here is that the stream component first has to open the stream before it can record it, and that takes far too long to be useful. And I don't want multiple 4K streams running 24/7. I also never got the lookback feature working when calling the record service, even with the stream already open.
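
For reference, this is the kind of call I mean: the stream component's camera.record service with a lookback window. A minimal python_script sketch, with a placeholder entity and path (the path must be allowed via allowlist_external_dirs):

```python
# Ask the stream component to record the main stream, including up to
# 10 seconds of lookback from the already-buffered HLS segments.
filename = "/config/recordings/driveway_{}.mp4".format(
    dt_util.now().strftime("%Y%m%d_%H%M%S"))

hass.services.call("camera", "record", {
    "entity_id": "camera.driveway_main",  # placeholder entity
    "filename": filename,
    "duration": 30,   # seconds to record after the call
    "lookback": 10,   # seconds to include from before the call
}, True)
```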

So after a lot of frustrating days trying to make this work, I decided to buy an external hardware NVR instead (a DS-7608NI-I2/8P). I control it from HA over its REST API, where I can start and stop stream recording, access recorded footage, search for past events, etc. It has a well-working pre-record feature that gives me up to 10 seconds of video from before the event occurred. I wrote some connector code that lets a HA automation pull recorded footage from the NVR for instant playback. I also use that feature to automatically create three still shots (one a second before the event occurred, one on the event, and one 2 seconds after) and add them to a timeline I can view remotely.
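
The connector code itself isn't much; here is a rough sketch of the idea against a Hikvision-style ISAPI endpoint. The address, credentials, and paths are placeholders, and the exact URLs vary by model and firmware:

```python
import requests
from requests.auth import HTTPDigestAuth

NVR = "http://192.168.1.10"                 # placeholder NVR address
AUTH = HTTPDigestAuth("apiuser", "secret")  # placeholder credentials

def snapshot(channel: int = 1) -> bytes:
    """Pull a still from the NVR (channel 101 = camera 1, main stream)."""
    url = "{}/ISAPI/Streaming/channels/{}01/picture".format(NVR, channel)
    resp = requests.get(url, auth=AUTH, timeout=5)
    resp.raise_for_status()
    return resp.content

# Example: save a still into www/ so HA can serve it on the timeline.
with open("/config/www/timeline/event.jpg", "wb") as f:
    f.write(snapshot(channel=1))
```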

That said, I have to agree with @hunterjm about live viewing. I managed to get my delay down to around 4 to 5 seconds, and for me that's fine. I don't actually view live feeds all that often; I mostly use the timeline or the instant-playback feature when an event was triggered because someone entered the yard or drove up the driveway or something like that.

Xiaomi home 360, anybody?
Thx

Please do not spam this thread. You are off topic here. Either use the search functionality or start a new thread.

I'd like to post one more thing in this thread just to finish up my general thoughts on this topic.

I think we've discussed the stream component a lot: it is getting plenty of attention and is really doing about the best it can within the limits of HLS, so until we get LL-HLS support in browsers, we probably can't expect much better there. It is great for people who want low CPU usage and don't mind some lag. We also discussed other protocols; it would be great to see those, and indeed I think some use cases like video doorbell/intercom are really only possible with SIP or WebRTC, but I acknowledge that is a lot of work. We also discussed merging the components into something a bit less fragmented and better documented. Also likely a lot of work.

What I think we haven't discussed enough are some potentially simple things that could be done with the existing non-stream camera platforms for people who need lower latency. Many of these platforms feel half-finished and could be improved relatively easily compared to the other things we discussed.

  • Do we really need one FFMPEG process for every browser that accesses each camera? I.e., if we have 3 browsers open to 4 cameras, do we really need 12 FFMPEG processes to handle that? Couldn't we create the MJPEG stream once and send it to all clients? That would reduce CPU and also open fewer streams to the camera. (A rough sketch of this fan-out idea follows the list.)
  • What about adding hardware acceleration? I moved to an Intel i5-7500 and was surprised that FFMPEG still took about 20% of a core, when it took maybe 25% of a core on the old Core 2 Duo. With FFMPEG hardware acceleration, I think I'd get a lot more out of this newer processor than I am. As it is, it's really only the extra 2 cores that are helping out, not the better GPU capabilities.
  • If the camera fails to load in the UI, maybe we could get an error message there that tells us what went wrong?
  • It would be good to know in which cases each integration opens new streams to the camera and in which cases it uses the fixed image URL. It's often hard to tell what resources I'm consuming depending on how I have things configured. I also don't know how many times it goes back and opens the same stream to the camera.
  • @rafale77 mentioned he was able to make the ffmpeg streams work faster by fixing the way they handle buffering. Maybe that would be useful to add as an option in the official integration?
  • Why is the ONVIF integration so much more reliable than the other integrations using the same streams? Maybe there is some technology there that we could bring over to the others?
  • It would be good to have control over when the streams connect and disconnect. For example, with the proxy cameras I'd like to tell them to always leave the connection to the camera open and reconnect if they lose it, so that they are ready to go when needed.
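
On the first point, here is a minimal sketch of what a shared MJPEG fan-out could look like: one ffmpeg process per camera, relayed to any number of HTTP clients. The names and RTSP URL are made up, and real code would need reconnect handling:

```python
import asyncio
from aiohttp import web

RTSP_URL = "rtsp://camera.local/stream"  # placeholder camera URL
clients = set()  # one asyncio.Queue per connected browser

async def pump(app):
    """Run a single ffmpeg, split its MJPEG output into JPEG frames,
    and push each frame to every connected client."""
    proc = await asyncio.create_subprocess_exec(
        "ffmpeg", "-i", RTSP_URL, "-f", "mjpeg", "-q:v", "5", "pipe:1",
        stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.DEVNULL)
    buf = b""
    while chunk := await proc.stdout.read(65536):
        buf += chunk
        # Naive JPEG framing: split on start/end-of-image markers.
        while (start := buf.find(b"\xff\xd8")) != -1 and \
              (end := buf.find(b"\xff\xd9", start + 2)) != -1:
            frame, buf = buf[start:end + 2], buf[end + 2:]
            part = (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                    + frame + b"\r\n")
            for q in clients:
                if q.qsize() < 10:  # drop frames for slow clients
                    q.put_nowait(part)

async def mjpeg(request):
    resp = web.StreamResponse(headers={
        "Content-Type": "multipart/x-mixed-replace; boundary=frame"})
    await resp.prepare(request)
    q = asyncio.Queue()
    clients.add(q)
    try:
        while True:
            await resp.write(await q.get())
    except ConnectionResetError:
        pass  # client went away
    finally:
        clients.discard(q)
    return resp

async def start_pump(app):
    asyncio.create_task(pump(app))

app = web.Application()
app.add_routes([web.get("/mjpeg", mjpeg)])
app.on_startup.append(start_pump)
web.run_app(app, port=8081)
```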

Just a few ideas; I will think of more about an hour after I press send on this. I wish my programming skills were up to taking this on myself, but unfortunately I'm still riding the trike and just getting basic Python scripts going.

I am so confused by this. I'm just lost as to how to set this up as recommended by you. I currently have 4 RTSP/HTTP cameras running locally that I have integrated with ffmpeg. It's not great; I would seriously love to improve my feeds in HassOS. And having a similar server age/ability to yours, I think I could gain a lot from learning what you're trying to teach here.

I guess to get started, how does one use the ONVIF integration? I tried to just muscle through the install of it, no dice; I'm missing some info. Any help you can give would be great. Thanks for your time and effort in testing all these platforms.

First you have to know your cameras are ONVIF compliant, or it won't work. Then you have to use the Integrations page of the UI to add the cameras.

Ah, I kinda figured after looking into it further. I doubt this firmware is compliant; Wyze, highly doubt it. Stuck with this then, I suppose. Other than disabling the stream component, I'm at a loss, I believe. Not that I'm incredibly bothered by it; they are only $25 cameras.

I actually re-wrote my own camera component on my own fork of Home Assistant.
I asked the HA devs if they were interested in me sending some PRs, but they had a better suggestion than my workaround, one that requires a bit more coding touching the core of HA. I have not had the time to really poke at it, and in my view it requires a lot more work and expertise than I am willing to invest at the moment.

In a nutshell, 2 areas of improvement:

  • FFMPEG itself pre-encodes by default to some format, which causes overhead. I have observed this in the Watsor custom component as well, so it isn't HA specific. My solution has been to move to OpenCV management of the stream in its own Python thread, which keeps the frame extraction in a raw format unless you need to make use of it. This dropped the CPU/GPU utilization by 50% (a minimal sketch of the idea follows the list).
  • Home Assistant's camera components are designed to display stuff on the UI. Because doing that 24/7 for every frame would be very heavy in processing load, the default is to fetch a frame every 10 s in Lovelace. The issue is that the display requires encoding to JPEG/MJPEG. For my own application, video processing, this becomes absurd, because I actually need the pictures in a numpy array, which is what OpenCV calls "raw". So instead of doing H.265/H.264 -> raw -> MJPEG -> numpy -> processing, I added a function in the HA core that lets me do H.265/H.264 -> raw -> processing.
    This cut down my CPU/GPU load by another 60%, so I have reduced my CPU load by ~80% for stream decoding. What Watsor decodes at a cost of 20% CPU load per stream, I am now doing for about 3.5% per stream in Home Assistant.
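
Not @rafale77's actual code, but a minimal sketch of the first idea, with made-up names and URL: decode once in a background thread, hand the raw numpy frames straight to processing, and only encode to JPEG when the UI actually asks:

```python
import threading

import cv2

class RawStream:
    """Keep the latest decoded frame as a raw numpy array."""

    def __init__(self, url):
        self._cap = cv2.VideoCapture(url)
        self._frame = None
        self._lock = threading.Lock()
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while True:
            ok, frame = self._cap.read()  # H.264/H.265 -> raw BGR array
            if ok:
                with self._lock:
                    self._frame = frame

    def raw(self):
        """For processing: no re-encode, straight to the detector."""
        with self._lock:
            return None if self._frame is None else self._frame.copy()

    def jpeg(self):
        """For the UI only: encode on demand, not for every frame."""
        frame = self.raw()
        if frame is None:
            return None
        ok, buf = cv2.imencode(".jpg", frame)
        return buf.tobytes() if ok else None

stream = RawStream("rtsp://camera.local/main")  # placeholder URL
```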

@VarenDerpsAround Yeah, I don't think Wyze has it. If you have an old machine, the safest bet is to use the generic camera with the stream component and live with the lag. It will at least be low CPU. The other low-CPU option is the proxy camera component.

@rafale77, I didn't understand much of it, but it sounds very promising :smiley:

Just a quick sidenote here. It might be worthwhile to take a look at TileBoard, which is a third-party dashboard for HA written entirely in JS. I use it everywhere, because it's so much more flexible than Lovelace (and nicer, but that's subjective :slightly_smiling_face:).

The important thing about TileBoard with respect to this thread is the way cameras are handled. While streams still run over the HA stream component to get them to HLS (and all the caveats mentioned in this thread apply), it entirely bypasses the Lovelace camera rendering, which is pretty inefficient for some reason. Instead it directly uses hls.js (a very popular JavaScript HLS decoder, also used by HA internally), which you can tweak to your heart's content, including changing buffer sizes. That's how I managed to get my stream lag down to less than 5 seconds.

And if you use camera still images, TileBoard gives you complete flexibility to refresh your camera images at whatever frequency you like. You could, for example, update your camera shots every 5 seconds by default, but switch to 1-second updates when motion is detected. Or change the update frequency at night, according to your presence status, etc.

  • Why is the ONVIF integration so much more reliable than the other integrations using the same streams? Maybe there is some technology there that we could bring over to the others?

This is because the ONVIF integration has been more recently maintained (by me) than many of the other camera integrations. Axis is rock solid as well. ONVIF is a standards-based protocol that, if implemented correctly by manufacturers, makes it a whole lot easier for end users to configure, because the system is able to automatically discover information about the camera and its capabilities, and much of that can be hidden from the end user.
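
To illustrate what that discovery buys you, here is a small sketch using the onvif-zeep Python library (host and credentials are placeholders); it pulls the device info and the RTSP URI for each media profile, which is roughly the information the integration gathers automatically:

```python
from onvif import ONVIFCamera  # pip install onvif-zeep

# Placeholder host, port, and credentials.
cam = ONVIFCamera("192.168.1.64", 80, "user", "password")

# Basic device identity, discovered over the standard ONVIF service.
info = cam.devicemgmt.GetDeviceInformation()
print(info.Manufacturer, info.Model, info.FirmwareVersion)

# Enumerate media profiles and ask for each one's RTSP stream URI.
media = cam.create_media_service()
for profile in media.GetProfiles():
    req = media.create_type("GetStreamUri")
    req.ProfileToken = profile.token
    req.StreamSetup = {"Stream": "RTP-Unicast",
                       "Transport": {"Protocol": "RTSP"}}
    print(profile.Name, media.GetStreamUri(req).Uri)
```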

After a lot of testing myself, I have decided to go with ONVIF and stream: disabled. I have the full-resolution, full-frame-rate streams recorded by my NVR and the substreams linked to Lovelace. Unfortunately, having multiple FHD streams in Lovelace at once is not great, but the substreams are good. This setup gives full-frame-rate feeds with no lag.

Yes, but it's not possible anymore to expose it as a camera to Google Assistant :frowning:

True, but I only view my cameras in the HA app anyway. Enabling the stream: component still works for me at full res and frame rate, but it causes a ~12-second lag.

As for me, while I was migrating to Debian to stay in step with the latest Supervised requirements, I also migrated to a new server with a modern i5-7500T processor and fast SSD and HDD RAIDs, and so am going with the "high CPU" option of running FFMPEG cameras.

I tried the generic cameras again without stream, but they still weren't reliable. FFMPEG, however, is actually quite stable now that it has the CPU headroom it needs. Once I was able to get as many streams as I needed supported by FFMPEG, though, I bumped up against the maximum number of open streams my cameras can handle.

To solve this, I am running an RTSP proxy to merge all the streams to hass frontends into one stream coming from the camera. As the proxy camera component wasn't giving me the reliability I wanted, I have moved over to a 3rd-party proxy running in a Docker container, live555.

This new solution is working very, very well… I can open more streams than I need in full 1080p resolution at 4 fps on all devices. They open very reliably and display 2-3 FPS even on my Kindle tablets with the "live" option enabled on the Lovelace cards. On better machines they look like they are running at full frame rate. Lag is about 2 seconds; the proxy added 500-700 ms, but it's worth it for rock-solid reliability in getting the streams open.

It comes at a cost: that 7th-generation i5 is running at 50-70% CPU most of the time, so I'm a bit worried that I will have to upgrade it again if I add the 3-4 more tablet dashboards around the house that I have planned. But at least I bought a server with swappable CPUs, so it won't be too painful.

The only issues are that sometimes the cameras won't connect correctly on hass startup and I have to restart again to get them all working; at least 0.115 reduced the number of restarts needed. Also, the time to get them to pull up on the UI is 4-5 seconds, which is longer than the proxy component was, but fairly close to the same as any of the other platforms running 1080p cameras.

Anyhow, these are things I can live with. I finally feel like I have a fairly long-term solution here. I will run it for a little longer and then put an update in the original post as to what I ended up doing.

I am running everything on ESXi with Xeon processors, including my Synology (Xpenology), so hardware is not the issue here :slight_smile:
I also tried the generic/foscam/onvif/mjpeg/motioneye platforms; for me, the best results came from the RTSP links exposed by my Synology Surveillance Station.
Those were the only ones that weren't buffering on my Google Hub devices; the only issue is the 12-15 second lag.

Yes, the synology platform and Synology RTSP streams (I think they are basically the same thing) were also very reliable for me, but I got low frame rates and only the substream, which is 4:3 aspect ratio and doesn't look nice with my portrait-orientation streams. Some people seem to get different results, so it probably has something to do with how I have things configured on the Synology side.

One of the advantages of that platform is that I believe Synology is proxying the streams for you, so you don't hit the limits on the number of streams the cameras can handle. That's similar to what I ended up doing with the live555 proxy server, except that with live555 I was able to proxy the main stream and also run it on the same host as my hass server, so I'm not running all the traffic over my LAN from the Synology to the hass server.

Well, the synology platform was not usable together with stream:.
I am now using a custom component that will probably be part of 0.116; the synology_dsm platform will now have cameras too, and there will be no separate platforms anymore… With this new synology platform I have the best results; the earlier ones were not using RTSP.
