I tried all the camera platforms so you don't have to

Yep, have that set as well. I’ve tried using TCP, UDP, and UDP multicast and nothing has made a difference. Oh well…

Test it with VLC and see if there’s any difference. If VLC isn’t low lag then you are hosed. Try setting the camera to UDP rather than TCP and to send an I-frame (keyframe) every second.

Would you by any chance have some details on how you set up live555? How do you define all the cameras in the live555 Docker container?

Thanks !

Here’s the docker compose. It hasn’t been quite as perfect as it seemed at first. It will run solid for a day or two, but then one or more cameras will drop off for 10–15 minutes and come back… So still not perfect, but the quality is amazing, and I haven’t found any perfect solution, so I’m staying with this one for now.

version: '2'
services:
  proxy:
    image: migoller/live555proxyserverdocker
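    # One RTSP source URL per camera; the live555 proxy re-serves them as
    # rtsp://<proxy-host>:554/proxyStream, /proxyStream-2, and so on (-v is verbose logging).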
    command: -v "rtsp://uname:password@CAMERA1_IP:554/Streaming/Channels/101?transportmode=unicast" "rtsp://uname:password@CAMERA2_IP:554/Streaming/Channels/101?transportmode=unicast" "rtsp://uname:password@CAMERA3_IP:554/Streaming/Channels/101?transportmode=unicast" "rtsp://uname:password@CAMERA4_IP:554/Streaming/Channels/101?transportmode=unicast"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    ports:
      - "554:554"
      - "8080:80"

Hi,

I am wondering if the enhancement to synology_dsm will allow the camera streams to be shown on Google Assistant, as I have been having issues trying to integrate my Synology Surveillance Station cameras into Google Assistant. I know I can view the streams/entities in HA, but getting them into Google Assistant hasn’t been fruitful. The only reason I want to view my streams on Google Assistant is so that I can see them on my Nest Hub Max display.

If there is another way to get the camera feed from Surveillance Station via HA onto my Nest Hub Max display, I’m all ears.

Thanks

What about cast? 0.99: Withings, Device Automations, launch Home Assistant Cast from Python. - Home Assistant

I use the RTSP streams from Synology to display on the Google Hub, works fine.

Hi,

still being a newb to HA, the part about how to cast to my Nest Hub Max is eluding me, as the documentation below indicates adding a cast entity row to the UI. When I do choose the entity card, the information doesn’t quite line up on how to cast from there.

If you have some pointers I’m all ears.

thanks

Hi,

I will give that a try then, thanks for the info.

In the future, based on the link that you provided way earlier about the development of the synology_dsm package, do you think you’ll still need RTSP streams, or can the package automatically help integrate it better with Google Assistant?

Thanks

Call the service cast.show_lovelace_view
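For anyone wanting a concrete starting point, here is a rough sketch of triggering that service from outside HA through the REST API; the host, long-lived access token, media player entity and view path are all placeholders for your own setup, and the same data can equally go into a script or automation action.

import requests

HA_URL = "http://homeassistant.local:8123"   # placeholder: your Home Assistant address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # created under your HA user profile

# Ask the cast integration to show a Lovelace view on a Google cast device.
response = requests.post(
    f"{HA_URL}/api/services/cast/show_lovelace_view",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "entity_id": "media_player.nest_hub_max",  # placeholder: your Nest Hub Max
        "view_path": "cameras",                    # placeholder: path of the view to cast
    },
    timeout=10,
)
response.raise_for_status()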

@scstraus
Thanks for your thread, it has been helpful and good to see someone pointing this stuff out. I developed the camera support in openHAB, and before doing more development and learning the soon-to-be-released openHAB V3, I thought it was time I checked out HA and what it has to offer, before I lock myself into a platform long term by doing development for my DIY devices. Some comments follow, as my findings agree with what you posted…

There is licensing and money involved in some areas of this, and it is a complex topic; what is easy for a commercial project is not so easy for an open-source one.

Yes, that is my biggest issue with HA: it is a total mess, and this thread was helpful, but still having to hunt for info on each of the fragmented methods is not making me like HA. Each component’s documentation should have a working, current example that you can follow; sadly that is not the case. Even the description of what each component does is not clear and concise to me, and that is coming from someone who understands the protocols at a low level. Perhaps you can fix the docs up?

openHAB does this and it is a good idea. I can have one single open stream from a camera going to 6 different tablets/phones with <1 sec delay behind real time and <10% load on a Pi 4. The best thing about what you’re suggesting is that you don’t have to wait for FFmpeg to get up to speed; the extra devices load the stream instantly.

That can be done by compiling ffmpeg with hardware acceleration turned on. However, sometimes the software method gives better results than the hardware-acceleration alternative. The other way to approach this is to buy cameras that can do the job on board. Funny idea, but sometimes paying more for a camera is better value in the long run.

Gets my vote. My main Hikvision cameras fail to work, yet there are no reasons in the logs. I’m sure this is solvable, I just have no starting point as the logs don’t show a thing. EDIT: Solved. It was my main stream being set to 4K resolution that it did not like; I enabled the 3rd stream and now it works.

You just described openHAB, only you don’t need to switch; you can simply ask for what you want at any point and it is given to you, and in most cases it is only produced on demand and does not need to be left running. I like the HA user interface and how quick and easy it is to get a camera added to Lovelace, but that is a one-time setup, and to me it is how things run under the hood after they are set up that counts. The HA ONVIF component is great and I look forward to seeing what future updates bring.

Hi Pergola,

I just configured my HA to do RTSP camera feeds, and even got it to show up on my Nest Hub Max, except the problem I am now facing is that the interface is showing this message on the Nest Hub Max:

“error: unable to connect to the home assistant websocket api”

Do you have any thoughts on why this could be the case?

Thanks!

No idea, mine just works, using Nabu Casa here.
Maybe something with your internal/external URL?

I do try to contribute to the docs where I can. I haven’t found a case where they didn’t have the configuration example for cameras (because I usually fix those when I see them); if you have a specific one, let me know. As for many of the other things in this thread, I’m not sure they belong in the docs, because honestly I don’t understand what’s happening behind the scenes well enough to make a definitive statement as to how these things work. I am just extrapolating based on the testing I’ve done and sharing my hypotheses here. I’d love to get confirmation from the devs on some of these suspicions, and then I’d feel comfortable adding them to the docs.

Oh man, now I’m really getting jealous. Can we just drop the code in from openHAB?

Yes it can, but unfortunately most of us are running supervised in docker, so even if we do go through the considerable work of compiling for our hardware, it will be overwritten every time we upgrade. I would like to see some way we could enable this as an option within supervisor and have it handle that part.

Maybe an add-on could provide a custom-compiled ffmpeg?

In terms of actual video streaming (i.e. not MJPEG with high refresh rates), on basically everything other than iPhones (and older pre-iPadOS iPads), it is not hard to do better than normal HLS. The trick is the concept that LL-HLS calls partial segments, and CMAF calls chunks. These are basically much smaller fragments (moof+mdat) that don’t necessarily start with or contain a keyframe. Chunks can be as small as one frame’s worth of samples. For LL-HLS you want them bigger, since one HTTP request is required per LL-HLS “partial segment” (unless doing fancy things with byte ranges). Officially, chunks/“partial segments” are required to consist of a subset of the video frames from a full-sized segment.

I’ve done some experimentation using a slightly modified version of the stream component, set up to generate chunks of 100ms in duration (or at least one frame of video, for streams slower than 10 fps) and send them to the browser over websockets. The browser uses Media Source Extensions to pass this into a video tag (which is why iPhones and older iPads won’t work, since Apple deliberately disabled MSE on those devices). Using that, I was able to get latency in a browser that is lower than using VLC to watch the RTSP stream with default settings (by default VLC uses a 1-second buffer). Under this technique, latency is also independent of keyframe interval, which only influences how long it takes to load the stream.

My experimentation was only with 1 camera at a time, and the code I used is not really suitable for merging into the stream component, since I took the easy path of breaking HLS support while testing.
To avoid breaking HLS support, I would need to create both chunks and segments, which is needed for LL-HLS anyway. Per the new HLS spec, it is legal for a segment to consist of concatenated chunks [0], so this is not particularly difficult.

To do this right, we would just need to render smaller segments (which we label as chunks) based only on time (ignoring keyframes). On top of that, track when we would want a new full segment to begin. At that point, as soon as the next keyframe is seen, force a new chunk to start, even if it is “too soon” per the 100ms timeframe. Keep track of which chunks belong to a complete segment.

When requesting a full segment via HLS, just serve up the concatenation of the chunks that occurred in that segment (without the “initialization section” of each chunk of course). Later when LL-HLS support is added, the chunks would become the “partial segments”.
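To make the bookkeeping concrete, here is a rough Python sketch of that chunk/segment logic. This is not the stream component’s actual code: render_chunk is a hypothetical stand-in for whatever muxes the buffered frames into a moof+mdat fragment, and the two durations are assumed targets.

CHUNK_DURATION = 0.1     # assumed target chunk length (seconds)
SEGMENT_DURATION = 2.0   # assumed target full-segment length (seconds)


class ChunkedSegmenter:
    def __init__(self):
        self.pending_frames = []       # frames for the chunk being built
        self.chunk_start = None        # pts at which the current chunk began
        self.segment_start = None      # pts at which the current segment began
        self.current_segment = []      # chunks belonging to the segment in progress
        self.completed_segments = []   # one list of chunks per finished segment

    def add_frame(self, pts, is_keyframe, frame):
        if self.chunk_start is None:
            self.chunk_start = pts
            self.segment_start = pts

        segment_due = pts - self.segment_start >= SEGMENT_DURATION
        if segment_due and is_keyframe:
            # Force a new chunk at the keyframe, even if the current chunk is
            # shorter than CHUNK_DURATION, and close out the full segment.
            self._finish_chunk(pts)
            self.completed_segments.append(self.current_segment)
            self.current_segment = []
            self.segment_start = pts
        elif pts - self.chunk_start >= CHUNK_DURATION:
            # Normal case: cut chunks purely on time, ignoring keyframes.
            self._finish_chunk(pts)

        self.pending_frames.append(frame)

    def _finish_chunk(self, pts):
        if self.pending_frames:
            chunk = render_chunk(self.pending_frames)  # hypothetical muxing step -> bytes
            self.current_segment.append(chunk)
            self.pending_frames = []
        self.chunk_start = pts

    def segment_bytes(self, index):
        # An HLS segment is just the concatenation of its chunks (the chunks here
        # are assumed to be bare moof+mdat, with no per-chunk initialization section).
        return b"".join(self.completed_segments[index])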

For a low latency websocket connection, we would simply push chunks as they are generated. The first chunk pushed would include the initialization segment, all others would omit it.
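As a rough illustration of that push model, a sketch of the websocket side using aiohttp (which HA itself is built on); init_segment and chunk_queue are placeholders for the initialization section and a per-client asyncio queue of rendered chunks, and the route path is made up.

from aiohttp import web


async def camera_ws(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    # The first message carries the initialization section (ftyp + moov) so the
    # browser can set up its Media Source Extensions SourceBuffer; every message
    # after that is a single moof+mdat chunk, pushed as soon as it is rendered.
    await ws.send_bytes(init_segment)          # placeholder bytes object
    while not ws.closed:
        chunk = await chunk_queue.get()        # placeholder asyncio.Queue of chunk bytes
        await ws.send_bytes(chunk)
    return ws


app = web.Application()
app.add_routes([web.get("/api/camera_chunks", camera_ws)])
# web.run_app(app, port=8124)  # run standalone for testing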

Footnotes:
[0] If one is using sidx boxes, a top level index for the whole segment really ought to be made that points to the chunk level indexes, even though it is not strictly required.

Any chance to try it as a custom component?

Not currently. It is really hacky code at the moment, and I was using a custom web page as a front-end, since Lovelace does not have support for this. This was more a test to see if low latency without LL-HLS was even possible, at least on non-iPhones, and it seems like the answer is yes.

If I find myself with some extra time soon, I might try to make a slightly more polished version that could be testable as a custom component. The front-end part is trickier though.

A million thumbs up for the ONVIF integration. It’s the only platform which reliably integrates HA with my Lorex NVR and cameras (unfortunately purchased before I got into HA, or I probably would have taken a different path). My current limitations lie entirely with my lowly RPi 3.

Yes, that was my conclusion when I was trying out Low Latency HLS and DASH: creating the streams is the easy part. The players (Google/Chromecast and browsers) still want to buffer the stream, and this adds the same amount of lag/delay, so really all you achieve is the same result but with a lot more HTTP traffic and worse compatibility.
It was probably more than a year ago that I tested it, so it may have changed.

No, but it is not difficult to implement (if it is not already possible). You can also set up a video server like Blue Iris to do the job for you. I really don’t like the idea of using a Pi that is expected to run your automations and give a snappy UI to also generate multiple MJPEG streams, unless it is for only occasional use and only a single stream.

Thanks for getting them added, I have visited a few times over the years and it seems to be a lot better now.
How do you get PTZ working? I do not see a single example in the ONVIF platform docs that shows what to put in configuration.yaml or the steps to do it via the UI. Sure, I see it being mentioned, but zero info on where to head to learn how to get it working. I’m slowly working HA out in general, but it would be nice to have a ‘how to get a camera working’ page that goes over all the platforms, when to use X over Y, how to get PTZ, how to cast, and a few other common tasks. This is why your thread was great timing, as it covered the different platforms, some of which appear to be overlapping in design.
I suspect some of my issues come from not understanding the basics and there being a gap between the beginner’s guide and being left trying to set up YAML-based platforms.