RTSP video streaming: ffmpeg vs generic camera?


I’m trying to understand which is the best lightweight solution for video streaming…

So far I have found two options:

A) using the ffmpeg integration by adding the code below to configuration.yaml (under the `camera:` key):

   camera:
     - platform: ffmpeg
       input: rtsp://......

B) using the generic camera integration by adding the code below to configuration.yaml (under the `camera:` key):

   camera:
     - platform: generic
       stream_source: rtsp://.....

My use case: Baby monitor for the crib so it should be capable of:

  • motion detection
  • noise detection
  • lightweight on hardware (running Home Assistant on a Raspberry Pi 3B+)

I don’t understand the difference between the two options and which one I should implement…

Any input here? Please

Thank you all

You’re using single quotes instead of backticks to try to format your post.

Fixed… I was typing from mobile… cheers


There is a great post on this here; I tried all the camera platforms so you don't have to

Thank you… I went through, but my technical knowledge is not up to that article :frowning:

Lots of info but I still cannot get my head around…

Considering my two options, which one would you pick and why? :wink:

I’m looking at simple decision criteria like:

  • CPU usage on the Raspberry Pi 3B+
  • accessing the video stream from multiple Home Assistant devices in parallel
  • enabling motion detection
  • enabling noise detection
  • max number of cameras: 3

Any good folk that can be patient with me? Please

Use generic camera with stream: enabled in your configuration.yaml.
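A minimal sketch of what that could look like in configuration.yaml (the RTSP URL is just a placeholder — use your camera’s):

   # Enable the stream integration (HLS streaming for cameras)
   stream:

   # Generic camera pointing at the RTSP feed
   camera:
     - platform: generic
       stream_source: rtsp://...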


The code below is all I need to put in my configuration.yaml?

   - platform: generic
     stream_source: rtsp://.....

if yes, how do I use the camera for a motion sensor and noise sensor? (if I am not mistaken, ffmpeg allows this)
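From what I can tell from the docs, the ffmpeg integration exposes motion and noise detection as binary sensor platforms. A rough, untested sketch of what that might look like in configuration.yaml (the RTSP URL is a placeholder):

   # ffmpeg-based motion and noise detection (CPU-heavy on a Pi)
   binary_sensor:
     - platform: ffmpeg_motion
       input: rtsp://...
     - platform: ffmpeg_noise
       input: rtsp://...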

thank you

Yeah, that will do it. If it has an MJPEG source, put that as your still_image_url, as it can improve loading times on the camera. And be sure to put

   stream:

in your configuration.yaml. I’ve never used the motion sensor or noise sensor. ffmpeg will be too much for your CPU, I think. If it were me, I’d get another box and run Frigate for that sort of thing, but if you used Frigate only for motion, you might get away with it on your CPU. The transcoding ffmpeg does for RTSP camera streams will kill a Pi quickly though. I’ve got 4 cameras and it maxes out my i5, which is many times more powerful.


What does this do? In the documentation I only find stream_source — is it the same, or something different?

Thank you for the tip… I would love to run Frigate, but it seems impossible to get a Google Coral :frowning:

First search hit for the google search “stream homeassistant”:


Thank you for this… I stupidly thought it was a property/parameter of the generic camera

If I look at the parameters, I am still unsure how these can change my “experience” (forgive me, this is like a completely new language for me…)

what do you recommend to change and why?

**ll_hls boolean** (optional, default: true)
Allows disabling Low Latency HLS (LL-HLS)

**segment_duration float** (optional, default: 6)
The duration of each HLS segment, in seconds (between 2 and 10)

**part_duration float** (optional, default: 1)
The duration of each part within a segment, in seconds (between 0.2 and 1.5)

Read my thread about “I tried all the camera platforms” if you want more detailed info. I haven’t played with the HLS parameters yet; I’d say just leave them at their defaults.