Hass.io has this preinstalled already, so you don’t have to do steps 1 and 2. Check step 3 in your config and just take a look at your logs. You can also use htop to see running processes; ffmpeg will show up if you open the page with the camera element on it.
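For reference, the camera part of that config (step 3) usually looks something like this for the ffmpeg platform; the name and RTSP URL below are just placeholders for your own values:

camera:
  - platform: ffmpeg
    name: pir_camera_01                            # any friendly name
    input: rtsp://pir_camera_01:5544/live0.264     # your camera's RTSP URL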
Second, you can test whether ffmpeg works with something like: ffmpeg -rtsp_transport tcp -i "rtsp://pir_camera_01:5544/live0.264?streamprofile=Recording" -vcodec copy -t 20 test.mp4
That records a 20-second mp4 from your stream (copying the video stream rather than transcoding it) and saves it as test.mp4.
core-ssh:~# ffmpeg
bash: ffmpeg: command not found
Not sure if Hass.io needs an additional component installed for this to work.
The log file shows this:
Missing status information from XML of Main for: InputFuncSelect
10:27 PM components/media_player/denonavr.py (ERROR)
FFmpeg isn't running!
10:27 PM components/camera/ffmpeg.py (WARNING)
IP cameras are becoming, and will increasingly be, the heart of a home monitoring/automation system. I also believe people will want to move away from cloud-based offerings and toward local RTSP as they realize the bandwidth costs add up. HA MUST support this elegantly going forward to maintain its edge. Here’s what I’d propose for this feature:
1. Support RTSP natively (or via a tool, but in a way that makes sense). More importantly, use websockets (a /video endpoint, or similar) to stream it through the UI at a normal video framerate, not the current 1-2 frames per second.
2. Maintain a “device map” defaults/config file that the community can add to, mapping camera model numbers to their stream URLs. It could then be referenced in the HA config by camera type instead of an obscure URL. There’s no authority on the internet for figuring out the URL for different cameras, so this is a huge opportunity for HA to solve.
3. Create a chainable “motion detection” component that can watch the stream and create events for automations and (video-clip) notifications, or alternatively use the camera’s built-in motion detection features if available.
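For point 3, if I remember right Home Assistant already has an ffmpeg-based motion binary sensor that could be a starting point; a minimal sketch, with the stream URL as a placeholder:

binary_sensor:
  - platform: ffmpeg_motion
    name: camera_motion
    input: rtsp://pir_camera_01:5544/live0.264     # same RTSP stream as the camera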
Maintaining a “device map” will be a challenge … but the standard ONVIF protocol already supports identification and connecting to RTSP; even capturing snapshots or video sequences is supported. I’m not sure to what extent motion detection or camera-based triggers are supported.
This would be a good starting point, and RTSP cameras supporting the ONVIF protocol start at around $25.
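Home Assistant also has an ONVIF camera platform that uses the protocol to find the stream for you; a rough sketch, with the host and credentials as placeholders:

camera:
  - platform: onvif
    host: 192.168.1.25        # the camera's IP on your LAN
    username: admin
    password: your_password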
The Home Assistant configuration for the hack project uses ffmpeg to convert the video, but the result is very bad. I use an RPi 2 and get high CPU usage.
If I connect directly to the camera, I get a better video image without transcoding. The camera has an RTSP server. Could I use Home Assistant as a proxy without transcoding the video?
Hi,
I have the same camera as you and faced the same issue with the transcoding. I installed the motionEye add-on and have the camera as an iframe card in Lovelace. It is not perfect, but it is a little better. I run Hass.io on an RPi 3.
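In case it helps, the card config is roughly this; the URL is just an example and should point at wherever your motionEye stream is reachable on your network:

- type: iframe
  url: http://192.168.1.10:8765   # motionEye UI or stream address
  aspect_ratio: 56%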
I ended up ditching this and going the motionEye route too. I found, however, that every camera increased the CPU load of my RPi by about 25%, and eventually I moved all my cameras off the RPi.
I don’t recommend a Pi for motionEyeOS or any other CCTV projects… the Pi is just not the right device for that. I guess we really should think about native RTSP support, as there are many cameras with that stream format. I also use the Dafang hack, but I run motionEyeOS on an Intel NUC with no problems.
I second that; any type of motion detection or continuous recording will likely max out the RPi processor with a single camera (not to mention the microSD card, although recording can be set to use an external drive). motionEye in Docker on a NUC, with multiple cameras recording to a 2.5" HDD, could be a nice rival for off-the-shelf NVR systems.
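For anyone wanting to try that, a minimal docker-compose sketch of what I mean; the image tag and host paths are assumptions to adjust for your own machine:

version: "2"
services:
  motioneye:
    image: ccrisan/motioneye:master-amd64        # motionEye image for x86_64 (check the current tag)
    ports:
      - "8765:8765"                              # web UI
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./motioneye:/etc/motioneye               # motionEye config
      - /mnt/hdd/recordings:/var/lib/motioneye   # recordings on the 2.5" HDD
    restart: unless-stopped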
However, an RPi (even the Zero flavor) with motionEye is still a very good solution if you’re willing to forgo those advanced options, or if you want to set up the RPi as a “dumb” network camera (with either a CSI or USB camera attached to it; you can even use one from an old laptop).