I see no trouble on my test system. Is it possible that the camera only supports one stream? It needs to support multiple connections to work with hass.
You can send me your debug log on gitter and I can try to help.
@DavidLP we have updated aiohttp and all new exceptions are handled in 0.36. I have a lot of cameras and they work without any issue. But hass copies every stream through the event loop, which can cause small lags when processing other stuff in the loop. If we change to multiprocessing with async this will be fixed, since every component would get its own process and loop.
This is the config I am using. Not sure about multiple streams, but the HA frontend updates the snapshot while I view the stream through an Android app. When I click on the camera image in the frontend, it brings up the window for the camera but does not produce the feed. Sometimes it brings up a static image of what I think is the first snapshot it takes upon restart, and it just keeps that one in memory even if the frontend snapshot has updated.
To shine some light on the live video feature: I guess right now it is bug free, but it will not work on a small form factor computer, because for the live stream ffmpeg encodes the rtsp stream on the fly to mjpeg. Look here:
That uses 20 % CPU on a high-end machine with 8 cores @ 4.4 GHz ;-). There is no way that this works on an RPi, unless the original video stream is already mjpeg and no transcoding is needed…
I mentioned in a previous post how one could get a real live video feature working in hass without transcoding.
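For anyone curious what that on-the-fly transcode actually looks like: it is roughly equivalent to a command like the one below (a sketch; the RTSP URL is a placeholder and the exact arguments hass passes may differ):

```bash
# Pull the camera's RTSP stream and re-encode it on the fly to a
# multipart MJPEG stream on stdout; the re-encode is what eats the CPU.
ffmpeg -i rtsp://192.168.1.10/unicast -an -c:v mjpeg -q:v 5 -f mpjpeg -
```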
Now I’m experiencing the same bug as you described before. It takes time to get a picture in the UI and there is no live video now, just 1 frame every 30-40 seconds.
I’ve been using ffmpeg for several months on a Win device and a few weeks on Raspberry Pi 3 and didn’t have this problem.
The only thing that I have done is install the Bluetooth LE tracker today, which seems to have significantly affected the performance of the RPi3.
Managed to get it to work. You have to add `extra_arguments: -pred 1` for the live feed to work.
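For reference, this is roughly how the camera entry looks with that option added (a sketch; the input URL is a placeholder for your camera's RTSP address):

```yaml
camera:
  - platform: ffmpeg
    name: Xiaofang
    # Replace with your camera's actual RTSP address
    input: rtsp://192.168.1.10/unicast
    extra_arguments: -pred 1
```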
Two questions, though:
Live feed is very choppy and slow (it takes about 10 sec to update the image, so it can miss a person passing by entirely). Is this normal or do I have something badly configured?
While this script is configured, I can't access the camera via the Mihome app (my cam shows as offline in the app). Trying to check if that's normal or if something else is happening.
In my case, I never used Bluetooth on my RPi, so that's probably not a solution I can use. The creator of the hack said he will release a new version in the next couple of days, so I'll wait and see if he releases some new stuff. Otherwise, as much as I'd like to stop having my camera upload to the cloud and have it in the Hass Dashboard, I think I'll stick with the Mihome app, as you lose a lot of cool little functions otherwise and end up purely with a (somewhat choppy) live feed.
(I thought it was a generic camera thing, but never mind)
entity_picture is a high-res still image, and I want to attach it to notifications triggered by binary_sensor.ffmpeg. I can download it manually with wget, but the access token changes every few minutes, which makes it unusable.
Is there a way to change this behavior, or get those images elsewhere? Thanks in advance.
This link has a script that uses ffmpeg to create a playable file from live video.
You can use this script to create the file and then use the file in notifications.
I would expect that your camera has a static video stream URL that could be used.
Someone else may know how to place the dynamic "access token" in a notification automation.
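A minimal sketch of that idea, assuming the camera exposes a static RTSP URL (the address and output path are placeholders; /config/www is served by hass at /local/):

```bash
# Grab 10 seconds from the camera without re-encoding and save it as an
# mp4 that a notification can reference via /local/clip.mp4.
ffmpeg -y -i rtsp://192.168.1.10/unicast -t 10 -c copy /config/www/clip.mp4
```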
As the addition of ffmpeg support to HA a while back has actually solved this as a “Feature Request”, I’d like to mark this thread as solved and lock it so that questions about configuration go into the “Configuration” category which has more visibility to the general forum population.
Does anyone have a compelling reason why I shouldn’t do this though? I’d like to get some input.
This feature will never come, since python doesn't support any video transcoding natively.
If python natively supports rtsp someday and rtsp becomes a web standard, we can add rtsp as a native feature to Home-Assistant. Until that happens (I don't think it will in the next 10 years), we need a tool like ffmpeg to transcode it.
Ok, reading it I can see it requires a node.js backend, so it may work, but not with just HA.
Falls into the category of “we need a tool like ffmpeg to transcode it”
I guess it's a performance problem. I use a home server and I need about 40 seconds before getting an image… the preview also works fine, but the live feed in full screen only gets refreshed a few times per minute.
Maybe there's a way to modify the xiaofang hack to stream at a lower quality?
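Short of modifying the hack itself, you could try asking ffmpeg to downscale and throttle the transcoded output on the hass side (untested on the xiaofang; the URL and values are just a starting point, not a known-good config):

```yaml
camera:
  - platform: ffmpeg
    name: Xiaofang low-res
    input: rtsp://192.168.1.10/unicast
    # Scale down to 640x360 and cap the output at 5 fps to cut CPU load
    extra_arguments: -vf scale=640:360 -r 5
```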