I tried all the camera platforms so you don't have to

Hey, yeah it’s a pretty popular piece of software. Although I’m not sure it’ll run on the Pi architecture, so perhaps not!

Took a peek at it… My feeling is that a lot of that functionality is already in HA with its built-in recording and media access. I will probably capture videos from Frigate and use the built-in media browser to view them at some point soon, once I get around to setting it up.


Motion works on the Pi. See motionEyeOS.

There is also a motionEye add-on.

I hadn’t even thought to check HA for the recording side of things, but I’m guessing the problem then is that I’d be getting that 40% CPU usage all the time with 24/7 recording, not just when I view Lovelace.

I just had a go with Frigate this morning: pretty easy setup, and the object detection is pretty cool! Sadly it’s back to CPU issues there, though. I’m only running on a Pi 4, and without a Coral it was using about 80% CPU with one camera on basic recording. Maybe I’ll come back to it when I get a NUC down the line.

motionEyeOS is no good for me as it takes over the whole Pi rather than just running via Docker, but I’ll take a look at the motionEye add-on! It looks like it even supports Google Drive uploads, which was something I was going to do with manual scripts (I’ve only got an SD-card camera)!

It also runs in Docker - check out https://github.com/ccrisan/motioneye/wiki
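For reference, a docker-compose file along the lines of what that wiki describes would look roughly like this; the image tag (master-armhf for a Pi) and the host paths are just examples, so check the wiki for the variant that matches your board:

# Sketch of a docker-compose.yml for motionEye on a Pi, based on the ccrisan wiki.
# The image tag and host paths are examples - adjust them per the wiki.
version: "3"
services:
  motioneye:
    image: ccrisan/motioneye:master-armhf
    container_name: motioneye
    ports:
      - "8765:8765"                          # motionEye web UI
    volumes:
      - /etc/localtime:/etc/localtime:ro     # keep the container clock in sync
      - ./motioneye/config:/etc/motioneye    # motionEye settings
      - ./motioneye/media:/var/lib/motioneye # recordings and snapshots
    restart: unless-stopped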

Hi - this is a great thread, thanks a lot for pulling it together. I’m using the config below; camera previews appear in Lovelace and refresh every 10 seconds. When I click on a camera it opens the feed, which streams perfectly.

Is there any way to get them to stream on the dashboard by default?

camera:
  - platform: ffmpeg
    input: rtsp://admin:****@192.168.100.20:554/Streaming/Channels/602
    name: Front Garden

Do you have stream: turned on? If not, try it. If you do, try turning it off. Also try changing the I-frame interval setting on your camera so one is generated more often, like once a second.
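In case it helps, this is roughly what that looks like on top of your existing ffmpeg camera config. camera_view: live on a picture-entity card is what makes the stream play inline on the dashboard rather than showing the refreshed still preview (the entity name below assumes HA’s default slug for “Front Garden”):

# configuration.yaml - enable the stream integration (HLS streaming)
stream:

# Lovelace picture-entity card - camera_view: live plays the stream on the dashboard
type: picture-entity
entity: camera.front_garden
camera_view: live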

Yes - I’ve tried with and without the stream checkbox and there’s no difference. I’m picking up the streams from an NVR; I’ll try pointing them directly at the cameras when I’m there next. Thanks!

Hey Adam,
do you have any progress or a final solution for your IP camera project?
I also want an IP camera for home security with motion detection and recording to Google Drive. I haven’t chosen a camera yet.
Do you have a suggestion for which camera would fit and how to do the integration?
Thanks!

Yes and no! I ended up going with a Tapo C200 camera as it is ONVIF + RTSP compatible and supports pan/tilt. The quality is alright but nothing special, and the protocols aren’t 100% reliable as they drop out now and again. The built-in motion detection is absolutely awful, so it isn’t really usable at all.

It’ll do for now. I use the Tapo integration from HACS to view it within HA, which also gives me pan/tilt control from within HA; that part is nice. I also use Shinobi as a lightweight NVR on the same Pi to do 24/7 recording.
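For reference, the feed Shinobi records is just a plain RTSP stream, so it could also be added to HA directly with the ffmpeg platform if you don’t want the HACS integration. As far as I can tell /stream1 is the HD stream and /stream2 the SD one, and the credentials are the separate “camera account” set in the Tapo app, so treat the details below as an example to verify:

# Tapo C200 over plain RTSP - stream path and credentials as noted above
camera:
  - platform: ffmpeg
    input: rtsp://cameraaccount:****@192.168.1.60:554/stream1
    name: Tapo C200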

But… would I recommend the camera… probably not.

Thank you for your answer; based on your feedback I will be looking for another solution.
Can anyone suggest a future-proof, easy-to-install setup including an IP camera, the integration, and a cloud solution?

Out of interest, could you share a bit more about how you did this? I also have a Hik NVR, but I’m relying on an automation to capture a still from the camera with camera.snapshot when the event comes in from the stock Hikvision integration. The problem is that, with the stock ONVIF integration, there’s so much latency between the event trigger and the video stream that, more often than not, I miss the cause of the trigger in my captured image.
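For context, the automation is roughly this shape (the entity IDs and file path are placeholders, not my actual config); the snapshot only fires once HA has received the event and pulled a frame from the stream, which is where the latency creeps in:

# Rough shape of the snapshot automation - entity IDs and path are placeholders
automation:
  - alias: "Snapshot on camera event"
    trigger:
      - platform: state
        entity_id: binary_sensor.driveway_line_crossing   # event from the Hikvision integration
        to: "on"
    action:
      - service: camera.snapshot
        target:
          entity_id: camera.driveway                       # stock ONVIF camera entity
        data:
          # the target folder needs to be in allowlist_external_dirs
          filename: "/config/www/snapshots/{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"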

Why cloud? If you lose internet your recording ability is gone… a thief can take that down very easily before entering your house.

For cameras I use Hikvision turrets (there are plenty of models to choose from based on your budget). They have excellent picture quality and work great with HA.

Cloud is preferred because the devices (camera, SD card, NAS) can be stolen easily. My internet is SIM-card-based mobile data with a proper router combined with a UPS, so it’s not so easy to take down…
The best would be a parallel solution with both local and cloud recording, but that might be overkill.

Sure, let me just give you a high-level overview of how that works. The details are a bit complicated, but I can cover those too if needed.

I don’t use any camera recording or snapshot function through HA; it’s just too slow. I only use the HA cameras for viewing in the UI. Everything else is handled by the NVR and external scripts called by HA.

Basically the 7608NI-I2 has a circular RAM buffer for external camera streams, which continuously captures roughly 10 seconds of video. The amount varies depending on bitrate. As soon as an event occurs, this buffer is written to the internal HDD and live recording is started. So you get the footage of what occurred before the event. The NVR handles all of that internally.

Now there are several event types that can trigger recording in these NVRs. The four most common ones are continuous recording, smart-event recording, manual REST-triggered recording and recording on an external alarm signal. The easiest one, continuous recording, was not an option for me as it would require too much storage space. So I went with a combination of event-triggered and REST-triggered recording.

The basic logic works as follows:

  • A camera detects an event, like a line crossing. The NVR will automatically dump the RAM buffer to the disk and start recording for around 30 seconds.
  • At the same time this event is sent to HA over the Hikvision integration. A HA automation will determine if the event is considered important, depending on things like presence, alarm state, etc.
  • If not deemed important, things stop here. You will still have about 40 seconds’ worth of video on the NVR to review if necessary (10 seconds pre-event and 30 seconds post-event).

If the event was deemed important by HA, an external script (or executable in my case, I’m a C++ dev :slight_smile:) is called from HA, which does the following:

  • Three time stamps for the event are calculated using the current time as base: 1 second before the event, at the event, and 2 seconds after the event. The time stamps are snapped to an I-frame to avoid image corruption later when doing the frame extraction. My cameras generate one I-frame every second at 10 fps, which seems to be quite standard in the surveillance industry.
  • An rtsp URL is constructed to access the recorded footage on the NVR starting with the first time stamp (one second before the event).
  • ffmpeg is called (I use the direct C libraries for this to get the absolute minimum latency, but you could also call ffmpeg from a Python script or similar) to open the recorded RTSP stream and extract three still shots at the frame times given by the time stamps (a rough HA-side sketch follows this list).
  • The shots are saved using a timestamp based naming convention and the HTML for the timeline is updated (I don’t use Lovelace, but it should probably work there over some custom card / iframe).
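If you don’t want to go the C++ route, a rough HA-side equivalent of that extraction step is a shell_command that runs ffmpeg against the NVR’s playback stream. The playback URL format (Streaming/tracks/<id>?starttime=<time>), the track number and the frame offsets below are only indicative, so check the RTSP playback documentation for your NVR firmware:

# Sketch of a shell_command that grabs three stills from recorded NVR footage.
# Playback URL format and track number are examples - verify them for your NVR.
shell_command:
  extract_event_frames: >-
    ffmpeg -rtsp_transport tcp
    -i "rtsp://admin:****@192.168.100.10:554/Streaming/tracks/101?starttime={{ start }}"
    -vf fps=1 -frames:v 3
    /config/www/events/{{ start }}_%03d.jpg

An automation would call shell_command.extract_event_frames with the pre-event time stamp passed as start in the service data.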

At the same time, in the HA automation that called the process above, a few additional things are done:

  • Recording is retriggered by a REST call into the NVR’s ISAPI, overriding the default 30 second record time (a sketch of such a rest_command follows this list). HA now has manual control over the recording process.
  • A notification is sent to my phone.
  • If the system deems the event serious enough to be considered an intrusion, additional logic flows are started.
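In HA terms that REST call boils down to a rest_command along these lines. The ISAPI path and track number below should be double-checked against the ISAPI documentation for your NVR firmware, and note that some firmwares only accept digest authentication, in which case a small script may be needed instead:

# Sketch of rest_commands for manual record control over ISAPI.
# Endpoint path and track number (101 = channel 1, main stream) are examples -
# verify them against your NVR's ISAPI documentation.
rest_command:
  nvr_record_start:
    url: "http://192.168.100.10/ISAPI/ContentMgmt/record/control/manual/start/tracks/101"
    method: put
    username: admin
    password: !secret nvr_password
  nvr_record_stop:
    url: "http://192.168.100.10/ISAPI/ContentMgmt/record/control/manual/stop/tracks/101"
    method: put
    username: admin
    password: !secret nvr_password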

The results look like this (older screenshot from last year; I’ve improved on it since, but the basic principle is still the same). This is on my phone: you can scroll vertically through the events, a short tap on an event image opens it at full resolution, and a long tap opens the video of the event footage.

Edit: a more up-to-date image, showing how this is handled for a car. This is a 4K camera, so the license plate is easily readable. You can run an ML model on the generated frames to do automatic license-plate reading or face recognition.


Hi all,

2021.4 has given me all sorts of problems with my cameras on the tablet, and it’s generally been crashing Fully Kiosk, so I’ve been experimenting again. My latest tests have given worse results for the frame rate on the proxy component (averaging 1 frame every 7 seconds), so I’ve updated that.

I’ve also tried AlexIT’s WebRTC card, which promises the best of both worlds: high-frame-rate, real-time video with low lag. Mostly it delivers that, but unfortunately the initial load time was quite slow (something like 10 seconds), and on my Kindle Fires the streams would regularly drop connection and always display a “connected” banner over the image. So I haven’t found a good use for it yet: on my desktops and phones, where it works well, the FFmpeg cameras work equally well and load faster. I’ll keep checking back in with it from time to time and continue testing different settings, but I have posted my initial findings on the original post.
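For anyone wanting to try the same thing, the card config I’m testing is essentially the minimal example from the card’s readme (the RTSP URL below is a placeholder):

# Minimal WebRTC card config - the RTSP URL is a placeholder
type: custom:webrtc-camera
url: rtsp://admin:****@192.168.100.20:554/Streaming/Channels/102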

Thanks for the updates. For me, I have decided to drop the use of HA for camera streams entirely. I feel that development on that front has really stalled and even regressed; there have been no real improvements to live camera support in HA since last year, and the current support is, frankly, not good. I used to be more or less happy with my 4-5 second lag, but with every major HA update the lag seems to increase. I also had a real-life situation recently where I needed to monitor the live streams remotely, and the lag turned out to be a real problem when you actually need the damn things for once. This is a little surprising considering how popular and important cameras are these days.

AlexIT’s WebRTC thing doesn’t really work well for me, especially with multiple cameras. When it works, it’s nice, but it’s very unreliable. Streams would often just not open at all, which is arguably worse than having them open with lag. It also requires you to open a port (or to VPN into your LAN all the time); you can’t just reverse proxy the stream as you can with HA.

HA camera support is still good for generating preview shots every n seconds.

I agree. I’ve had to use my cameras’ sub-streams for display in HA, configured at a lower resolution. My NVR records the full-HD streams and HA just displays the low-res ones. Otherwise HA often wouldn’t even open them, and my HA runs on an i7 machine with 16 GB of RAM.

I’m still waiting for proper integration of audio from camera streams as my front door camera has two-way audio, but this is not at all accessible using HA. Currently HA only supports a few audio codecs and none of them are the common CCTV camera ones. Pretty poor really.

I have a few Dahua Dome Lite 4MP cameras which I use to see what is happening in my apartment and for motion detection to switch the lights. The latter mostly works, but I am having some problems with the former. The ONVIF integration sets up only sub-stream 1, which is unusable because of its low resolution. I believe it is because of the H.265 codec? Is there a way to make this work?
I am also curious whether it is possible for HA to work as an NVR for the cameras.
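One thing I might try, assuming these cameras use the usual Dahua RTSP path (subtype=0 for the main stream, subtype=1 for the sub-stream), is adding the main stream manually with the ffmpeg platform instead of relying on ONVIF. I understand the browser can’t play H.265 over HA’s stream though, so the main stream would probably also need switching to H.264 on the camera:

# Possible workaround: point an ffmpeg camera straight at the Dahua main stream.
# The URL pattern is the usual Dahua one - verify it for your model, and set the
# main stream to H.264 on the camera if the browser won't play it.
camera:
  - platform: ffmpeg
    input: rtsp://admin:****@192.168.1.50:554/cam/realmonitor?channel=1&subtype=0
    name: Living Room Main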

Thank you for your great work in documenting the various camera platforms. It really saved me tons of time testing which works best.

Just to update on my findings with the AlexIT WebRTC card: I’ve been using it for 3 weeks now on 2 sites. So far it’s been promising and loads successfully every time. The lag is less than a second, but sometimes it will skip some frames, which could be a combination of camera CPU or network issues. I’m using it to open/close my gate, so it’s a good upgrade from the 4-5 s lag.

@HeyImAlex the latest version of the AlexIT card does not need port forwarding. It works for me, and I did not open any ports on my router.
