Timelapse Video with Computer Vision | Make timelapse with bounding boxes of detected objects using your own trained OpenCV model!

TL;DR

  • COVID lockdown life… I love my view of the port, but operations are in slow-mo, and the operations analyst in me wants to understand what is going on!
  • Display looped timelapse videos with object detection including bounding boxes in Home Assistant Lovelace frontend.
  • Mash a bunch of videos and some summary stats onto a dashboard. Add some automations so that when you sit on your couch it casts to a screen.

The video is a bit fluffy as I mainly did it for a general audience.

My final pipeline (working reliably) is:

  • Capture snapshots: Capture time-interval snapshots from a Home Assistant camera and write them to a local directory, with a timestamp in the filename, using the camera.snapshot service.
  • Train your own OpenCV object detection model (optional): Train and test your own OpenCV cascade model using Cascade Trainer GUI. You could use a pre-trained one for faces/people etc., or train via other approaches.
  • Detect objects: Use the trained model in Home Assistant via the OpenCV integration. Locally process the snapshot and get a sensor containing the count and bounding boxes of detected objects.
  • Draw bounding boxes: Use shell commands to call ffmpeg and draw the boxes on the image using the drawbox filter.
  • Create short/long-term image archives: Use shell commands to maintain rolling time-period archives of the images, for higher-framerate short-term videos and lower-framerate long-term videos. Cropping and the like happens here too.
  • Convert short/long-term images to timelapse videos: Use shell commands to have ffmpeg turn the image archives into timelapse videos (the next step requires .mp4).
  • Display and loop videos in Lovelace via webpage cards.
  • Add a few sensors/charts/other things.
  • Add some automations so that when you sit on your couch it casts to a screen (motion sensors and Browser Mod)
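As a taste of the drawbox step above, here is a minimal sketch of turning one detected region into an ffmpeg drawbox filter. The coordinates and file names are illustrative only; in my setup the (x, y, w, h) values come from the OpenCV sensor's attributes.

```shell
#!/bin/sh
# Build an ffmpeg drawbox filter string from one detected region.
# Arguments: x y width height (values here are made up for illustration).
box_filter() {
  echo "drawbox=x=$1:y=$2:w=$3:h=$4:color=red@0.8:t=3"
}

FILTER=$(box_filter 120 80 64 48)
# The actual draw step would look like this (commented out so the
# sketch runs without ffmpeg or a snapshot on disk):
# ffmpeg -y -i snapshot.jpg -vf "$FILTER" snapshot_boxed.jpg
echo "$FILTER"
```

For multiple detections you can chain filters with commas (e.g. `drawbox=...,drawbox=...`) and pass the whole string to a single `-vf`.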

Example hardware list:

  • EKEN h9 or DIGOO DG-MYQ camera
  • Add a zoom lens
  • Raspberry Pi 4 (works fine for local OpenCV image processing, at least for my framerate/latency requirements)

TODO:

  • I’ll add detail to this first post as requested and probably polish it a bit
  • Add some code to this post, probably prioritised/detailed based on demand.
  • Figure out how to drawtext on an image (timestamp). Can’t get it working!
  • Better camera mount to reduce wind shake.
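On the drawtext TODO: the quoting is usually what breaks. Here's a hedged sketch of the filter string I'd try; the font path is an assumption (point it at any .ttf on your system), and the timestamp is hard-coded for illustration where a real setup would template it in.

```shell
#!/bin/sh
# Sketch: stamp a timestamp onto an image with ffmpeg's drawtext filter.
# FONT path is an assumption -- substitute a .ttf that exists on your box.
FONT=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf
TS="2020-05-01 12:00"
# Single quotes around the text protect the colon from the filter parser.
DRAWTEXT="drawtext=fontfile=${FONT}:text='${TS}':x=10:y=10:fontsize=24:fontcolor=white:box=1:boxcolor=black@0.5"
# The actual stamping step (commented out so the sketch runs without ffmpeg):
# ffmpeg -y -i snapshot.jpg -vf "$DRAWTEXT" snapshot_stamped.jpg
echo "$DRAWTEXT"
```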

Some of my key learnings:

  • Make timelapse: You can make timelapses via other approaches, like straight out of the Motioneye add-on, but piping images together with ffmpeg gives you a lot of control over size, quality, cropping, etc. It took me a while to learn the shell and ffmpeg commands, though. I had to reduce framerate and resolution a fair bit to get final playback working smoothly on the crappy Android TV box that is connected to my display.
  • DIY OpenCV model: Whilst you could do it all in Python etc., the Cascade Trainer GUI made training/testing really easy (once you knew its quirks).
  • Looping videos: The only way I could find was to embed the video in a webpage, which requires .mp4. I explored a local file camera but couldn’t get it to loop.
  • Drawing boxes: The way I fed the ffmpeg drawbox filter was a bit hacky. I’m surprised it’s so hard to get an image with the bounding boxes out of many of the object detection integrations.
  • I had limited experience with any of this before I started. It took me ages to learn and get it actually working. No doubt there are better ways to do some of it.
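For the images-to-video step in the learnings above, this is roughly the shape of the ffmpeg invocation. All paths, the framerate, and the output width are illustrative values from my own tinkering; tune them for your playback device.

```shell
#!/bin/sh
# Sketch of the image -> mp4 timelapse step. Paths and numbers are
# illustrative, not a prescription.
SRC_DIR=/config/timelapse/short_term
OUT=/config/www/short_term.mp4
FPS=12
# -pattern_type glob picks up timestamped filenames in sort order;
# scale keeps the output small, and yuv420p keeps it playable in a
# browser <video> tag (which the Lovelace webpage card relies on).
CMD="ffmpeg -y -framerate ${FPS} -pattern_type glob -i '${SRC_DIR}/*.jpg' -vf scale=1280:-2 -c:v libx264 -pix_fmt yuv420p ${OUT}"
# eval "$CMD"   # commented out so the sketch runs without ffmpeg/images
echo "$CMD"
```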

Some useful resources:

Code placeholders

  • Capture snapshots:
  • Train your own OpenCV object detection model
  • Detect objects
  • Draw bounding boxes
  • Create short/long term image archives
  • Convert short/long term images to timelapse videos
  • Display and loop videos in Lovelace
  • Add a few sensors/charts/other things
  • Cast to screen when sit on couch
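Until I flesh out the placeholders above, here is a sketch of the rolling-archive part: keeping only the newest N snapshots in a directory so the short-term video stays short. The directory name and keep-count are hypothetical; mine runs from a Home Assistant shell_command.

```shell
#!/bin/sh
# Sketch: keep only the newest $2 snapshots in archive directory $1.
# Assumes filenames without spaces (my timestamped names are safe).
prune_archive() {
  dir=$1
  keep=$2
  # List newest-first by mtime, skip the first $keep, delete the rest.
  ls -1t "$dir" | tail -n +"$((keep + 1))" | while read -r f; do
    rm -f "$dir/$f"
  done
}
```

Called as e.g. `prune_archive /config/timelapse/short_term 720`, which at one snapshot per minute keeps about 12 hours of images.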

Hi, do you have an idea how I can overlay sensor values on top of the timelapse videos and record them into the archive?

Not really I’m afraid.

I recall trying to overlay text on the camera images using a feature of the ffmpeg library at some point. So maybe there is a way there.

I think Motioneye has some features for that too, but I’m not sure if you can access Home Assistant variables. Motioneye also has decent timelapse features if you don’t need something too customised.