Local realtime person detection for RTSP cameras

What next?

  • Performance enhancements for running on Raspberry Pi to lower CPU usage and support more cameras per device
  • Official ARM docker builds to support Raspberry Pi
  • Dynamic regions that resize and follow detected objects (at the moment, people are often missed when they stand between regions; this would also enable counting and tracking speeds)
  • Face detection (recognition to be added after)
  • Save detections for training custom models or transfer learning



Dynamic regions would also be a first step towards recording video of the person to send in notifications, I assume.

You can already create video clips for alerts with the record service in homeassistant if you want.
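
For reference, that service is camera.record from the stream integration. A minimal call looks something like this (the entity and path are placeholders, and the target directory has to be whitelisted in homeassistant):

service: camera.record
data:
  entity_id: camera.front
  filename: /config/www/front_clip.mp4
  duration: 20   # seconds to record after the call
  lookback: 5    # seconds of already-buffered video from before the call

The lookback is what gives you pre-roll without recording constantly.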

Tracking an object across many frames would help prevent blind spots and give me more information to filter out false positives. If the person is completely stationary or moving too fast, I could suppress alerts. I could also compute whether the person is approaching or walking away and select the best face the camera saw for that object. It is a much clearer representation of an "event" with attributes such as: object type, time in the frame, min speed, max speed, best face, recognized face, current position, direction of movement, etc.
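
Purely to illustrate the shape of that, such an event could hypothetically serialize to something like this (every field name below is made up):

object_type: person
time_in_frame: 8.2
min_speed: 0.1
max_speed: 1.6
best_face: front-best-face.jpg
recognized_face: null
current_position: [412, 260]
direction: approaching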

Once I finish a few more of these issues with Frigate, I want to build a "homeassistant native" NVR that lets you view historical footage, event history intermixed with any other homeassistant events, and realtime low-latency video streams.


I'm going to start working on dynamic regions and object tracking. This is going to be a bigger architecture change than was required to track all object types. I'm hoping I can finish this over the holidays.


Great ideas! One use case I wanted to share is how I use frigate with home assistant and motioneye.

Both motioneye and frigate pull an rtsp stream from my camera. Frigate publishes the person scores to ha via mqtt. When ha sees the mqtt person score exceed a threshold, it calls the motioneye api to set motion 'on'. This causes motioneye to start recording. When the person value drops below the threshold, ha calls the motioneye api to stop motion. Motioneye is configured to save 10s before the motion event and 20s after.
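
To make the handoff concrete, the glue can be as small as a rest_command pair plus two automations. The sensor name, threshold, and URLs below are placeholders; the exact endpoint depends on how motioneye is set up (these use motion's webcontrol port as an example):

rest_command:
  motioneye_motion_on:
    url: "http://motioneye.local:7999/1/detection/start"
  motioneye_motion_off:
    url: "http://motioneye.local:7999/1/detection/pause"

automation:
  - alias: Start motioneye recording on person
    trigger:
      platform: numeric_state
      entity_id: sensor.frigate_front_person
      above: 80
    action:
      service: rest_command.motioneye_motion_on
  - alias: Stop motioneye recording when person gone
    trigger:
      platform: numeric_state
      entity_id: sensor.frigate_front_person
      below: 80
    action:
      service: rest_command.motioneye_motion_off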

This gives me a very low false positive rate and footage before and after a person was detected; I like to see context. It's a bit cumbersome and I don't love duplicate feeds, but it works rather well. Even though frigate benefits from low res streams, this side-band capture approach has me using a higher res to preserve more detail. I'm sure a nicer camera could have two feeds with different resolutions or key frame settings.

I'd like better nvr abilities in ha, but I'm concerned it's not the best place for it. Browsing previously captured videos and the most recent event video is about all I'd ever want. The frigate image with the bounding box is usually enough. My leading/trailing frames are probably more of a me thing, so I'm not sure they'll ever be mainline features. Although ha does have lookback, it's hls and not rtsp, so delay could be an issue. I haven't tested, though.

I'm excited for what you can do; the recent stream, camera, and tensorflow integrations have been great for ha.


Interesting, I didn't even know there was a record service. So you can record and then have it send that file in the notification instead of the camera stream? Has anyone done this and have an example of how it looks?

Anyhow, I still think object tracking could improve it, as you could send just the detected person rather than the whole frame (which is usually too big to view effectively in the notification).

I do the same with Synology surveillance station. I have the object detection trigger a custom event in synology, which starts recording and can also be used as a playback filter; very useful. Only problem is it doesn't support pre-recording, so it often starts recording late into the event. And I can't use those recordings in notifications, unfortunately.

It is always helpful to know how others are using it. Thanks for sharing. I think about frigate as a component of my self monitored alarm system. Every other component of that system lives in homeassistant, and I want to be able to have a unified UI for all security events. It will likely be implemented as an addon using ingress.

I am also already using ffmpeg to save historical higher resolution footage alongside frigate. I want as few services as possible using CPU to decode my video stream.

I'm not doing it, but if you have the stream integration set up, you should be able to trigger recording to save a video for notifications. You will just have to play with the timing of pre and post recording.
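
Roughly, I'd expect the action sequence to look like this (the lookback and duration values are the pre and post recording knobs, and the video attachment key is an assumption that depends on your notify platform):

action:
  - service: camera.record
    data:
      entity_id: camera.front
      filename: /config/www/last_person.mp4
      lookback: 10   # seconds of buffered video before the trigger
      duration: 20   # seconds recorded after the trigger
  - delay: "00:00:25"   # give the file time to finish writing
  - service: notify.mobile_app_phone
    data:
      message: Person detected
      data:
        video: /config/www/last_person.mp4   # hypothetical key; check your notify platform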

This is exactly how I'm using frigate as well, except I'm using Zoneminder and triggering the recording via a telnet switch in HA.
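
In case it helps anyone, that telnet switch can be sketched like this (the port and command strings follow zoneminder's zmtrigger protocol as I understand it, so treat them as placeholders for your monitor id and cause text):

switch:
  - platform: telnet
    switches:
      zoneminder_trigger:
        resource: 192.168.1.50   # zoneminder host
        port: 6802               # zmtrigger listens here by default
        command_on: "1|on|255|frigate|person detected|"
        command_off: "1|cancel|||"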

Looking forward to the upgrades, @blakeblackshear! Thanks for the great work.

Happy to share!

Does your use of ffmpeg for the higher res historical footage use another rtsp stream from the camera, or is it more serial/inline? I'm interested in knowing more.

This may be the add-on that finally makes me switch to hassio.

My cameras support multiple streams, so it is a second, higher resolution one. I avoid decoding and resizing wherever possible. FFmpeg does support multiple outputs from the same input, so if it was the same stream, I could read once and send to multiple locations. I am already doing this in my setup: I write to one location for long term storage and another for HLS streaming from the same video feed.
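
That pattern is a single ffmpeg invocation with one input and two output sections, roughly like this (paths and segment settings are illustrative; with -c copy nothing is ever decoded):

ffmpeg -i rtsp://camera/stream \
  -c copy -f segment -segment_time 60 -strftime 1 /storage/%Y%m%d-%H%M%S.mp4 \
  -c copy -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /hls/stream.m3u8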

It looks like HassOS does not support hardware acceleration, so I am not sure if that will ever be a preferred setup unless I can find a way to add it. I have been running my system on CoreOS aka Container Linux, but that doesn't support hardware acceleration either. Still searching for the ideal OS for running my home. I am looking at k3os too.

If anyone could please share the hardware acceleration portion of their config file, I just want to see what's worked for you.
I have an i7-3770, and Frigate is burning up 30% CPU while checking 2 substreams at 640x480 (5 frames a second).

I just want to try and get this more efficient, but I think my syntax is incorrect.

Received my Coral edge processor last night and spent some time getting things set up and partially running. Wanted to publicly thank @blakeblackshear for his project and continuing efforts to make it better. Ko-fi donation happily submitted. :smiley:


I plan to build a nuc this week running some OS for docker to run frigate and related tools. My plan was Ubuntu server, debian, or maybe rancheros. I would want hw accel for ffmpeg too, though, and other than some of your posts about it here I have not researched much.

Ok, I found the issue. Somewhere in one of the updates frigate started getting the high res stream. Setting it back to the substream should calm it down.
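
For anyone else chasing this, it comes down to which rtsp url the camera's input in the frigate config points at. A sketch, where the url is a typical vendor-specific substream path rather than anything universal:

cameras:
  back:
    ffmpeg:
      input: rtsp://user:password@192.168.1.10:554/Streaming/Channels/102   # second channel = substream on many cameras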

Since moving to the v0.3 beta, I've managed to break the HomeAssistant part of Frigate.
To the best of my knowledge, I have made the changes in the config file to accommodate the breaking changes.

If possible, could someone please snip that patch of YAML and paste it in here? I just want to compare.

Well, I just finished setting up a new frigate install with the v3 beta, and last night I swear it was all working. I must have messed something up, because I'm not getting objects that exceed the threshold within the defined min/max. In HA I'm using the <topic_prefix>/<camera>/<object_type> topic, e.g. frigate/front/person. The values published are ON or OFF; you no longer need the json value template.

  - name: Person Front Yard
    platform: mqtt
    state_topic: "frigate/front/person"        # frigate publishes ON/OFF here, which matches the binary_sensor defaults
    device_class: motion
    availability_topic: "frigate/available"    # frigate publishes online/offline, also the HA defaults

This should work, but I need to figure out why frigate isn't correctly flagging objects. The frigate test feed correctly shows bounding boxes and the best.jpg image is available, but the mqtt message is never published. I see availability messages, so I know the underlying mqtt stuff is good.

I used mqtt explorer to see all topics and I have 6 or so objects under my frigate/front topic but they only contain snapshots, no state messages.
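
I may also leave a command line subscriber running while I walk in front of the camera, something like this (broker host is a placeholder):

mosquitto_sub -h broker.local -t 'frigate/#' -v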

How do we enable debugging in the v3 beta?

There isn't a debug mode to enable. After a fresh restart of frigate, do you see any error messages in the logs when you go in front of the camera? What is your frame rate on the camera? In order for an 'ON' message to be sent over mqtt, the total score of all detections in the past 2 seconds must be more than 100. If you only have 1 frame in the past 2 seconds where a person was detected at 90% confidence, it will be discarded as most likely a false positive. If you have 2 frames (or 2 persons in the same frame) with scores of 55%, that will total more than 100, and you should see an mqtt message.

Also, the 'ON' and 'OFF' messages for objects are not retained. If you didn't have mqtt explorer open when the message was sent, you wouldn't see it there. The snapshots are sent with retain.