Local realtime person detection for RTSP cameras

Did you get it working?

Nope, unfortunately not. :frowning:

Hi

Firstly, thank you! Frigate is awesome and I can’t wait to see what it becomes with further development. I’m looking at migrating away from Blue Iris because of the great Home Assistant integration.

I’m just struggling with one issue.
I want to use my sub-stream for 24/7 recording and my main stream for recording clips, which works great. However, when I click on the feed in Home Assistant I’d also like to see the main stream and not the low-res sub-stream. Is there a way to do this?

My config looks like this:

cameras:
  frontyard:
    ffmpeg:
      inputs:
        - path: rtsp://admin:@[email protected]:554//h264Preview_01_sub
          roles:
            - detect
            - record
        - path: rtsp://admin:@[email protected]:554//h265Preview_01_main
          roles:
            - clips
            - rtmp
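
For what it’s worth, the workaround I’m considering (just a sketch, assuming the RTMP restream works for this camera and the main-stream codec is compatible with it) is to add the main stream to Home Assistant as a separate generic camera and show that one in Lovelace instead:

# configuration.yaml in Home Assistant - host and names are placeholders
camera:
  - platform: generic
    name: Frontyard Main
    still_image_url: http://<frigate-host>:5000/api/frontyard/latest.jpg
    stream_source: rtmp://<frigate-host>/live/frontyard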

Thanks again!

Good day gents,
Can anyone suggest recommended ffmpeg settings to improve the latency with Home Assistant?
Basically I get a presence notification/picture about 10 seconds before I can see the object in the Home Assistant camera view.
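
For context, the input args I’ve been experimenting with look roughly like this (the low-latency flags are my own guess rather than anything recommended by the Frigate docs):

ffmpeg:
  input_args:
    - -fflags
    - nobuffer
    - -flags
    - low_delay
    - -rtsp_transport
    - tcp
    - -stimeout
    - '5000000'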

Thank you

Hi all,

I have a question about Frigate on an Odroid N2+ with a Coral USB TPU.
My CPU load is relatively high: it can even reach 90–95%.

My suspicion is that the Odroid N2+ does not do the ffmpeg decoding in hardware, which causes this. Is that true, and can I do something about it?
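
For reference, the hardware acceleration override I was planning to try looks like this; whether the N2+’s decoder is actually exposed to ffmpeg via V4L2 M2M is an assumption on my part:

ffmpeg:
  hwaccel_args:
    - -c:v
    - h264_v4l2m2m   # ARM V4L2 mem-to-mem decoder; depends on the kernel and ffmpeg build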

One other thing: object recognition is awesome! But only in daylight; when it gets dark it stops working. Two of my cameras are a Ubiquiti G3 and a G3 Flex, and I also have an EZViz C3W, but that doesn’t work either… Detection works better with UniFi Protect, but I want HA to do the detecting :slight_smile:

It’s complicated but there’s also this (still need to try the latter myself).

Hi all, fairly new Frigate user here! First of all, thanks for the awesome project! Unfortunately I have some trouble setting it up properly. The basic setup works: I have/had it running as a Hass.io add-on and in a separate Docker container (Raspberry Pi and MacBook), but I can’t seem to get a stable stream to detect objects / view my camera images. I read a lot of posts and guides but none seem to answer my specific question.

  • Camera: Reolink E1 Pro (it has an RTSP stream, but apparently no RTMP, as far as I can tell).
  • Hardware: Raspberry Pi 3B (Docker version) or MacBook Pro (late 2013). Neither is the most powerful HW.

First problem: I cannot get a stable sub-stream for object detection. It is either an all-green screen (when using some custom input arguments) or it flickers / has artefacts. Stream size is 640x360 px, 7 fps, 192 kbit/s.

Second problem: using the main stream for detection, I think neither of my hardware options is powerful enough. Frigate only runs at 0.2 to 1.3 fps (as displayed in the frontend).

What are my options here? Is this purely a hardware problem that more powerful hardware, e.g. a Raspberry Pi 4 and a Coral TPU, would solve? Or is it related to the camera hardware (especially the sub-stream problem), or do I need to tune the ffmpeg stream with custom input arguments?
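
For reference, the input args I’m currently experimenting with for the Reolink look roughly like this (adapted from the Frigate defaults; switching the transport to UDP is only a guess based on other posts):

cameras:
  e1_pro:   # hypothetical camera name
    ffmpeg:
      input_args:
        - -avoid_negative_ts
        - make_zero
        - -fflags
        - nobuffer+genpts+discardcorrupt
        - -rtsp_transport
        - udp
        - -stimeout
        - '5000000'
        - -use_wall_clock_as_timestamps
        - '1'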

Hi,

So lowering the timeout for best_image produces more images… I will try that tomorrow and report back.

Regarding your suggestion about the “clips”: I have multiple cameras, some of which show areas with heavy pedestrian traffic. If I am forced to watch all clips (I’m not really happy with the post_capture and pre_capture), the evaluation of all cameras takes up way more time! Checking all pictures is way faster. But thanks for the tip!

As I said, I’ll be back tomorrow with more about best_image_timeout…

Thank you.
Cheers
Chris

Interesting, that makes sense. I’m not 100% certain that reducing best_image_timeout will work. I just checked the documentation and it says the following:

# Optional: timeout for highest scoring image before allowing it
# to be replaced by a newer image. (default: shown below)
best_image_timeout: 60

Another thing to play with could be max_disappeared in the detector settings:

  # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 25

Lowering it drastically would perhaps create a lot of unique events and snapshots per target as Frigate bounces above and below the detection threshold - which itself could be adjusted to achieve the desired effect.
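
For reference, a sketch of where these two options might sit in a camera config (exact placement varies between Frigate versions, so treat this as an assumption rather than something to copy verbatim):

cameras:
  front:   # hypothetical camera name
    best_image_timeout: 15   # replace the best snapshot sooner than the 60s default
    detect:
      max_disappeared: 10    # consider objects gone after fewer missed frames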

I’ve just been playing with Node-RED as another possible solution, and it did indeed save several snapshots when I walked around in front of the camera. Not sure how elegant this solution is as I’ve never really used Node-RED, but I’d expect it to save loads of snapshots in a high traffic area.

Thank you very much for your response.
I’m trying the FFmpeg cameras with the Live555 RTSP proxy and testing.

For WebRTC I just can’t get the card to work; it keeps saying “Custom element doesn’t exist: webrtc-camera”.
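
From what I’ve read, that error usually means the card’s JavaScript module isn’t registered as a Lovelace resource, so I’m going to try adding a resource entry like this (the URL is assumed from the WebRTC integration’s docs, so verify it there):

resources:
  - url: /webrtc/webrtc-camera.js
    type: module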

Hi,

I’m running Frigate in a dedicated Docker container, and I’m wondering if it is possible to display the MQTT frigate/stats as an entities card.

Has anyone already explored this?

Regards
Dario

Hi, I found this:
Realtime object detection on RTSP cameras with the Google Coral (reposhub.com)

With this sensor:

sensor:
  - platform: rest
    name: Frigate Debug
    resource: http://localhost:5000/debug/stats
    scan_interval: 5

It works like a charm.
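
A minimal entities card pointing at it (assuming the sensor keeps the name above, i.e. sensor.frigate_debug) then looks like:

type: entities
title: Frigate Stats
entities:
  - sensor.frigate_debug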

Dario

Hi all, just to double-check: is Frigate using the MobileDet SSD model (not MobileNet SSD) with input size 320x320, from google-coral on GitHub?

Thanks

Can someone help me fill in the last missing part of my setup?

I have Frigate working on 5 cameras and it’s working great with the Hass.io integrations, Node-RED alerting, etc.

The only thing I’m missing is how to move the folder that Frigate uses by default. I want to start keeping 24/7 recordings, but I need Frigate to use a different drive than the default Hass.io media folder. With docker-compose it would be pretty straightforward by changing the volumes; I just don’t understand where I can do that in the HA space.
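
For comparison, the docker-compose mapping I mean would look roughly like this (the image tag and host paths are just placeholders):

services:
  frigate:
    image: blakeblackshear/frigate:stable-amd64
    volumes:
      - /mnt/recordings-drive/frigate:/media/frigate   # point recordings at a different drive
      - ./config.yml:/config/config.yml:ro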

Good day gents, after a few days playing with this component I wanted to share a bit of my experience.

I’ll start by thanking the developer @blakeblackshear for producing and sharing this software.
The object detection, recording and direct integration into Home Assistant is seamless and just works very well.
I have not come across another piece of software that does all of this, this well, and without having to stitch different pieces together.

The only struggle I’ve had is related to latency, and after reading several associated posts I now understand that it is more of a Home Assistant limitation than anything else.

In my case 4 cameras had an average latency of approximately 10 seconds, which is not a big issue for me, but my doorbell had about 35 seconds, which is significant.

I tried some of the suggestions, namely config tweaks, the WebRTC component and the Live555 RTSP proxy, but I didn’t get a significant improvement.

Ultimately what worked for me is ZoneMinder. Loading the cameras into ZoneMinder for monitoring only and adding them into HA via the HA integration produced the best results: nearly imperceptible latency.

So I have Frigate doing the whole object detection, recording and notification job, whilst ZoneMinder only provides a “window”, and I’m happy with the result.

Once again thank you!

Curious… is your HA running on a Pi or a NUC?
I’m getting near-zero latency with my 9 cams running on my NUC.

Second question: which camera brand? I’m using Reolinks with RTMP streams. Runs flawlessly…

On one of my Lovelace dashboards I have placed one of the person-detection still images.

type: picture-entity
entity: camera.front_person
tap_action:
  action: more-info

Is it possible to configure it so that when it is clicked it goes to the clip for that person trigger?
I see there is a clip API:

https://HA_URL/api/frigate/notifications/<event-id>/<camera>/clip.mp4

However, I’m not sure how to grab the “event-id”.
Looking in Developer Tools / States, my camera.front_person entity does not give an ID:

access_token: d2ab1f018bef12a43fd735815273194c3ab751451bf8986d509432d5a015f0ff
friendly_name: Front Person
entity_picture: /api/camera_proxy/camera.front_person?token=d2ab1f018bef12a43fd735815273194c3ab751451bf8986d509432d5a015f0ff
supported_features: 0
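
One idea (untested; the topic and payload fields are assumptions based on the frigate/events MQTT messages) would be an MQTT sensor that keeps the most recent event id, which an automation or template could then use to build the clip URL. It would also need filtering by camera if you have more than one:

sensor:
  - platform: mqtt
    name: Front Last Event ID
    state_topic: frigate/events
    value_template: "{{ value_json['after']['id'] }}"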

Unraid server on a Ryzen 3900X, 64 GB RAM. Plenty of power.

Foscam cameras, and that may be part of the problem. But ZoneMinder doesn’t seem to care.

I got it working! And it performs well even without a Google Coral.

See this thread:

@blakeblackshear I have set up Frigate on Unraid as a Docker container and installed the companion integration within HA, and everything seems to be set up right. I don’t get any errors while starting up the Frigate Docker container, and I see all the MQTT notifications etc. in HA. I also see clips and recordings in the media browser. I currently have two issues which I cannot figure out:

  1. Detection for my cameras stops working almost immediately (it records maybe a clip or two from one camera) before it just stops registering any person or movement. There are no errors or messages; Frigate is still running, but does nothing.

  2. The recorded clip (using the higher-resolution RTSP stream) has no audio, even though I have verified via VLC that the same stream does have audio. Is this expected?

My cameras are UniFi G3s running in standalone mode.
I currently do not have a Coral device or anything else; I’m just using the CPU to test. My server has 12 cores / 24 threads and it’s not being taxed much, so it’s not a CPU or memory bottleneck.
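
For issue 2, one guess: if I’m reading the defaults right, Frigate’s clip output args end in -an (drop audio), so overriding output_args for the clips role might be what’s needed (an untested assumption on my part):

cameras:
  front:   # hypothetical camera name
    ffmpeg:
      output_args:
        clips: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -c:a aac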

I cannot seem to find a way to upload/attach my config.yml here.