Local realtime person detection for RTSP cameras

Frigate works great! Thanks for putting this together.

Now I need to know how to train it to recognize different objects. In particular, I wish it were trained for “generic animal” rather than “dog”/“cat”/etc. My cameras don’t tend to see many giraffes or elephants - but lots of squirrels, raccoons, deer & coyotes. It classifies the coyote as a “dog” (which it is) and bobcats as “cat” (OK, I can live with that), but it usually ignores deer and other critters I’d like to alert on entirely.


Coral dual m.2 with E->M adapter in NUC 8 running on HAOS6rc2.

As noted elsewhere, the adapter only provides a single PCIe lane, so you only get access to a single TPU on the board. I’m getting a 20.95ms inference speed compared to 100+ ms on the CPU.

Now the question is…can the other TPU be presented via USB off of the adapter headers to the internal USB headers on the NUC?

I was never able to figure out how to ffprobe the stream but I did find a log from someone else who did manage to do it.
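For anyone else stuck on this, a typical ffprobe invocation against an RTSP stream looks something like the following (the URL, user, and password are placeholders for your own camera’s values):

```shell
# Probe an RTSP stream and print the stream info as JSON.
# -rtsp_transport tcp is often more reliable than the default UDP.
ffprobe -rtsp_transport tcp \
  -v quiet -print_format json -show_streams \
  "rtsp://user:[email protected]:554/h264Preview_01_sub"
```

The JSON output lists each stream’s codec, resolution, and frame rate, which is what you need for Frigate’s width/height/fps settings.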

Hi guys,

I have a netatmo camera whose live feed apparently can be accessed by a browser connecting to:

http://CAMERA_IP/ACCESS_KEY/live/files/high/index.m3u8

Does anyone know if this is supported by Frigate?
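For what it’s worth, Frigate hands the input path straight to ffmpeg, and ffmpeg can read HLS, so in principle you can use the m3u8 URL directly as an input path. A rough sketch (the camera name, role, and dimensions here are assumptions, not a tested config):

```yaml
cameras:
  netatmo:
    ffmpeg:
      inputs:
        - path: http://CAMERA_IP/ACCESS_KEY/live/files/high/index.m3u8
          roles:
            - detect
    width: 1280
    height: 720
    fps: 5
```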

I’m still getting this error multiple times in the log and I’m not sure how to solve it.
ffmpeg.drivewaycam.detect ERROR : [segment @ 0x56352b04c480] Non-monotonous DTS in output stream 0:0; previous: 75608497, current: 75608490; changing to 75608498. This may result in incorrect timestamps in the output file.
Any thoughts or advice please.

I have two cams working and the other one seems ok.
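In case it helps: non-monotonous DTS warnings usually mean the camera is emitting timestamps that occasionally go backwards. One workaround that is often suggested (an assumption, not an official fix; merge it into your existing config, and note that depending on your Frigate version the input args may need to live at the camera’s ffmpeg level instead) is to have ffmpeg regenerate timestamps for that input:

```yaml
cameras:
  drivewaycam:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@camera/stream   # your existing path
          roles:
            - detect
          input_args:
            - -rtsp_transport
            - tcp
            - -fflags
            - +genpts+discardcorrupt
            - -use_wallclock_as_timestamps
            - "1"
```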

Ok, after some tests with a desktop PC, I set up a RPi 3+ with a USB Coral running Frigate and 2 EZVIZ cams.
All is OK (a big, big thank you @blakeblackshear for this amazing work… well done), with inference speeds around 100ms (ranging from 80ms to 200ms). I know about the USB 2 speed limit, though, so after a testing period I will buy a RPi 4 with USB 3.

I looked through the whole thread but didn’t find an answer on how to manually delete all videos and snapshots. Is it possible, or should I wait for the automatic jobs to do that?

Thank you

Hi @mr6880… trying to use your command_line sensor to pull the temp of my Coral, but no success - likely because I’m running HassOS as a VirtualBox VM on my Ubuntu host system. Does anyone know a workaround to get command_line calls through to the host system?
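One workaround that may help (an untested sketch; the SSH key path, user@host, and the apex temp file location are all assumptions for a PCIe Coral on a Linux host) is to have the command_line sensor SSH from the VM into the Ubuntu host and read the temperature there:

```yaml
sensor:
  - platform: command_line
    name: coral_temp
    command: >-
      ssh -i /config/.ssh/id_rsa -o StrictHostKeyChecking=no
      user@host_ip "cat /sys/class/apex/apex_0/temp"
    unit_of_measurement: "°C"
    value_template: "{{ (value | int / 1000) | round(1) }}"
```

This requires a passwordless SSH key that the HA container can read, and the driver reports the value in millidegrees, hence the division by 1000.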

First, big props to @blakeblackshear for creating this.

Then I have two questions

  1. I have an inference speed of 130ms. Will this get better with a Coral? I also use the automation from the docs for notifications, but the notification is slow. Does this come from the inference speed?
  2. The automation fires multiple times within a second. I thought it would fire only once, as long as the object isn’t gone. Am I misunderstanding something?

My setup:
Raspi 4, 4GB
HomeAssistant 64-Bit
core-2021.5.0
Frigate installed via add-ons v1.13

frigate.yml

mqtt:
  host: xxx
  user: xxx
  password: xxx
cameras:
  hof:
    ffmpeg:
      hwaccel_args:
        - -c:v
        - h264_v4l2m2m
      inputs:
        - path: rtsp://user:[email protected]:554/h264Preview_01_main
          roles:
            - clips
        - path: rtsp://user:[email protected]:554/h264Preview_01_sub
          roles:
            - detect
    width: 640
    height: 352
    fps: 5
    objects:
      track:
        - person
        - car
        - dog
    motion:
      mask:
        - 304,0,253,187,201,207,207,352,0,352,0,0 
        - 640,352,525,352,586,0,640,0
      threshold: 50
      contour_area: 150
    zones:  
      strasse:
        coordinates: 312,82,576,88,585,0,332,0  
      einfahrt:
        coordinates: 194,352,524,352,574,86,318,85
    snapshots:
      enabled: true
      retain:
        default: 10
        objects:
          person: 15
    clips:
      enabled: true
      retain: 
        default: 7


detectors:
  cpu1:
    type: cpu
  cpu2:
    type: cpu

Automation

- alias: kamera_hof_benachrichtigung
  id: kamera_hof_benachrichtigung
  description: >-
    Benachrichtigung wenn eine Person in der Einfahrt erkannt wird.
  trigger:
    platform: mqtt
    topic: frigate/events

  condition:
    - "{{ trigger.payload_json['after']['label'] == 'person' }}"
    - "{{ 'einfahrt' in trigger.payload_json['after']['entered_zones'] }}"

  action:
    - service: notify.mobile_app_suedpack_iphone
      data_template:
        message: "A {{trigger.payload_json['after']['label']}} has entered the yard."
        data:
          image: "https://l0s78v5e5n18jvi2khsnff0axlg80pnf.ui.nabu.casa/api/frigate/notifications/{{trigger.payload_json['after']['id']}}/thumbnail.jpg"
          tag: "{{trigger.payload_json['after']['id']}}"

    - service: notify.mobile_app_suedpack_iphone
      data_template:
        message: 'Es wurde Bewegung im Hof registriert um {{now().strftime("%H:%M %d-%m-%y")}} '
        data:
          attachment:
            content-type: jpeg
          push:
            badge: 0
            sound:
              name: bewegung_hof
              critical: 1
              volume: 1.0
            category: camera
          entity_id: camera.garten_kamera_hof

The problem is that the last action part fires multiple times. Why does this notification come through multiple times while the other one doesn’t?

Using a Coral will make a big difference. This is my inference speed. :slight_smile:
[screenshot of inference speed]

Wonderful. Does this also affect the speed of the automation? My problem is not that the automation fires 120ms too late; sometimes it needs a second or more, and by then the person is out of sight.

Or maybe another question: where do I benefit from fast inference speed? The load on the Raspi is quite low with one camera.

Are you using an SSD or SD card?
An SSD will make a huge difference when saving images/files…

Question:
I’m thinking about creating a new instance of HA, moving Frigate to that PC, and using the HA remote integration…

Will it work with snapshots and nabucasa?

It will be a 10x speed improvement for inference AND it will eliminate the CPU usage spikes from object detection. An inference speed of 130ms can’t add more than 130ms to the notification delay, except that it often takes multiple runs of object detection to narrow in on the object in question. This may cause skipped frames and make it take longer to confirm the object is not a false positive.

Updates are posted to the mqtt topic as better images are found in subsequent frames. Take a look at the event type field to limit it to the initial detection or the end of the detection. Also, see the blueprint posted recently.
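For reference, each message on frigate/events is JSON with a type field: “new” on the initial detection, “update” as better frames arrive, and “end” when the object disappears. A trimmed, hypothetical payload (the id is made up):

```json
{
  "type": "end",
  "before": { },
  "after": {
    "id": "1621456789.123456-abcdef",
    "label": "person",
    "entered_zones": ["einfahrt"]
  }
}
```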

That lines up with my 17-19ms with a single USB Coral on a Pi 4. Pre-ordered the PoE+ HAT (the switch is already PoE+ compliant) so I can hopefully run a pair plus a USB NVMe.

I use an SSD.

So maybe the delay comes from skipped frames. The Coral should bring an improvement, right?

Sorry, but I am not that good with MQTT. I understand that there is an event which is only fired once, but how can I change the automation to use that?

I think the problem is that your example notification gets replaced when the automation fires multiple times (Introduction | Home Assistant Companion Docs).
The other notification is a critical notification, so it isn’t replaceable. I will test it tomorrow.
Then I still need a workaround for critical notifications. Maybe clearing the old notification before sending the new one.

I’m already running the Pi 4 8GB with the PoE HAT and USB Coral; it works. I’m also using a data SSD. I’ll be curious to know whether PoE will be enough to run another USB Coral…
Btw, my main pain is that the Pi 4 can’t manage hardware acceleration for H.264 when the resolution is higher than 1080p (thank you @mr6880 for spotting that).
So, if you want to use hw accel and keep CPU usage low you have to work with 1080p; if you want to use a higher resolution you will increase CPU usage.

  • H.265 (4kp60 decode), H264 (1080p60 decode, 1080p30 encode)

@blakeblackshear, any chance of getting H.265 decode support?

Regards

If ffmpeg supports it, so does Frigate. It’s possible I may have to compile ffmpeg with another flag, but that’s the benefit of using ffmpeg for intake: it already supports almost every known source of image data.

I would take a look at this blueprint.

You can add another condition to look at the event type like this to only send a single notification when the event ends:

- alias: kamera_hof_benachrichtigung
  id: kamera_hof_benachrichtigung
  description: >-
    Benachrichtigung wenn eine Person in der Einfahrt erkannt wird.
  trigger:
    platform: mqtt
    topic: frigate/events

  condition:
    - "{{ trigger.payload_json['after']['label'] == 'person' }}"
    - "{{ 'einfahrt' in trigger.payload_json['after']['entered_zones'] }}"
    - "{{ trigger.payload_json['type'] == 'end' }}"

  action:
    - service: notify.mobile_app_suedpack_iphone
      data_template:
        message: "A {{trigger.payload_json['after']['label']}} has entered the yard."
        data:
          image: "https://l0s78v5e5n18jvi2khsnff0axlg80pnf.ui.nabu.casa/api/frigate/notifications/{{trigger.payload_json['after']['id']}}/thumbnail.jpg"
          tag: "{{trigger.payload_json['after']['id']}}"

    - service: notify.mobile_app_suedpack_iphone
      data_template:
        message: 'Es wurde Bewegung im Hof registriert um {{now().strftime("%H:%M %d-%m-%y")}} '
        data:
          attachment:
            content-type: jpeg
          push:
            badge: 0
            sound:
              name: bewegung_hof
              critical: 1
              volume: 1.0
            category: camera
          entity_id: camera.garten_kamera_hof

Thank you, I searched for the blueprint here. Never thought of looking in the blueprint section. I will give this a try!

Something similar?

https://trac.ffmpeg.org/wiki/Encode/H.265


# Getting ffmpeg with libx265 support

ffmpeg needs to be built with the `--enable-gpl` `--enable-libx265` configuration flags and requires `x265` to be installed on your system.
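Concretely, the build step sketched by that wiki page looks roughly like this, run inside the ffmpeg source tree (the package name and parallelism are assumptions for a Debian-like system):

```shell
# Install x265 first, e.g. on Debian/Ubuntu:
#   sudo apt-get install libx265-dev
./configure --enable-gpl --enable-libx265
make -j"$(nproc)"
sudo make install
```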