Local realtime person detection for RTSP cameras

It will be a 10x speed improvement for inference, AND it will eliminate all the CPU usage spikes from object detection. An inference speed of 130ms can't add more than 130ms to the notification delay by itself, but it often takes multiple runs of object detection to narrow in on the object in question. That can cause frames to be skipped and make it take longer to confirm the object is not a false positive.

Updates are posted to the mqtt topic as better images are found in subsequent frames. Take a look at the event type field to limit it to the initial detection or the end of the detection. Also, see the blueprint posted recently.
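As a minimal sketch (field values taken from the Frigate events payload, where `type` is `new` for the initial detection, `update` when a better image is found, and `end` when the detection is over), a template condition like this limits an automation to the initial detection only:

```yaml
# Sketch: fire only on the first MQTT message for a detection.
# Change 'new' to 'end' to fire only once the detection is over.
condition:
  - "{{ trigger.payload_json['type'] == 'new' }}"
```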

That lines up with my 17-19ms with a single USB Coral on a Pi4. I pre-ordered the PoE+ Hat (my switch is already PoE+ compliant), so hopefully I can run a pair plus a USB NVMe drive.

I use an SSD.

So maybe the delay comes from skipped frames. The Coral should bring an improvement, right?

Sorry, but I am not that good with MQTT. I understand that there is an event which is only fired once, but how do I change the automation to use that?

I think the problem is that your example automation's notification gets replaced when the automation fires multiple times. Introduction | Home Assistant Companion Docs
The other notification is a critical notification, so it isn't replaceable. I will test it tomorrow.
Then I still need a workaround for critical notifications. Maybe clearing the old notification before sending the new one.

I'm already running the Pi4 8GB with the PoE hat and a USB Coral, and it works. I'm also using a data SSD. I will be curious to know whether PoE can handle another USB Coral…
Btw, my main pain point is that the Pi4 can't manage hardware-accelerated decoding of H.264 when the resolution is higher than 1080p (thanks to @mr6880 for spotting that).
So if you want to use hardware acceleration and keep CPU usage low, you have to work at 1080p; if you want to use a higher resolution, CPU usage will go up.
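One common way around that (a sketch; the camera name and stream URLs are hypothetical) is to feed the lower-resolution sub-stream to detection and keep the high-resolution main stream for clips, using Frigate's multi-input camera config:

```yaml
cameras:
  driveway:                        # hypothetical camera name
    ffmpeg:
      inputs:
        - path: rtsp://cam-ip/main # full-resolution stream, clips only
          roles:
            - clips
        - path: rtsp://cam-ip/sub  # 1080p-or-lower stream, hw-accelerated detect
          roles:
            - detect
```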

  • H.265 (4kp60 decode), H264 (1080p60 decode, 1080p30 encode)

@blakeblackshear, any chance of getting H.265 decode support?

Regards

If ffmpeg supports it, so does frigate. It’s possible I may have to compile ffmpeg with another flag, but that’s the benefit of using ffmpeg for intake. It already supports almost every known source of image data.

I would take a look at this blueprint.

You can add another condition to look at the event type like this to only send a single notification when the event ends:

- alias: kamera_hof_benachrichtigung
  id: kamera_hof_benachrichtigung
  description: >-
    Notification when a person is detected in the driveway.
  trigger:
    platform: mqtt
    topic: frigate/events

  condition:
    - "{{ trigger.payload_json['after']['label'] == 'person' }}"
    - "{{ 'einfahrt' in trigger.payload_json['after']['entered_zones'] }}"
    - "{{ trigger.payload_json['type'] == 'end' }}"

  action:
    - service: notify.mobile_app_suedpack_iphone
      data_template:
        message: "A {{trigger.payload_json['after']['label']}} has entered the yard."
        data:
          image: "https://l0s78v5e5n18jvi2khsnff0axlg80pnf.ui.nabu.casa/api/frigate/notifications/{{trigger.payload_json['after']['id']}}/thumbnail.jpg"
          tag: "{{trigger.payload_json['after']['id']}}"

    - service: notify.mobile_app_suedpack_iphone
      data_template:
        message: 'Motion was registered in the yard at {{now().strftime("%H:%M %d-%m-%y")}}'
        data:
          attachment:
            content-type: jpeg
          push:
            badge: 0
            sound:
              name: bewegung_hof
              critical: 1
              volume: 1.0
            category: camera
          entity_id: camera.garten_kamera_hof

Thank you, I searched for the blueprint here. I never thought of looking in the blueprint section. I will give this a try!

Something similar?

https://trac.ffmpeg.org/wiki/Encode/H.265


# Getting ffmpeg with libx265 support

ffmpeg needs to be built with the `--enable-gpl` `--enable-libx265` configuration flags and requires `x265` to be installed on your system.
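As an illustrative sketch only (source layout and prerequisites vary by system; this assumes the x265 development headers are already installed), the build steps would look roughly like:

```shell
# From an ffmpeg source checkout: enable GPL components and libx265,
# then build and install. Flags as given by the ffmpeg wiki above.
./configure --enable-gpl --enable-libx265
make -j"$(nproc)"
sudo make install
```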

It’s already in there: frigate/Dockerfile.ffmpeg.amd64 at 09a4d6d030fec2f9e2d3e8be1fcb5560cd66414d · blakeblackshear/frigate · GitHub

Does RC2 of HassOS now support the Coral M.2?


Cool, what hwaccel_args do I need to add to use it?
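For a Raspberry Pi, a hedged sketch of the camera-level ffmpeg section (the h264_v4l2m2m decoder name also appears in configs later in this thread; whether a matching H.265 decoder is usable depends on your ffmpeg build):

```yaml
ffmpeg:
  hwaccel_args:
    - -c:v
    - h264_v4l2m2m   # v4l2 memory-to-memory H.264 decoder on the Pi
```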

Hi all,

There have been a couple of other posts regarding hardware acceleration problems with the Raspberry Pi.
I am also facing the same issues. When I have the hardware acceleration configuration in the Frigate yml file, I get a green screen and also plenty of different errors. If I delete the hardware acceleration config, I get no errors and the stream works.

Does anybody have advice?

What I have checked:
Stream settings in camera and Frigate
Protection mode is disabled in Frigate Add-On

My setup:
Raspberry Pi 4 (4GB)
Home Assistant OS 5.13 (64-bit)
Frigate Add-on 1.13

Error logs from Frigate Add-on:

frigate.video                  INFO    : frontyard: ffmpeg process is not running. exiting capture thread...
ffmpeg.frontyard.detect        ERROR   : [h264_v4l2m2m @ 0x5590337990] Could not find a valid device
ffmpeg.frontyard.detect        ERROR   : [h264_v4l2m2m @ 0x5590337990] can't configure decoder
ffmpeg.frontyard.detect        ERROR   : Error while opening decoder for input stream #0:0 : Operation not permitted
frigate.video                  INFO    : frontyard: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have

Frigate camera configuration:

cameras:
  frontyard:
    ffmpeg:
      hwaccel_args:
        - -c:v
        - h264_v4l2m2m
      inputs:
        - path: rtsp://username:[email protected]:XXXX/unicast
          roles:
            - detect
            - clips
            - rtmp
    width: 1280
    height: 720
    fps: 5
    objects:
      track:
        - person
        - car
        - dog
        - motorcycle
        - bicycle
        - cat
    snapshots:
      enabled: true
      timestamp: true
      bounding_box: true
      retain:
        default: 5
    clips:
      enabled: true
      retain:
        default: 5

Exactly, I'm reporting the same thing. I have tried a lot of things, but nothing works for me.
It only works when I remove the hardware acceleration, but then the CPU consumption is very high.

My setup:
Raspberry Pi 4 (4GB)
Debian buster 10
Docker Home Assistant
Frigate Add-on 1.13

Good to know that I’m not the only one.

Now I started to think about this sentence from the Frigate documentation page:

Raspberry Pi 3/4 (64-bit OS) NOTICE : If you are using the addon, ensure you turn off Protection mode for hardware acceleration.

I have always thought that this is the same thing as turning off Protection mode in the add-on.
But is it the same thing?

Correct. I upgraded to RC3 on a NUC 8 yesterday and no problems so far.


I run HassOS in a VirtualBox VM (though not Frigate in its own VM) with a Wyze cam, so I'm not sure if this applies to you or not, but I have no issues. What is your config? I would start with no HW accel and see if that works. The VM can add some unnecessary layers for accessing the underlying hardware, though I do use these on my 6th gen i3:

          hwaccel_args:
            - -hwaccel
            - qsv
            - -qsv_device
            - qsv

One thing I've noticed over the past (almost) year is that the Wyze RTSP firmware might be the culprit: it's terribly unstable at times, causing the lvalue and rvalue issue to pop up. Meanwhile, I've tested other means of capturing the video: streaming mjpeg through TinyCam, and TinyCam -> Blue Iris -> Frigate. Those two work much better, but require more overhead. I wish TinyCam could restream RTSP instead of mjpeg, since it uses the Wyze credentials to pull the video; maybe one day.

The RTSP firmware works for the most part directly with Frigate, but admittedly there were a couple of times it may have skipped frames and didn't catch what it was supposed to. While the Wyze cams are nice and cheap, the RTSP firmware can be tough to work with at times.

I need some help…
When taking the latest picture from MQTT (frigate//car/snapshot), I always get bounding boxes, a timestamp, and cropping…
How do I disable them in snapshots from MQTT?
But when I view the same event in the media browser, the image is free of bounding box, crop, and timestamp.

Modify these parameters.


Done that… Still have issues… :frowning:

#############################
#        PARKERING          #
#############################
  frigate_parkering:
    ffmpeg:
      inputs:
        - path: rtmp://192.168.1.216/bcs/channel0_main.bcs?channel=0&stream=1&user=admin&password=XXX
          roles:
            - clips
            - rtmp
#            - detect
        - path: rtmp://192.168.1.216/bcs/channel0_sub.bcs?channel=0&stream=1&user=admin&password=XXX
          roles:
            - detect
    width: 640
    height: 480
#    width: 2560
#    height: 1920
    fps: 4
    detect:
      enabled: true
    clips:
      enabled: true
      retain:
        default: 2      
    snapshots:
      enabled: True
      timestamp: False
      bounding_box: False
      height: 960
      crop: False
      retain:
        default: 2      
    objects:
      track:
        - person
        - cat
        - dog
        - bicycle
        - truck
        - car
      filters:
        person:
          min_score: 0.6
        car:
          min_score: 0.6
          min_area: 24000
    motion:
      mask:
        - 0,65,301,88,436,109,589,132,640,153,640,348,640,404,640,480,640,0,0,0
      threshold: 25

For MQTT images, you have to modify the settings under mqtt. See the full example here: https://blakeblackshear.github.io/frigate/configuration/cameras#full-example
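A sketch of what that looks like, using option names from the linked full example (the values here are illustrative):

```yaml
cameras:
  frigate_parkering:
    mqtt:
      enabled: true
      timestamp: false     # no timestamp overlay on MQTT images
      bounding_box: false  # no bounding box drawn
      crop: false          # publish the full frame rather than a crop
      height: 270          # resize height of the published image
```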

The snapshot settings only apply to the files saved to disk.