Local realtime person detection for RTSP cameras

I suspect I have the problem noted in the documentation (‘I have clips and snapshots in my clips folder, but I can’t view them in the Web UI.’), but I’m struggling to get it working.

Running NUC/Proxmox/HA OS/Frigate NVR (through the supervisor). Setup went well: Frigate is capturing snapshots and clips without issue, and I can view/play them over SMB, but going through Media Browser causes issues:

Works:
Media Browser>Local Media>Frigate>Clips - I can view all the files, albeit without thumbnails for some reason. I can then launch the clips without issue. I noticed it’s being played through: https://mydomain.com:8123/media/local/frigate/clips/front_door-1619691568.383451-smnacg.mp4?authSig=(long auth string)

Doesn’t work:
Media Browser>Frigate>Clips - Again I can see all the files, this time with thumbnails! However, when launching a file I get a 503 error. This time the URL is: https://mydomain.com:8123/api/frigate/clips/front_door-1619749886.286841-4tpuco.mp4?authSig=(long auth string)

Accessing the media through the Frigate UI also gets a 503.

I’m guessing I need to mount the folder as per the FAQ, but I’m struggling to understand exactly what is needed: ‘try mounting a volume to /media/frigate inside the container instead of /media/frigate/clips’.

The current (and I’m guessing default) mount within the Frigate container is: /mnt/data/supervisor/media > /media
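For reference, on a plain Docker install (not the addon) the FAQ advice would translate to a volume mapping roughly like this sketch, where the host path and image tag are assumptions:

services:
  frigate:
    image: blakeblackshear/frigate:stable-amd64
    volumes:
      # Mount the whole media root, not just the clips subfolder,
      # so Frigate's web UI can serve clips and recordings itself.
      - /srv/frigate-media:/media/frigate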

Any suggestions on what I have done wrong? Thanks.

If you have h264 working, I have no idea why you have this permission problem.
You have 2 options with my tutorial:
1:
If you have a NAS, you can try to mount media to a network share.
2:
Go to the host machine and check if there is a permission error. If there is, do as in the tutorial but don’t change fstab; instead change the /mnt/data/supervisor/media permission with chmod +x (a sketch follows after the tutorial link).
Tutorial:
https://community.home-assistant.io/t/solved-hassos-mount-nas-network-share/
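If option 2 applies, the check and fix might look something like this on the HA OS host (a sketch; which bits are missing depends on what ls shows):

# Inspect the directory the addon's /media mount points at.
ls -ld /mnt/data/supervisor/media

# Restore the execute (directory search) bit, as in the tutorial.
chmod +x /mnt/data/supervisor/media

# If the listing shows d--------- (read gone as well), restore both:
# chmod 755 /mnt/data/supervisor/media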

I’m pretty sure you can’t change anything in the Frigate addon as it recreates itself each time. I spent too much time figuring it out.

Thanks @ukro, I came across your tutorial while trying to get this working. Nice post.

However, in this instance I would prefer to keep storage on the host HA OS rather than complicating things by bringing the NAS into play.


Thank you. Now go to the host machine and run:

ls -la /mnt/data/supervisor/;ls -la /mnt/data/supervisor/media/

and check if it has +x, or paste the output here and I will tell you.

Anything jump out?

drwxr-xr-x   12 root     root          4096 Apr 30 08:13 .
drwxr-xr-x    6 root     root          4096 Apr  7 21:52 ..
drwxr-xr-x    6 root     root          4096 Feb 26 05:31 addons
-rw-------    1 root     root         63499 Apr 30 08:13 addons.json
drwxr-xr-x    4 root     root          4096 Feb 26 05:31 apparmor
drwxr-xr-x    4 root     root          4096 Feb 26 05:31 audio
-rw-------    1 root     root            75 Apr 27 10:55 audio.json
-rw-------    1 root     root           140 Mar 26 11:06 auth.json
drwxr-xr-x    2 root     root          4096 Apr 29 01:19 backup
-rw-------    1 root     root           207 Apr 27 10:55 cli.json
-rw-------    1 root     root           666 Apr 30 08:07 config.json
-rw-------    1 root     root           874 Mar 12 08:01 discovery.json
drwxr-xr-x    2 root     root          4096 Apr 27 10:55 dns
-rw-------    1 root     root            90 Apr 27 10:55 dns.json
-rw-------    1 root     root            22 Mar  9 22:57 docker.json
d---------   21 root     root          4096 Apr 30 08:45 homeassistant
-rw-------    1 root     root           549 Apr 30 08:07 homeassistant.json
-rw-------    1 root     root          2733 Apr 27 10:55 ingress.json
d---------    5 root     root          4096 Apr 30 08:45 media
-rw-------    1 root     root            79 Apr 27 10:55 multicast.json
-rw-------    1 root     root           212 Apr 27 10:55 observer.json
-rw-------    1 root     root           453 Apr 27 10:55 services.json
d---------    4 root     root          4096 Apr 18  2020 share
d---------    2 root     root          4096 Apr 27 10:56 ssl
drwxr-xr-x    2 root     root          4096 Apr 29 11:04 tmp
-rw-------    1 root     root            98 Feb 26 06:25 tmptgn1rqw2
-rw-------    1 root     root           745 Apr 30 08:07 updater.json

d---------    5 root     root          4096 Apr 30 08:43 .
drwxr-xr-x   12 root     root          4096 Apr 30 08:13 ..
drwxr-xr-x    2 root     root          4096 Apr 29 12:11 clips
-rw-r--r--    1 root     root             0 Apr 30 08:43 eoinfile
drwxr-xr-x    4 root     root          4096 Apr  5 09:35 frigate
drwxr-xr-x    2 root     root          4096 Apr 29 12:11 recordings

Now that’s interesting, I hope this doesn’t happen to me D:
If you want, I can Skype/AnyDesk to your PC to figure it out. But I would need access to the host machine as well. If you would not be comfortable, that’s okay :slight_smile: I will not judge :heart:

Thank you for the offer but it’s not needed, you got me over the line. I was able to reset the permissions on the /media folder and we are up and running. Thanks!


you are welcome :heart:

Hi, I finally got the USB Coral :smiling_face_with_three_hearts:

I’m running a Raspi 4 8GB with Raspi OS x64; the Frigate Docker container is (frigate:stable-aarch64), with the USB Coral.

I still don’t understand whether or not I need the hwaccel_args:

    ffmpeg:
      hwaccel_args:
        - -c:v
        - h264_v4l2m2m

Everything works without it; if I configure it:

frigate_camera_tpu | frigate.edgetpu                INFO    : Attempting to load TPU as usb
frigate_camera_tpu | frigate.video                  INFO    : camera1: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate_camera_tpu | frigate.video                  INFO    : camera1: ffmpeg process is not running. exiting capture thread...
frigate_camera_tpu | frigate.mqtt                   INFO    : MQTT connected
frigate_camera_tpu | frigate.edgetpu                INFO    : TPU found
frigate_camera_tpu | ffmpeg.camera1.detect          ERROR   : [h264 @ 0x5586e6fb60] non-existing PPS 0 referenced

Do I need to configure it? And if yes, where?

Here is my camera stream (screenshot omitted), and this is my config:
width: 1280
height: 720
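For reference, a minimal sketch of where this would sit in a Frigate config on the Pi (the camera name and rtsp path are placeholders): hwaccel_args can go under the global ffmpeg section or per camera, and h264_v4l2m2m is the value passed to -c:v, not a flag of its own.

# Global hwaccel_args apply to every camera unless overridden.
ffmpeg:
  hwaccel_args:
    - -c:v
    - h264_v4l2m2m
cameras:
  camera1:
    ffmpeg:
      inputs:
        - path: rtsp://your-camera/stream
          roles:
            - detect
    width: 1280
    height: 720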

regards

@blakeblackshear
Great work, I have Frigate running on my HA and it’s working a lot better than Deepstack and EyeMotion.
At the moment I’m trying to send a message to Telegram (Node-RED - Call Service node).

But it’s not working.

Can someone guide me on how to write the JSON code in the Call Service node?
At the moment I have this:

{
  "message": "A {{ trigger.payload_json['after']['label'] }} was detected!",
  "data": {
    "photo": [
      {
        "file": "http://ccab4aaf-frigate:5000/api/events/{{ trigger.payload_json['after']['id'] }}/thumbnail.jpg",
        "caption": "A {{ trigger.payload_json['after']['label'] }} was detected on {{ trigger.payload_json['after']['camera'] }} camera"
      }
    ]
  }
}
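(Note: the Jinja lookups above use single quotes, e.g. trigger.payload_json['after']['label'], because unescaped double quotes inside a double-quoted JSON string break the payload — a likely reason a Call Service node fails silently.)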

If the images were in a folder, I know this would work, but now it’s from MQTT and I’m connected to the database.

{
  "message": "Detection Foran!",
  "data": {
    "photo": [
      {
        "file": "/config/www/snapshots/foran_detection_latest.jpg",
        "caption": "Der blev fanget en eller flere personer foran!"
      }
    ]
  }
}

I posted a new discussion topic on GitHub about a companion service I am building for training better models. Feedback is appreciated.


Same here on this topic… we all need you, I think, but the hard questions remain unanswered :sleepy:

Hi all, new to Frigate and, wow, massively impressed. The Wyze camera is working great; it looks like goodbye to the monthly subscription.

I have a query on duplicated events over the same time period, e.g. just now I had 3 events as follows:

9:11:05 - 9:11:30
9:11:05 - 9:11:30
9:11:01 - 9:11:45

I’d expect just the one event (the bottom one) as it overlaps the others. Is there something I can tweak to prevent multiple redundant HA notifications?

I’m using 2 CPU detectors, full config here:

mqtt:
  host: local.mqtt.server
cameras:
  Driveway:
    ffmpeg:
      hwaccel_args:
        - -hwaccel
        - vaapi
        - -hwaccel_device
        - /dev/dri/renderD128
        - -hwaccel_output_format
        - yuv420p
      inputs:
        - path: rtsp://camera/live
          roles:
            - detect
            - clips
        - path: rtsp://@camera/live
          roles:
            - rtmp
        - path: rtsp://camera/live
          roles:
            - record
    width: 1920
    height: 1080
    fps: 15
    clips:
      enabled: true
      pre_capture: 5
      post_capture: 5
    record:
      enabled: true
      retain_days: 15
    objects:
      track:
        - person
        - dog
        - cat
      filters:
        dog:
          min_score: 0.5
          threshold: 0.6
        cat:
          min_score: 0.5
          threshold: 0.6
        person:
          min_score: 0.5
          threshold: 0.7
    snapshots:
      enabled: true
      timestamp: true
      bounding_box: false
      crop: false
      required_zones: []
      retain:
        default: 10
        objects:
          person: 15
detectors:
  cpu1:
    type: cpu
  cpu2:
    type: cpu

Tell me about it :smiley: I went without buying a Dahua NVR for $250 and am just using this on Proxmox. Very happy with the results :slight_smile:

Also, does anyone know if the take_frame parameter still works? I’ve been trying it under ffmpeg: at the same level as fps but getting an invalid field exception. I guess it would be useful for cams like Wyze that can’t throttle fps.

Thanks.

I’ve been looking at the same thing, and it kinda makes sense why it may not be there anymore. Though I did have some questions about how the fps parameter works underneath.

So the problem with setting an fps parameter or a take_frame parameter comes down to how the h264 stream is set up. Now bear in mind, I am by no means an expert here, but you will see things like the iframe interval in your h264 setup on some cameras (some may not let you configure this). This comes into play because h264 encodes a full iframe and then only the differences between frames to construct the frames between iframes.

Using an example, if you set your iframe interval to 20 and your fps is 20, then every 20th frame will be a new iframe. This is great for saving maximum bandwidth, but the problem is that if you want to decode frame 15 of that stream, you need to decode the first frame (the iframe) and then the next 14 frames in sequence to build that 15th frame.

So the problem comes when, from the client side, you say “I only want 5 fps” but the frame rate of the camera is 20 and the iframe interval is 20: to produce 5 frames per second as output, you basically have to decode the entire 20 fps stream anyway because of the iframe interval.

Funnily enough, I have been experimenting with this recently as a way of using my high-resolution stream as the source for object detection, as in a few cases I have missed hits because the object was quite far away, think wide-angle CCTV cameras. The low-res stream doesn’t have enough resolution to detect objects that are far away, but the high-resolution stream does. The problem is I don’t really want to decode the entire high-res stream at full frame rate, as it absolutely slaughters the GPU decoder (Intel Quick Sync) to decode 12 cameras at full 2K resolution, compared to using the low res.

I was actually about to write a full post asking for more information about how the fps flag is handled internally, so this is probably a good time. My thought process was: can I use an iframe interval of, say, 4, with a frame rate from the camera of 20, and then use the fps parameter of 5 in Frigate to effectively get my 5 fps rate for decoding, but without the GPU/CPU hit? Because then ffmpeg/Frigate would only need to decode one full frame every 4 frames (5 iframes per second) to give me a motion detection stream at 5 fps, while still triggering the full stream to be recorded at the full 20 fps in BlueIris, e.g. I am using Node-RED to watch the object detection MQTT stream from Frigate and tell BlueIris when to record.
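If it helps, the iframe interval a camera actually delivers can be measured with ffprobe, something like this sketch (the rtsp URL is a placeholder):

# Print the type (I/P/B) of the first 100 frames; the spacing between
# I frames is the effective iframe interval.
ffprobe -v quiet -select_streams v:0 -show_entries frame=pict_type \
  -of csv -read_intervals "%+#100" rtsp://camera/live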


The fps parameter is very simple. It just passes a parameter to ffmpeg to specify the output frame rate. It replaced take_frame because it is more flexible and drops frames further upstream.

Your understanding of h264 is correct as far as I know. You can tell ffmpeg to skip decoding everything except iframes with the skip_frame parameter. This should dramatically reduce the resources required to decode, because it just ignores the differential frames in between. My Dahua cameras do not let me set an iframe rate lower than the fps, so that would cap my detection at 1 fps. Also keep in mind that more iframes mean less efficient compression, so your mp4 files will be larger. I wish there were cameras that supported raw uncompressed image data.
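In raw ffmpeg terms, the two behaviours look roughly like this (a sketch of the flags being discussed, not Frigate’s exact command line; the rtsp URL is a placeholder):

# Output at 5 fps: ffmpeg still decodes every input frame and drops
# the excess afterwards.
ffmpeg -i rtsp://camera/live -r 5 -f rawvideo -pix_fmt yuv420p pipe:

# Decode keyframes only: -skip_frame nokey is a decoder (input-side)
# option, so the differential frames are never decoded at all.
ffmpeg -skip_frame nokey -i rtsp://camera/live -f rawvideo -pix_fmt yuv420p pipe: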


Cheers for the detailed response, I will have a bit more of a play tomorrow when I have some more time. But you are absolutely correct about the image quality: lowering the iframe interval vastly decreased the quality of the image. For example, moving from an iframe interval of 20 with a frame rate of 20 to an iframe interval of 4 with a frame rate of 20 needed 4-5x the bandwidth to get the same image quality. So it is definitely a trade-off, but worth investigating to find a balance between being able to process the motion and being able to store it, as ultimately I can post-process the stored data down to a smaller file size during idle times.

Another area I had been playing with is H264 SVC, which from my understanding is meant to do away with the need for multiple streams for lower resolutions: you can have a single stream with layers that add resolution or quality. Getting clear documentation on what exactly this means from the CCTV vendors is proving to be a bit of a pain, as is making sure it’s actually supported by the Quick Sync/hardware decoding in use, since there would be no point if I lose hardware acceleration. From the details I have seen, vendor support is very much a mixed bag, with some implementing it only for frame rate, but it is interesting to play with.

Complexity generally increases the amount of processing and CPU usage. My gut says H264 SVC is a dead end.

Since this is a MONSTER thread and I really would like some actual help, I will open a separate thread and ask if I can please get some help on-topic…