Local realtime person detection for RTSP cameras

16ms per frame works out to just over 60fps in theory, so the 50fps ballpark sounds about right once you allow for overhead, but other things could impact it as well. I won’t know for sure until I try it, obviously. I can’t think of a reason frigate would choke on a portrait orientation. Anything in the logs?

No, the log just stops showing any new events and I can’t open the cameras… I’m also getting cases where the camera feed is there but no bounding boxes are showing…

Any hints for running on Synology? Did you set it all up using the shell, or do you use the Synology UI? How much CPU is it using?

I used the shell to install it, with this command:

docker run -d --name frigate --privileged --restart unless-stopped \
  --device /dev/dri:/dev/dri \
  -v /dev/bus/usb:/dev/bus/usb \
  -v /volume1/apps/configs/frigate/config:/config:ro \
  -v /volume1/motion/storage/frigate:/storage:rw \
  -p 4444:4000 \
  blakeblackshear/frigate

Running 5 cameras at the moment at about 8fps per camera on the low-quality setting; it averages about 20-23% CPU. I tried using a VAAPI-enabled ffmpeg build as well to offload the processing, but I never got it to work reliably.

First you pulled the repository and built the docker container, though, right?

No, I just pulled the pre-built version. I did try building the container on the NAS, but it failed and I moved on.

I ended up forking the repo and doing my own build, which works well. You can try that if you like; just replace the repo name with sneighbour/frigate.

Okay, as I have the identical model, I will try yours if it’s working well for you. Thanks.

Here is my Synology export; this might be easier for you. You can import it directly into the GUI if you want. I’d change

"image" : "sneighbour/frigate:latest",

to

"image" : "blakeblackshear/frigate:latest",

though, if I were you, to ensure you get updates. I’ve also changed the port mappings and added a few extra folders so I could mess with the Python scripts locally:

{
   "cap_add" : null,
   "cap_drop" : null,
   "cmd" : "python3 -u detect_objects.py",
   "cpu_priority" : 50,
   "devices" : [
      {
         "CgroupPermissions" : "rwm",
         "PathInContainer" : "/dev/dri",
         "PathOnHost" : "/dev/dri"
      }
   ],
   "enable_publish_all_ports" : false,
   "enable_restart_policy" : true,
   "enabled" : true,
   "env_variables" : [
      {
         "key" : "TZ",
         "value" : "Australia/Brisbane"
      },
      {
         "key" : "PATH",
         "value" : "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
      },
      {
         "key" : "PYTHONPATH",
         "value" : ":/usr/local/lib/python3.5/dist-packages/tensorflow/models/research:/usr/local/lib/python3.5/dist-packages/tensorflow/models/research/slim"
      }
   ],
   "exporting" : false,
   "id" : "74bb380ce3429d1460d16d4b45529b76e13123a4a3d0ea77684ee8e26339cc1e",
   "image" : "sneighbour/frigate:latest",
   "is_ddsm" : false,
   "is_package" : false,
   "links" : [],
   "memory_limit" : 0,
   "name" : "frigate-vaapi",
   "network" : [
      {
         "driver" : "bridge",
         "name" : "bridge"
      }
   ],
   "network_mode" : "default",
   "port_bindings" : [
      {
         "container_port" : 4000,
         "host_port" : 4444,
         "type" : "tcp"
      }
   ],
   "privileged" : true,
   "shortcut" : {
      "enable_shortcut" : false
   },
   "ulimits" : null,
   "use_host_network" : false,
   "volume_bindings" : [
      {
         "host_volume_file" : "/apps/configs/frigate/config",
         "mount_point" : "/config",
         "type" : "ro"
      },
      {
         "host_volume_file" : "/apps/configs/frigate",
         "mount_point" : "/opt/frigate",
         "type" : "rw"
      },
      {
         "host_volume_file" : "/motion/storage/frigate",
         "mount_point" : "/storage",
         "type" : "rw"
      },
      {
         "host_absolute_path" : "/dev/bus/usb",
         "mount_point" : "/dev/bus/usb",
         "type" : "rw"
      }
   ],
   "volumes_from" : null
}

Could you add the scripts folder (specifically scripts/install_edgetpu_api.sh) to your ffmpeg-subprocess branch? It’s called out in the Dockerfile. I’m trying to debug my size issue while decreasing the framerate from the ffmpeg subprocess.

Or any other changes that might help make it work. I’m hitting a lot of dead ends getting it set up in the branch’s current state, but I remember you mentioning a while back that you had some local changes you hadn’t pushed.

Happy to add it, but I don’t see where I’ve included/referenced that file. Am I missing it?

It’s likely better to wait for a new official build; mine is working but hacked to be way different than the legit frigate.

Oops. Sorry. I should have been more specific. I was asking if @blakeblackshear could add the scripts folder on his partial branch.

Unless he’s going a different route. But I was hoping to use that branch so I could limit the framerate with the ffmpeg -r option (since my cameras can’t adjust their framerate).

Not sure how I missed that. I can add it tonight.

Haha, that makes more sense. Sorry, I thought you were replying to my Synology-specific comments!

Yeah, I noticed that was missing and just ignored it - my hacked-up branch just copies the correct files into place without running the script. I believe the end result is the same.

I just added the scripts. I’m not sure that using ffmpeg to limit the frame rate will work like you want, so I went ahead and implemented a frame rate limiter on that branch too. I haven’t tested it much, but if you add a take_frame parameter to a camera in your config, it should process every nth frame. For example, if it is set to 2, it will process every 2nd frame; set to 3, it will process every 3rd frame. See the example here: https://github.com/blakeblackshear/frigate/blob/4ce6f657a1dca1280587e9df3d8b45782a8fe8a5/config/config.yml#L19
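
Conceptually, take_frame works like this (a rough Python sketch of the skip logic, not frigate’s actual code; every name below is illustrative):

def capture_frame():
    """Placeholder for reading one decoded frame from the camera."""
    return object()

def detect_objects(frame):
    """Placeholder for the actual detection step."""
    pass

take_frame = 2  # 2 = process every 2nd frame, 3 = every 3rd, etc.
frame_counter = 0

for _ in range(10):  # stand-in for the real capture loop
    frame = capture_frame()
    frame_counter += 1
    if frame_counter % take_frame != 0:
        continue  # drop this frame and move on immediately
    detect_objects(frame)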

Thanks for the commits! I’ll see if I can take a look at it today. Sorry about my wave of questions; I’m always trying to understand why decisions were made.

Two questions. Is the take_frame=1 default argument value necessary here:

when the default value is already set here?

Or is it guarding against other issues / aiding readability?

What’s the difference between take_frame and adding an ffmpeg -r X option? Is it because pulling frames is considered a lossless operation, versus re-encoding the stream to a lower framerate? Not to mention, it’s probably more performant?

My (probably lossy) method:

https://github.com/aav7fl/frigate/blob/d65e9321913fb8639f453a7b4a7f652100599778/frigate/video.py#L34

I’ll see if I can get a PR out tonight to add the label on that branch and resolve my own issue (https://github.com/blakeblackshear/frigate/issues/42).

I’m thinking of adding a new field to the obj here that contains the ['person_size'] so we don’t need to re-calculate it when it gets passed into get_current_frame_with_objects(self).

Setting it here:

and using it here:
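
Roughly, the idea is something like this (a sketch only; the coordinate field names on the object dict are my assumptions, not frigate’s actual schema):

obj = {'xmin': 120, 'ymin': 80, 'xmax': 260, 'ymax': 420}  # assumed shape

# setting it once when the object is recorded...
obj['person_size'] = (obj['xmax'] - obj['xmin']) * (obj['ymax'] - obj['ymin'])

# ...and using it later without recalculating
print("person {}".format(obj['person_size']))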

I opened up a PR to add the area labels to the bounding boxes (from the ffmpeg branch).

https://github.com/blakeblackshear/frigate/pull/47

It’s now working great! Thanks to your latest changes with take_frame, I have a lower framerate, more zones, and now my newly added area label.

Turns out I was wayyy underestimating my person area.

This also helped bring the load on my system down from 5-6 to 1-2. Thanks!

To answer your previous questions:

  1. Technically, the take_frame=1 default value isn’t necessary because I never call that function without passing a value. It just ensures the default is full speed if I use it differently in the future.
  2. I am not sure exactly what the -r option does in ffmpeg, but I wouldn’t be surprised if it just limited the speed at which it processed frames. You really want it to skip/drop frames and move on as quickly as possible. There is probably some optimized way to do that in ffmpeg, and it may be more efficient than what I am currently doing (see the sketch below). The problem is always that you must decode every I-frame in an H264 stream, or you can’t reconstruct a full frame for the other frames.
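
For reference, the -r approach from the linked video.py looks roughly like this (a sketch only; the stream URL and rate are placeholders). With -r on the output side, ffmpeg still decodes every frame internally and then drops decoded frames to hit the requested output rate:

import subprocess

ffmpeg_cmd = [
    'ffmpeg',
    '-i', 'rtsp://user:pass@camera/stream',  # placeholder stream URL
    '-r', '5',          # emit at most 5 frames per second on the output side
    '-f', 'rawvideo',   # raw frames on stdout, no container
    '-pix_fmt', 'rgb24',
    'pipe:',
]
proc = subprocess.Popen(ffmpeg_cmd, stdout=subprocess.PIPE)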

Is it possible to get the person_score or confidence value passed through in the MQTT message as well? If so, I’m thinking I could use that score in a value_template in the binary sensor to create a dynamic/time-of-day min_threshold.

That is actually how I initially implemented it. The problem is that the value changes on almost every frame while someone is detected, which resulted in frigate flooding MQTT. What if you could define your own score thresholds, such as low, medium, and high, for each camera? Then you could dynamically choose which score threshold triggers an event however you want.
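
Something like this, conceptually (purely hypothetical; no such option exists in frigate yet, and all the names and numbers here are made up):

THRESHOLDS = {'low': 0.5, 'medium': 0.7, 'high': 0.85}

def should_trigger(score, level):
    """Return True if a detection score clears the chosen named threshold."""
    return score >= THRESHOLDS[level]

# daytime: accept medium-confidence detections; night: require high
print(should_trigger(0.75, 'medium'))  # True
print(should_trigger(0.75, 'high'))    # False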

Is that low/medium/high function currently available? I know that I can set different thresholds for each camera, but I’m having an issue with my courtyard camera “seeing” a person at night when there is no one there. I’m thinking it’s a function of the IR. So I would like to set it to low/medium during the day and high at night. Does that make sense?