Local realtime person detection for RTSP cameras

Here are the lines I would guess you are looking for: https://github.com/blakeblackshear/frigate/blob/ffmpeg-subprocess/frigate/video.py#L16-L50

I haven’t added the ability to pass in separate parameters yet. On the Odroid, the ffmpeg binary is a special build that uses hwaccel by default. Typically, you would add something like the following params to ffmpeg on an x86 Intel machine: -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi
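
To illustrate where those flags would go, here is a rough sketch of splicing them into an ffmpeg command line (illustrative only, not the actual video.py code; the RTSP URL and the output args after -i are placeholders). Note the hwaccel flags are input options, so they must appear before -i:

```python
# Sketch: prepending VAAPI hwaccel flags to an ffmpeg invocation on an
# x86 Intel machine. The URL and output options here are hypothetical.
rtsp_url = "rtsp://192.168.1.50:554/stream"  # placeholder

hwaccel_args = [
    "-hwaccel", "vaapi",
    "-hwaccel_device", "/dev/dri/renderD128",
    "-hwaccel_output_format", "vaapi",
]
ffmpeg_cmd = (
    ["ffmpeg"]
    + hwaccel_args  # input options: must come before -i
    + ["-i", rtsp_url, "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:"]
)
print(" ".join(ffmpeg_cmd))
```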

I am still running my 6 camera setup off the NUC. I have been experimenting with different SBCs to see what will work best as somewhat of an “AI NVR”. I'm looking for the best combination of hardware accelerated decoding of the stream and USB 3 speeds that can get the most out of the Coral. The Odroid is much better than the Raspberry Pi, but still slower than a NUC. Still trying to figure out if I can improve things on the Odroid. Here are the inference times I have seen so far:

Raspberry Pi 3B+ - 50ms (10-20fps)
Odroid-XU4 - 20ms (25-50fps)
NUC/Laptop - 5ms (100-200fps)
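
Those frame rate ranges track the inference latency: throughput is roughly bounded by 1000 / latency-in-ms inferences per second, assuming inference is the bottleneck. A quick sanity check:

```python
# Rough upper bound on detections per second given the per-frame
# inference latency (assumes inference is the bottleneck).
def max_fps(latency_ms: float) -> float:
    return 1000.0 / latency_ms

print(max_fps(50))  # Raspberry Pi 3B+: 20.0
print(max_fps(20))  # Odroid-XU4: 50.0
print(max_fps(5))   # NUC/Laptop: 200.0
```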

What is the spec of your i3 NUC?

Is your i3 NUC using a 2.4 GHz processor, and what is the RAM size?

2.4 GHz with 8 GB of RAM

@tube0013 would you mind sharing a scrubbed version of your config.yml for your Unifi cameras? I keep getting a connection refused error when trying to use docker-compose for my cameras.

Here you go… note that the user and password are totally made up, since the camera doesn’t require them. I think any random word/letter combo would work.

cameras:
  unifi:
    rtsp:
      host: 192.168.1.xxx
      port: 7447
      path: /xxxxxxxxxxxxxx
      user: this
      password: could
    regions:
      - size: 1920
        x_offset: 0
        y_offset: 0
        min_person_area: 5000
        threshold: 0.8
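
For reference, those fields get assembled into a standard RTSP URL. A quick sketch of the layout (the values below are placeholders, not my real camera address; the helper name is made up):

```python
def build_rtsp_url(user: str, password: str, host: str, port: int, path: str) -> str:
    # Standard RTSP URL layout: rtsp://user:password@host:port/path
    return f"rtsp://{user}:{password}@{host}:{port}{path}"

# Placeholder values mirroring the config fields above:
print(build_rtsp_url("this", "could", "192.168.1.50", 7447, "/abcdef"))
# rtsp://this:could@192.168.1.50:7447/abcdef
```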

Awesome! And you had to change from Unifi mode to Standalone mode in the camera config to get it to stream RTSP? As for the path, will that show up when I enter the camera address?

I have Unifi Protect, so I get the RTSP stream from there.

Great! Well, I think you gave me enough to be dangerous. Thanks for the help and quick reply!

It looks like I was able to get connected to my camera, but I got the following error before the container exited:

tc23@NUC:~/frigate$ docker-compose up
Creating network "frigate_default" with the default driver
Creating camera_1 ... done
Attaching to camera_1
camera_1 | On connect called
camera_1 | [mp3 @ 0x3817f40] Header missing
camera_1 | Traceback (most recent call last):
camera_1 |   File "detect_objects.py", line 99, in <module>
camera_1 |     main()
camera_1 |   File "detect_objects.py", line 53, in main
camera_1 |     cameras[name] = Camera(name, config, prepped_frame_queue, client, MQTT_TOPIC_PREFIX)
camera_1 |   File "/opt/frigate/frigate/video.py", line 137, in __init__
camera_1 |     self.frame_shape = get_frame_shape(self.rtsp_url)
camera_1 |   File "/opt/frigate/frigate/video.py", line 102, in get_frame_shape
camera_1 |     frame_shape = frame.shape
camera_1 | AttributeError: 'NoneType' object has no attribute 'shape'

That means it failed to capture a valid frame from the camera when it was checking the resolution of the feed. Can you open the RTSP URL in VLC?

Yep, I am using my hacked Wyze camera for testing purposes. I probably didn’t set the region properly. How do you go about identifying the area that you want the container to capture? Meaning, is there a way to identify the proper region of interest?

It doesn’t look like it even made it to the part where regions are parsed in that error message, so I don’t think that is the issue. You want a region to be able to capture an entire person in the square. Top left corner is 0,0. Size is width and height of the square. The region should stay completely within the frame.
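
That containment rule is easy to check mechanically. A small sketch (the function name is made up; frame dimensions are whatever your camera actually outputs):

```python
def region_fits(size: int, x_offset: int, y_offset: int,
                frame_width: int, frame_height: int) -> bool:
    # Top-left corner of the frame is (0, 0); the region is a square of
    # side `size` whose top-left corner sits at (x_offset, y_offset).
    # It fits only if it stays completely within the frame.
    return x_offset + size <= frame_width and y_offset + size <= frame_height

# A 350px region at (0, 300) on a 1280x720 feed fits:
print(region_fits(350, 0, 300, 1280, 720))   # True
# A 1920px square cannot fit vertically in a 1920x1080 frame:
print(region_fits(1920, 0, 0, 1920, 1080))   # False
```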

Ok, I was able to get it to connect properly (I believe), but when I try to access localhost:5000/camera_1, I get the following errors:

camera_1 | /usr/local/lib/python3.5/dist-packages/werkzeug/filesystem.py:60: BrokenFilesystemWarning: Detected a misconfigured UNIX filesystem: Will use UTF-8 as filesystem encoding instead of 'ascii'
camera_1 |   BrokenFilesystemWarning,
camera_1 | 192.168.86.53 - - [06/Jun/2019 01:35:44] "GET /camera_1 HTTP/1.1" 500 -
camera_1 | Error on request:
camera_1 | Traceback (most recent call last):
camera_1 |   File "/usr/local/lib/python3.5/dist-packages/werkzeug/serving.py", line 303, in run_wsgi
camera_1 |     execute(self.server.app)
camera_1 |   File "/usr/local/lib/python3.5/dist-packages/werkzeug/serving.py", line 293, in execute
camera_1 |     for data in application_iter:
camera_1 |   File "/usr/local/lib/python3.5/dist-packages/werkzeug/wsgi.py", line 507, in __next__
camera_1 |     return self._next()
camera_1 |   File "/usr/local/lib/python3.5/dist-packages/werkzeug/wrappers/base_response.py", line 45, in _iter_encoded
camera_1 |     for item in iterable:
camera_1 |   File "/opt/frigate/detect_objects.py", line 88, in imagestream
camera_1 |     frame = cameras[camera_name].get_current_frame_with_objects()
camera_1 | KeyError: 'camera_1'
camera_1 | 192.168.86.53 - - [06/Jun/2019 01:35:45] "GET /favicon.ico HTTP/1.1" 500 -
camera_1 | Error on request:
camera_1 | Traceback (most recent call last):
camera_1 |   File "/usr/local/lib/python3.5/dist-packages/werkzeug/serving.py", line 303, in run_wsgi
camera_1 |     execute(self.server.app)
camera_1 |   File "/usr/local/lib/python3.5/dist-packages/werkzeug/serving.py", line 293, in execute
camera_1 |     for data in application_iter:
camera_1 |   File "/usr/local/lib/python3.5/dist-packages/werkzeug/wsgi.py", line 507, in __next__
camera_1 |     return self._next()
camera_1 |   File "/usr/local/lib/python3.5/dist-packages/werkzeug/wrappers/base_response.py", line 45, in _iter_encoded
camera_1 |     for item in iterable:
camera_1 |   File "/opt/frigate/detect_objects.py", line 88, in imagestream
camera_1 |     frame = cameras[camera_name].get_current_frame_with_objects()
camera_1 | KeyError: 'favicon.ico'
camera_1 | Opening the RTSP Url...
camera_1 | Unable to grab a frame

I assume this has something to do with how I am defining the region? I am still confused about how to find the proper x and y coordinates (sorry, I know I am being dense about it). The image size is 1280x720.

My current config.yml looks like this:

web_port: 5000

mqtt:
  host: 192.168.xx.xx
  topic_prefix: frigate

cameras:
  back:
    rtsp:
      user: xxxxx
      host: 192.168.xx.xxx
      port: xxxx
      password: $RTSP_PASSWORD
      path: /xxxxx
    mask: back-mask.bmp
    regions:
      - size: 350
        x_offset: 0
        y_offset: 300
        min_person_area: 5000
        threshold: 0.5


Try http://localhost:5000/back

In your config, you named the camera back, not camera_1.

You’re a champion! Thank you for catching my dumb mistake. I still get a bunch of these messages:

camera_1 | Unable to grab a frame
camera_1 | [mp3 @ 0x2b8f040] Header missing

Is that okay or am I missing something in my config?

If you can see your video feed, you are ok. Those are ffmpeg errors. You may periodically get blips like that in your video feed, but it should handle them. Not all cameras have an error-free feed for 100% of frames, especially if they use WiFi.

Amazing, and thanks for the help! Just to confirm, this version only uses the Coral? I just want to make sure I’m not accidentally taxing my CPU (sorry for the dumb questions). Let us know when we can buy you a coffee!