Local realtime person detection for RTSP cameras

It worked after updating the packages.

Specify the PCI device as your detector in the addon config and see if that works.

detectors:
  coral:
    type: edgetpu
    device: pci

Thanks, it works, BUT only with a change in the plugin's config.json to expose /dev/apex_0
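For context, a minimal sketch of what that change might look like in a Home Assistant add-on's config.json (the surrounding fields are trimmed, and the exact layout of this particular add-on's file is an assumption; the key point is the "devices" list passing the TPU node through to the container):

```json
{
  "name": "Frigate",
  "devices": [
    "/dev/apex_0"
  ]
}
```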

I created a pull request

Kind regards,


Where did you see Coral available in Canada? Since the combination of an M.2 Coral plus an adapter will cost the same as the USB version, I am inclined to buy the USB version, but I cannot find it available anywhere.

Having a bit of an issue getting this running properly on my server.

The images from the cameras are all green, like some of the images I've seen above. However, the solution for those was to remove ffmpeg settings, and I haven't added any, so I'm not sure where to start.

web_port: 5000
detectors:
  coral:
    type: edgetpu
    device: 'usb:0'
save_clips:
  max_seconds: 300
  clips_dir: /clips
  cache_dir: /cache
mqtt:
  host: 192.168.X.XX
  topic_prefix: frigate
  user: USER
  password: PWD
ffmpeg: {}
cameras:
  pool:
    snapshots:
      show_timestamp: false
    ffmpeg:
      input: rtsp://user:[email protected]:XXX/h264Preview_02_sub
    height: 360
    width: 640
    take_frame: 1
    best_image_timeout: 60
  pool2:
    snapshots:
      show_timestamp: false
    ffmpeg:
      input: rtsp://user:[email protected]:XXX/h264Preview_01_sub
    height: 360
    width: 640
    take_frame: 1
    best_image_timeout: 60
objects:
  track:
    - person
    - car
    - truck
  filters:
    person:
      min_area: 1
      max_area: 1000000
      min_score: 0.1
      threshold: 0.74

Can you post your log output?

On connect called
Creating ffmpeg process...
ffmpeg -hide_banner -loglevel panic -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 -i rtsp://user:[email protected]:XXX/h264Preview_02_sub -f rawvideo -pix_fmt yuv420p pipe:
Starting detection process: 39
Attempting to load TPU as usb:0
Creating ffmpeg process...
ffmpeg -hide_banner -loglevel panic -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 -i rtsp://user:[email protected]:XXX/h264Preview_01_sub -f rawvideo -pix_fmt yuv420p pipe:
No EdgeTPU detected. Falling back to CPU.
Camera_process started for pool: 46
Starting process for pool: 46
Camera_process started for pool2: 47
Starting process for pool2: 47
 * Serving Flask app "detect_objects" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off

Try setting your ffmpeg config to:

ffmpeg:
  global_args:
    - -hide_banner
    - -loglevel
    - info

That will enable ffmpeg logging. Post updated logs here after updating.

Here's a pastebin of the log output

FFmpeg reports a different resolution than what you specified. Try updating your config to this:

web_port: 5000
detectors:
  coral:
    type: edgetpu
    device: 'usb:0'
save_clips:
  max_seconds: 300
  clips_dir: /clips
  cache_dir: /cache
mqtt:
  host: 192.168.X.XX
  topic_prefix: frigate
  user: USER
  password: PWD
ffmpeg: {}
cameras:
  pool:
    snapshots:
      show_timestamp: false
    ffmpeg:
      input: rtsp://user:[email protected]:XXX/h264Preview_02_sub
    height: 352
    width: 640
    fps: 5
    take_frame: 1
    best_image_timeout: 60
  pool2:
    snapshots:
      show_timestamp: false
    ffmpeg:
      input: rtsp://user:[email protected]:XXX/h264Preview_01_sub
    height: 352
    width: 640
    fps: 5
    take_frame: 1
    best_image_timeout: 60
objects:
  track:
    - person
    - car
    - truck
  filters:
    person:
      min_area: 1
      max_area: 1000000
      min_score: 0.1
      threshold: 0.74

That cleared up the image! It's interesting, because I had pulled the height/width directly from the camera.

On restart it's giving me a new error though

On connect called
Creating ffmpeg process...
ffmpeg -hide_banner -loglevel panic -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 -i rtsp://user:[email protected]:XXX/h264Preview_02_sub -r 5 -f rawvideo -pix_fmt yuv420p pipe:
Starting detection process: 39
Attempting to load TPU as usb:0
Creating ffmpeg process...
ffmpeg -hide_banner -loglevel panic -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 -i rtsp://user:[email protected]:XXX/h264Preview_01_sub -r 5 -f rawvideo -pix_fmt yuv420p pipe:
No EdgeTPU detected. Falling back to CPU.
Camera_process started for pool: 46
Starting process for pool: 46
Camera_process started for pool2: 47
Starting process for pool2: 47
 * Serving Flask app "detect_objects" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
Error on request:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 323, in run_wsgi
    execute(self.server.app)
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 314, in execute
    for data in application_iter:
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/wsgi.py", line 506, in __next__
    return self._next()
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/wrappers/base_response.py", line 45, in _iter_encoded
    for item in iterable:
  File "/opt/frigate/detect_objects.py", line 464, in imagestream
    frame = object_processor.get_current_frame(camera_name, draw=True)
  File "/opt/frigate/frigate/object_processing.py", line 357, in get_current_frame
    return self.camera_states[camera].get_current_frame(draw)
  File "/opt/frigate/frigate/object_processing.py", line 73, in get_current_frame
    frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_YUV2BGR_I420)
cv2.error: OpenCV(4.4.0) /tmp/pip-req-build-a98tlsvg/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<1>; VDcn = cv::impl::{anonymous}::Set<3, 4>; VDepth = cv::impl::{anonymous}::Set<0>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::<unnamed>::FROM_YUV; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 3

If you are using Reolink cams, try changing to the RTMP URL in place of the RTSP one.

And override the ffmpeg input args for RTMP.
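As a hedged sketch, a Reolink camera block using the RTMP stream might look like the following. The URL path and channel/stream numbers are assumptions based on Reolink's usual RTMP format, and the trimmed input_args are a guess at sensible overrides: the -rtsp_transport and -stimeout flags from the default args are RTSP-specific, so they are dropped here.

```yaml
cameras:
  pool:
    ffmpeg:
      # Assumed Reolink RTMP URL format; adjust channel/stream to match your camera.
      input: rtmp://192.168.X.XX/bcs/channel0_sub.bcs?channel=0&stream=1&user=USER&password=PWD
      # Override the default (RTSP-oriented) input args for RTMP.
      input_args:
        - -avoid_negative_ts
        - make_zero
        - -fflags
        - nobuffer
        - -flags
        - low_delay
        - -strict
        - experimental
        - -fflags
        - +genpts+discardcorrupt
        - -use_wallclock_as_timestamps
        - '1'
```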

Has anyone tried the new Dual TPU on Key-E?

If so what motherboards is it working on?

On the website it states:

Although the M.2 Specification (section 5.1.2) declares E-key sockets provide two instances of PCIe x1, most manufacturers provide only one. To use both Edge TPUs, be sure your socket connects both instances to the host.

I have been googling for a while but can't tell if my board has 1 lane or 2 lanes to the Key-E slot.

Would it be easier to raise a ticket on GitHub to track working boards?
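One way to check from Linux is to look at how many Coral devices enumerate and what link width lspci reports for them. The 1ac1 vendor ID and the sample output line below are assumptions; on your own board run the commented command and see whether one or two Apex devices show up.

```shell
# Each PCIe Edge TPU enumerates as its own x1 device; count them and
# check the negotiated link width. On a real host you would run:
#   sudo lspci -vv -d 1ac1: | grep -E 'LnkSta'
# Sample of the line lspci typically prints for one Coral:
sample='LnkSta: Speed 5GT/s, Width x1'
# Extract the negotiated width field from the sample line:
echo "$sample" | grep -o 'Width x[0-9]*'
```

If only one device appears even though the slot is supposed to be wired for two instances of PCIe x1, the second lane is likely not being exposed by the platform.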

Just ordered one but have not received it yet. Will update once I receive and install it… but it looks like it could be a 10-20 day delivery time.

I have the Dual TPU, which I connected to an M.2 adapter (wired for x2 width). Only one TPU is detected, at x1 width, not downgraded. This means two things: the Dual TPU is really two separate devices, not one device with x2 width; and to access the second device, the PCH will need to be configured to split the M.2 slot into two x1 lanes.

I have another motherboard with a native x2 E-Key M.2 slot. I will test it there, but I expect the same problem. Maybe with some magic setpci command I can split up the lanes.

I might just order one; as it qualifies for free shipping, it's not that much more than a single TPU.

£39.16 for the single, posted.
£39.60 for the dual, free postage.

I believe it's two separate devices,
so it needs two PCIe lanes.

Just ordered a Dual, hopefully get it Wed or Thurs?

Yes, it needs two lanes. My point, though, is that unless the motherboard UEFI specifically splits the M.2 port into two x1 lanes (which I don't think any motherboard does), only one device will be detected.

The Dual TPU also does not have a metal heat sink and needs one to be attached. The Single TPU has a metal cover that acts as the heat sink.

I wouldn't have thought you would need to split them in the BIOS like you do for the main PCIe x16.

Are you sure the adapter wires all the PCIe lanes to the connector?
I have bought a number of M.2 to PCIe x4 adapters, and often they are only wired for x1 or x2, so I had to return them as unsuited for what I wanted.

The main x16 CPU port can be bifurcated (if your UEFI supports that). The m.2 are connected to the PCH which requires something different.

I was looking at some PCH datasheet and it states: "HSIO lane configuration and type is statically selected by soft straps, which are managed through the Flash Image Tool, available as part of Intel® CSME FW releases."

I followed the traces on my adapter and both lanes are connected. The other giveaway is that the Dual TPU shows up in lspci as a native x1 port. That means the second TPU is connected only to the second x1 lane, so PCH reconfiguration is needed to expose a second PCIe device. Even with reconfiguration it may not be possible, because it's connected to the wrong HSIO pins.