Local realtime person detection for RTSP cameras

I have the Dual TPU, which I connected to an M.2 adapter (wired for x2 width). Only one TPU is detected, and it shows up as a native x1 link rather than a downgraded x2 one. This means two things: the Dual TPU is really two separate devices, not one device with x2 width, and to access the second device the PCH will need to be configured to split the M.2 slot into two x1 lanes.

I have another motherboard with a native x2 E-Key m.2 slot. I will test it there, but I expect the same problem. Maybe with some magic setpci command I can split up the lanes.

I might just order one; since it qualifies for free shipping, it's not that much more than a single TPU.

£39.16 for the single, posted.
£39.60 for the dual, with free postage.

I believe it’s two separate devices.
So needs two pcie lanes.

Just ordered a Dual, hopefully get it Wed or Thurs?

Yes, it needs two lanes. My point, though, is that unless the motherboard UEFI specifically splits the M.2 port into two x1 lanes (which I don't think any motherboard does), only one device will be detected.

The Dual TPU also does not have a metal heat sink and needs one to be attached. The Single TPU has a metal cover that acts as the heat sink.

I wouldn’t have thought you would need to split them in the BIOS like you do for the main PCIe x16.

Are you sure the adapter wires all the PCIe lanes to the connector?
I have bought a number of M.2 to PCIe x4 adapters, and often they are only wired for x1 or x2, so I had to return them as not suited to what I wanted.

The main x16 CPU port can be bifurcated (if your UEFI supports that). The M.2 slots are connected to the PCH, which requires something different.

I was looking at some PCH datasheet and it states “HSIO lane configuration and type is statically selected by soft straps, which are managed through the Flash Image Tool, available as part of Intel® CSME FW releases.”

I followed the traces on my adapter and both lanes are connected. The other giveaway is that the Dual TPU is shown in lspci as a native x1 port. So that means the second TPU is connected to the second x1 lane only, which means that PCH reconfiguration is needed to expose a second PCIe device. Even with reconfiguration maybe it’s not possible because it’s connected to the wrong HSIO pins.
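For anyone wanting to check the same thing on their own system, the negotiated link width is visible in verbose `lspci` output: `LnkCap` shows what the device advertises and `LnkSta` shows what was actually negotiated. A sketch, assuming the Coral's usual PCI vendor ID `1ac1` (Global Unichip); adjust the filter to whatever your `lspci` actually lists:

```shell
# Show the Edge TPU's advertised (LnkCap) vs. negotiated (LnkSta) link width.
# The vendor filter 1ac1 is an assumption -- check plain `lspci` output first.
sudo lspci -d 1ac1: -vv | grep -E 'LnkCap:|LnkSta:'
```

On a true x1 device both lines report `Width x1`, whereas a downgraded x2 link would advertise `Width x2` in `LnkCap` but show `Width x1` in `LnkSta`.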

Would you mind sharing the link of the adapter you are using?

https://www.mouser.com/ProductDetail/Coral/G650-06076-01?qs=W%2FMpXkg%2BdQ6LZJp2eyeh4w%3D%3D

Note that your zone is being drawn with an X. Your coordinates are not in the right order to make a rectangle. Try this:

    zones:
      ignore:
        coordinates:
          - '1,1'
          - '2560,1'
          - '2560,570'
          - '1,570'
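A quick way to sanity-check point order is the shoelace formula: corners listed around the perimeter enclose the full rectangle, while a crossed ("X") order encloses zero area. A minimal sketch in plain Python (illustrative only, not Frigate code):

```python
def shoelace_area(points):
    """Absolute area of a polygon whose vertices are listed in perimeter order."""
    area = 0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

# Corners listed around the perimeter: a proper rectangle.
good = [(1, 1), (2560, 1), (2560, 570), (1, 570)]
# Same corners with two swapped: the edges cross into an X.
bad = [(1, 1), (2560, 570), (2560, 1), (1, 570)]

print(shoelace_area(good))  # 1456071.0  (2559 x 569 rectangle)
print(shoelace_area(bad))   # 0.0        (self-intersecting, no enclosed area)
```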

I took a look at several recent issues and other open source projects and revamped the docs. Feedback appreciated:

Thank you @rpress, but I meant which M.2 adapter you are using for the coral dual edgetpu.

once again, amazing work, thank you for your effort and dedication

https://www.amazon.com/dp/B07SN3QYR9

They are Reolink cams, but they’re B800s, so they don’t have native RTSP/RTMP support. I’m using the RTSP stream output from the NVR, and it doesn’t have RTMP.

The cameras work well now, and I am getting person detection just fine. I’m just getting the “Error on request” in the logs now. I wonder if it has to do with the saving of the images? My clips and cache dirs are both empty.

Error on request:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 323, in run_wsgi
    execute(self.server.app)
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 314, in execute
    for data in application_iter:
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/wsgi.py", line 506, in __next__
    return self._next()
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/wrappers/base_response.py", line 45, in _iter_encoded
    for item in iterable:
  File "/opt/frigate/detect_objects.py", line 464, in imagestream
    frame = object_processor.get_current_frame(camera_name, draw=True)
  File "/opt/frigate/frigate/object_processing.py", line 357, in get_current_frame
    return self.camera_states[camera].get_current_frame(draw)
  File "/opt/frigate/frigate/object_processing.py", line 73, in get_current_frame
    frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_YUV2BGR_I420)
cv2.error: OpenCV(4.4.0) /tmp/pip-req-build-a98tlsvg/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<1>; VDcn = cv::impl::{anonymous}::Set<3, 4>; VDepth = cv::impl::{anonymous}::Set<0>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::<unnamed>::FROM_YUV; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 3
Error on request:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 323, in run_wsgi
    execute(self.server.app)
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 315, in execute
    write(data)
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 296, in write
    self.wfile.write(data)
  File "/usr/lib/python3.8/socketserver.py", line 799, in write
    self._sock.sendall(b)
OSError: [Errno 113] No route to host

The key message in the logs is “Invalid number of channels in input image”. OpenCV errors when trying to convert the video frames from YUV to BGR so it can return the MJPEG feed. Someone else had the same issue on GitHub, but I’m not sure what is causing it.
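For context on the error itself: `COLOR_YUV2BGR_I420` expects a single-channel planar buffer, not a 3-channel image, and the traceback's "'scn' is 3" means the frame handed in already had 3 channels. A sketch of the expected layout (plain Python, illustrative only; the `cv2` call is shown in a comment):

```python
def i420_shape(width, height):
    """Shape of a planar I420 buffer as OpenCV expects for COLOR_YUV2BGR_I420:
    a single-channel array with the full-resolution Y plane on top and the
    2x2-subsampled U and V planes packed below it, i.e. height * 3 / 2 rows."""
    assert width % 2 == 0 and height % 2 == 0, "I420 needs even dimensions"
    return (height * 3 // 2, width)

# A 1280x720 frame should arrive as a (1080, 1280) single-channel buffer:
print(i420_shape(1280, 720))  # (1080, 1280)

# With OpenCV it would then convert to a (720, 1280, 3) BGR image:
#   bgr = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)
# Passing an array that is already (h, w, 3) raises exactly the
# "Invalid number of channels in input image ... 'scn' is 3" error above.
```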

Hey Blake,

One more quick question about zones. Is

frigate/<zone_name>/<object_name>/snapshot

an MQTT topic that we can subscribe to get snapshots for zones? If not, would you consider it as an FR?

It isn’t currently, but yes. I want that feature too.
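If that topic lands, a single wildcard subscription (e.g. `frigate/+/+/snapshot` with a client library like paho-mqtt) would cover every zone and object type. A minimal sketch of how MQTT's single-level `+` wildcard matches such topics; illustrative only, since a real client library does this matching for you:

```python
def topic_matches(pattern, topic):
    """Minimal MQTT topic matcher supporting the single-level '+' wildcard."""
    p_parts = pattern.split('/')
    t_parts = topic.split('/')
    if len(p_parts) != len(t_parts):
        return False
    return all(p == '+' or p == t for p, t in zip(p_parts, t_parts))

# One subscription catches snapshots for every zone and object type:
print(topic_matches('frigate/+/+/snapshot', 'frigate/driveway/person/snapshot'))  # True
print(topic_matches('frigate/+/+/snapshot', 'frigate/driveway/person/events'))    # False
```

(The zone and object names here are hypothetical placeholders.)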

@blakeblackshear The new docs are great!

I’m having an issue getting hardware acceleration working on my Intel NUC 10i7FNH using Docker. Edit: it looks like an i965 VAAPI library is missing; any advice on how to add this to the Docker container? When I turn info logs on I see:

Frigate Log Files (INFO Log Detail)

Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Device setup failed for decoder on input stream #0:0 : Input/output error
garage: ffmpeg sent a broken frame. something is wrong.
garage: ffmpeg process is not running. exiting capture thread...
Input #0, rtsp, from 'rtsp://USER:PASS@IP:554/axis-media/media.amp':
  Metadata:
    title           : Session streamed with GStreamer
    comment         : rtsp-server
  Duration: N/A, start: 1603633834.541189, bitrate: N/A
    Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 10 tbr, 90k tbn, 180k tbc
[AVHWDeviceContext @ 0x555a50b567e0] libva: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed
[AVHWDeviceContext @ 0x555a50b567e0] Failed to initialise VAAPI connection: -1 (unknown libva error).
Device creation failed: -5.

Config.yml Global Config

  hwaccel_args:
    - '-hwaccel'
    - vaapi
    - '-hwaccel_device'
    - /dev/dri/renderD128
    - '-hwaccel_output_format'
    - yuv420p

VAINFO Command on Host:

vainfo: VA-API version: 1.7 (libva 2.6.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 20.1.1 ()
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            :	VAEntrypointVLD
      VAProfileMPEG2Main              :	VAEntrypointVLD
      VAProfileH264Main               :	VAEntrypointVLD
      VAProfileH264Main               :	VAEntrypointEncSliceLP
      VAProfileH264High               :	VAEntrypointVLD
      VAProfileH264High               :	VAEntrypointEncSliceLP
      VAProfileJPEGBaseline           :	VAEntrypointVLD
      VAProfileJPEGBaseline           :	VAEntrypointEncPicture
      VAProfileH264ConstrainedBaseline:	VAEntrypointVLD
      VAProfileH264ConstrainedBaseline:	VAEntrypointEncSliceLP
      VAProfileVP8Version0_3          :	VAEntrypointVLD
      VAProfileHEVCMain               :	VAEntrypointVLD
      VAProfileHEVCMain10             :	VAEntrypointVLD
      VAProfileVP9Profile0            :	VAEntrypointVLD
      VAProfileVP9Profile2            :	VAEntrypointVLD

VAINFO Command in Docker:

libva info: VA-API version 1.1.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_1
libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit
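One thing that stands out when comparing the two outputs: the host is using the newer iHD media driver (libva 2.6), while the container ships libva 1.1 and tries to load the older i965 driver, which fails on recent Intel GPUs like the NUC 10's. A hedged suggestion, not verified on this setup: if the iHD driver is present inside the image, forcing the driver name may help; if the image only ships `i965_drv_video.so`, the iHD driver package would need to be added to the image first. A docker-compose fragment as a sketch:

```yaml
# Sketch only. Assumes the frigate image actually contains the iHD driver;
# if not, it must be installed into the image before this takes effect.
services:
  frigate:
    devices:
      - /dev/dri/renderD128
    environment:
      - LIBVA_DRIVER_NAME=iHD
```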

Can you tell me more about your host machine?