Although the M.2 Specification (section 5.1.2) declares E-key sockets provide two instances of PCIe x1, most manufacturers provide only one. To use both Edge TPUs, be sure your socket connects both instances to the host.
I have been googling for a while but can’t tell if my board has 1 lane or 2 lanes to the Key-E slot.
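One way to check from software (assuming a Linux host): `sudo lspci -vv` prints a `LnkCap` line (the device's maximum link width) and a `LnkSta` line (the negotiated width) for each PCIe device. If the Coral reports `Width x1` in `LnkCap`, the device itself is only x1 and the slot's wiring isn't the limit. A small sketch that pulls those widths out of the output (the sample text is illustrative, not from a real board):

```python
import re

def link_widths(lspci_vv_output):
    """Extract (capability width, negotiated width), e.g. ('x1', 'x1'),
    from the LnkCap/LnkSta lines of `lspci -vv` output."""
    cap = re.search(r"LnkCap:.*?Width (x\d+)", lspci_vv_output)
    sta = re.search(r"LnkSta:.*?Width (x\d+)", lspci_vv_output)
    return (cap.group(1) if cap else None,
            sta.group(1) if sta else None)

# Illustrative sample resembling lspci -vv output for a Coral TPU
sample = """\
03:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
        LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1
        LnkSta: Speed 5GT/s (ok), Width x1 (ok)
"""
print(link_widths(sample))  # -> ('x1', 'x1')
```

If `LnkSta` shows a smaller width than `LnkCap`, the link was downgraded, which would point at the slot or adapter wiring instead.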
Would it be easier to raise a ticket on GitHub to track working boards?
I have the Dual TPU, which I connected to an M.2 adapter that is wired for x2 width. Only one TPU is detected, as a native x1 device rather than a downgraded x2 one. This means two things: the Dual TPU is really two separate x1 devices, not one device with x2 width, and to access the second device the PCH will need to be configured to split the M.2 slot into two x1 links.
I have another motherboard with a native x2 E-key M.2 slot. I will test it there, but I expect the same problem. Maybe with some setpci magic I can split up the lanes.
Yes, it needs two lanes. My point, though, is that unless the motherboard UEFI specifically splits the M.2 port into two x1 links (and I don't think any motherboard does this), only one device will be detected.
The Dual TPU also does not have a metal heat sink and needs one to be attached. The Single TPU has a metal cover that acts as the heat sink.
I wouldn’t have thought you would need to split them in the BIOS like you do for the main PCIe x16 slot.
Are you sure the adapter wires all the PCIe lanes to the connector?
I have bought a number of M.2 to PCIe x4 adapters, and often they are only wired for x1 or x2, so I had to return them as unsuitable for what I wanted.
The main x16 CPU port can be bifurcated (if your UEFI supports it). The M.2 slots are connected to the PCH, which requires something different.
I was looking at some PCH datasheet and it states “HSIO lane configuration and type is statically selected by soft straps, which are managed through the Flash Image Tool, available as part of Intel® CSME FW releases.”
I followed the traces on my adapter and both lanes are connected. The other giveaway is that lspci shows the Dual TPU as a native x1 port. That means the second TPU is connected only to the second x1 lane, so PCH reconfiguration is needed to expose it as a second PCIe device. And even with reconfiguration it may not be possible, if it's connected to the wrong HSIO pins.
They are Reolink cams, but they're B800s, so they don't have native RTSP/RTMP support. I'm using the RTSP stream output by the NVR, and it doesn't have RTMP.
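For anyone doing the same, pointing Frigate at the NVR's re-streamed feed is just a normal RTSP input in the config. A minimal sketch, assuming the older single-file config format from this era of Frigate; the camera name, credentials, and stream path are placeholders (Reolink NVRs commonly use a `h264Preview_XX_sub`-style path, but check your NVR's documentation):

```yaml
cameras:
  back_yard:            # placeholder camera name
    ffmpeg:
      # RTSP stream re-exported by the NVR, not the camera itself
      input: rtsp://user:pass@192.168.1.50:554/h264Preview_01_sub
```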
The cameras work well now, and I am getting person detection just fine. I'm just getting this "Error on request" in the logs now. I wonder if it has to do with saving the images? My clips and cache dirs are both empty.
Error on request:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 323, in run_wsgi
execute(self.server.app)
File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 314, in execute
for data in application_iter:
File "/usr/local/lib/python3.8/dist-packages/werkzeug/wsgi.py", line 506, in __next__
return self._next()
File "/usr/local/lib/python3.8/dist-packages/werkzeug/wrappers/base_response.py", line 45, in _iter_encoded
for item in iterable:
File "/opt/frigate/detect_objects.py", line 464, in imagestream
frame = object_processor.get_current_frame(camera_name, draw=True)
File "/opt/frigate/frigate/object_processing.py", line 357, in get_current_frame
return self.camera_states[camera].get_current_frame(draw)
File "/opt/frigate/frigate/object_processing.py", line 73, in get_current_frame
frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_YUV2BGR_I420)
cv2.error: OpenCV(4.4.0) /tmp/pip-req-build-a98tlsvg/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<1>; VDcn = cv::impl::{anonymous}::Set<3, 4>; VDepth = cv::impl::{anonymous}::Set<0>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::<unnamed>::FROM_YUV; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
> 'VScn::contains(scn)'
> where
> 'scn' is 3
Error on request:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 323, in run_wsgi
execute(self.server.app)
File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 315, in execute
write(data)
File "/usr/local/lib/python3.8/dist-packages/werkzeug/serving.py", line 296, in write
self.wfile.write(data)
File "/usr/lib/python3.8/socketserver.py", line 799, in write
self._sock.sendall(b)
OSError: [Errno 113] No route to host
The key message in the logs is “Invalid number of channels in input image”. OpenCV is erroring when trying to convert the video frames from YUV to BGR so it can return the mjpeg feed. Someone else had the same issue on GH, but I’m not sure what is causing it.
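For reference, `COLOR_YUV2BGR_I420` expects a single-channel frame whose height is 1.5x the image height (the full-resolution Y plane followed by the half-resolution U and V planes), and the `'scn' is 3` in the error means the frame handed to `cvtColor` already had 3 channels, i.e. it was apparently already BGR. A rough sketch of the expected layout, using numpy only (the cv2 call is shown as a comment so the snippet stands alone):

```python
import numpy as np

h, w = 480, 640  # illustrative frame size

# An I420 (YUV 4:2:0 planar) frame: full-res Y plane followed by
# quarter-res U and V planes, stored as one single-channel 2-D buffer.
i420 = np.zeros((h * 3 // 2, w), dtype=np.uint8)
print(i420.shape, i420.ndim)  # (720, 640) 2 -> one channel, as the conversion expects

# A frame that has already been converted to BGR has 3 channels:
bgr = np.zeros((h, w, 3), dtype=np.uint8)
print(bgr.ndim)  # 3 -> passing this to COLOR_YUV2BGR_I420 raises
                 #      "Invalid number of channels in input image ... 'scn' is 3"

# The failing call in frigate/object_processing.py is effectively:
# cv2.cvtColor(frame_copy, cv2.COLOR_YUV2BGR_I420)  # only valid for the 2-D buffer
```

So the traceback suggests that for this code path the frame is sometimes already BGR (or otherwise 3-channel) by the time `get_current_frame` tries to convert it.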