Has anyone had any issues with power or heat and the Google Coral USB on an RPi4? I’ve got my Frigate/HassOS instance running with 6 cameras in an actively cooled Argon M.2 case. I went with a WD Blue M.2 SSD, booting HassOS from it as well as storing clips on it. It ran fine for a few days, then the SSD decided to crap out when Frigate launched. More context here and here
At the moment I think it’s related to power and heat. A few days ago I moved the Coral USB to the GPIO cover where the fan exhaust is. Running at 70-90% CPU constantly and being overclocked generates some heat! Looking at the Coral datasheet, it suggests it doesn’t like ambient temps above 25°C! So even though my Coral was relatively cool to the touch, a bit of heat may have made the current usage jump, or caused errors on the USB3 bus.
I’d be interested to know which speed the TPU is operating at in Frigate, as according to the datasheet this affects power consumption:
When you first set up the USB Accelerator, you can select whether the device operates at the maximum clock frequency or the reduced clock frequency. The maximum frequency is twice the reduced setting, which increases the inferencing speed but also increases power consumption.
To change the clock frequency at which the device operates, simply install the alternative runtime, as described in the instructions for how to install the Edge TPU runtime.
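For reference, on a plain Debian-based host (not HassOS, where you can’t install packages directly) swapping between the two runtimes is just a package change, per Coral’s getting-started docs:

```shell
# The two runtime packages conflict, so installing one replaces the other.
# Reduced (default) clock — cooler, lower peak current:
sudo apt-get install libedgetpu1-std

# Maximum clock — faster inference, more heat and power:
sudo apt-get install libedgetpu1-max
```

How (or whether) this can be changed inside HassOS is exactly the open question above.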
Any particular reason you are running with threshold, contour_area, delta_alpha, etc.? Have you tried just removing all of those and running with the default settings? It’s also hard to tell, since you aren’t showing where the motion and object masks are located on the screen.
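To illustrate, running with the defaults just means leaving the motion tuning keys out of the config entirely, e.g. (camera name and URL are placeholders):

```yaml
cameras:
  front_door:            # placeholder name
    ffmpeg:
      inputs:
        - path: rtsp://user:pwd@camera-ip/stream   # placeholder URL
          roles:
            - detect
    # No motion: block here at all — threshold, contour_area,
    # delta_alpha etc. fall back to Frigate's built-in defaults.
```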
Stupid question: is it normal behaviour that when I open the cam via the Frigate component the stream is fluid, but if I use the created camera entity it stutters at 5 frames/s?
People aren’t ignoring you intentionally; they just may not know the answer and hope someone else can help. It’s not an error I’m familiar with, and from reading online, a Fatal Python error: Bus error sounds like something that may be specific to your hardware: disk/swap space low or exhausted, memory low, a kernel issue/bug, etc. Speaking from personal experience with some issues I initially had with Frigate, they weren’t Frigate-related but either my own hardware issues or problems with network communication to the cameras (in my specific case), which makes it harder for others to solve the problem.
To elaborate on my issue, I’m now thinking heat isn’t the cause, but power. The RPi can only deliver a max of 1.5A across all the USB ports, and the SSD has an average peak of 667mA. The Google Coral at max speed has a peak of 900mA. I’d still like to know which runtime is being used, libedgetpu1-std or libedgetpu1-max, and whether this can be changed in HassOS. The datasheet doesn’t mention the peak draw under std, but I would hope it saves more than the 67mA I’m over…
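Sanity-checking the arithmetic (peak figures as quoted from the datasheets above):

```python
# Rough USB power-budget check for the Pi 4 setup described above.
PI_USB_BUDGET_MA = 1500   # max the Pi 4 delivers across all USB ports
SSD_PEAK_MA = 667         # WD Blue M.2 average peak
CORAL_MAX_MA = 900        # Coral USB peak on the libedgetpu1-max runtime

total_ma = SSD_PEAK_MA + CORAL_MAX_MA
over_ma = total_ma - PI_USB_BUDGET_MA
print(f"peak draw: {total_ma} mA, over budget by {over_ma} mA")
# peak draw: 1567 mA, over budget by 67 mA
```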
Is MJPEG faster and less CPU-hogging than H.264?
Would it be better to use MJPEG for Frigate?
I’m thinking about using ZoneMinder’s MJPEG streams, as I’m already using ZM for the recording.
Would it be possible for Frigate to save a clip whenever it detects movement (above the configured trigger) and only then attempt object detection? That way the motion clip gets saved either way…
Second, I’d like to see the bounding box and/or motion box saved into the clip as well (as an option).
I’ve found a Kingston SSD with a whole watt less peak power, so it should bring me under the max load. I want to avoid hubs and anything hanging off the case as much as possible. Amazingly, I’ve managed to fit the Coral inside the Argon M.2 case; I’m just waiting for a cable to connect it up. It will look nice and tidy as long as the Kingston SSD delivers!
frigate.video INFO : tuinhuis: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : tuinhuis: ffmpeg process is not running. exiting capture thread...
@tarbax Your stream URL is incorrect for RTMP. Also, if you have the RLC-520, which I do too, the resolution is incorrect. Go into the app and check which resolution you’re using. I have the ‘sub’ stream for detect which is 640x480 but the main one is 2048x1536 which I use for clips.
The RTMP url should be:
Sub stream:
rtmp://ip.address/bcs/channel0_sub.bcs?channel=0&stream=0&user=user&password=pwd
Main stream:
rtmp://ip.address/bcs/channel0_main.bcs?channel=0&stream=0&user=user&password=pwd
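For context, a minimal Frigate camera entry using those URLs might look like this (the camera name, resolution, and role names are illustrative; roles such as clips depend on your Frigate version):

```yaml
cameras:
  driveway:              # placeholder name
    ffmpeg:
      inputs:
        # Reolink sub stream for detection (low res, low CPU)
        - path: rtmp://ip.address/bcs/channel0_sub.bcs?channel=0&stream=0&user=user&password=pwd
          roles:
            - detect
        # Main stream for full-resolution clips
        - path: rtmp://ip.address/bcs/channel0_main.bcs?channel=0&stream=0&user=user&password=pwd
          roles:
            - clips
    width: 640
    height: 480
    fps: 5
```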
Yes that’s the RTSP link. Your input args are only for the RTMP stream from the camera. RTSP doesn’t require them and it worked for me but I had lots of artefacts and smearing so I switched to RTMP. The link should work if you change the IP and credentials correctly.
What setting(s) or configuration defines the FPS of the recorded video for events? Is it based on the detection FPS, limited by hardware, etc.? The reason I ask: I have two cameras using the sub stream at 5 FPS, 704x480, and I record events from the main stream, H.264 at 3840x2160 and 15fps. What I’m seeing is that the video recorded from the main stream for events is jerky/sputters and is nowhere near 15fps.
I did the ffmpeg optimizations for the Pi from the docs, it’s only 2 cameras, using a Google Coral TPU for detection, on a Pi4 8GB. It’s running in my microk8s cluster, and the cluster storage (PV) is backed via NFS to a Synology with SSD caching (so not the SD card). Frigate is pinned to this node via affinity, with nothing else running on it. Am I running into limits with the Pi4 CPU, a setting that controls this I missed, etc.? I am not seeing any errors coming from Frigate; its logs are very clean.
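The setup described would roughly correspond to a camera config along these lines (paths are placeholders and role names depend on your Frigate version):

```yaml
cameras:
  cam1:                  # placeholder name
    ffmpeg:
      inputs:
        - path: rtsp://user:pwd@camera-ip/sub    # 704x480 @ 5fps sub stream
          roles:
            - detect
        - path: rtsp://user:pwd@camera-ip/main   # 3840x2160 @ 15fps main stream
          roles:
            - record
    width: 704
    height: 480
    fps: 5               # detection FPS only; recordings come from the main stream
```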
Did you try to play back the stream from the cameras in VLC? Are they WiFi or ethernet? I’m recording 2048x1536 15fps ok on a pi4. Don’t use HW acceleration on the pi either.