The libraries for detection only work on the Coral. At the moment, it is using the CPU for decoding the video stream. The next version will support hardware accelerated decoding for platforms that support it. That should reduce CPU usage a good bit as well.
Awesome! Fantastic work, really. Now I just need to add my other cameras haha
Sorry, I spoke too soon; the container still seems to be going offline?
What does the log say?
I have debug set up in my docker-compose, but when I run `docker-compose up` it tells me that debug is off, and nothing is being written to /lab/debug.
here is the compose file:
```yaml
version: "3"
services:
  camera_1:
    container_name: camera_1
    restart: unless-stopped
    privileged: true
    image: frigate:latest
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /home/tc23/frigate/config:/config:ro
      - /lab/debug:/lab/debug:rw
    ports:
      - "5000:5000"
    environment:
      RTSP_PASSWORD: XXXXX
      DEBUG: "1"
```
I tried without the quotes, but then I get an error exit code.
It seems that after a `docker-compose restart` the container stops going offline. However, now I am trying to add a second camera to the mix, and I get the following error:
```
camera_1 | ERROR: Failed to retrieve TPU context.
camera_1 | ERROR: Node number 0 (edgetpu-custom-op) failed to prepare.
camera_1 |
camera_1 | Failed in Tensor allocation, status_code: 1
```
My user is a member of the plugdev group, which is the recommended solution for that error according to the Coral docs. When I go back to a single camera, I still get that error, but unplugging the Coral and plugging it back in seems to solve it for a single camera. For multiple cameras, that fix hasn't worked. Have you run into this at all?
I haven't seen that error. Make sure you are only running one container. The Coral only supports a single process using it at a time.
I'm pretty sure I'm only running one container (but I'm a novice with Docker). It is probably something I did wrong in my docker-compose or config, although I followed prior examples. Here is the docker-compose:
```yaml
version: "3"
services:
  camera_1:
    container_name: camera_1
    restart: unless-stopped
    privileged: true
    image: frigate:latest
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /home/tc23/frigate/config:/config:ro
      - /lab/debug:/lab/debug:rw
    ports:
      - "5000:5000"
    environment:
      RTSP_PASSWORD: xxxx
      DEBUG: "1"
  camera_2:
    container_name: camera_2
    restart: unless-stopped
    privileged: true
    image: frigate:latest
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /home/tc23/frigate/config:/config:ro
      - /lab/debug:/lab/debug:rw
    ports:
      - "5001:5000"
    environment:
      RTSP_PASSWORD1: xxxx
      DEBUG: "1"
```
and then here is the config:
```yaml
web_port: 5000
mqtt:
  host: 192.xxx.xx.xx
  topic_prefix: frigate
cameras:
  back:
    rtsp:
      user: xxxx
      host: 192.xxx.xx.xx
      port: 8554
      password: $RTSP_PASSWORD
      path: /xxxx
    mask: back-mask.bmp
    regions:
      - size: 350
        x_offset: 0
        y_offset: 300
        min_person_area: 5000
        threshold: 0.4
  porch:
    rtsp:
      user: xxxx
      host: 192.xxx.xx.xx
      port: 7447
      password: $RTSP_PASSWORD1
      path: /xxxx
    mask: back-mask.bmp
    regions:
      - size: 300
        x_offset: 0
        y_offset: 100
        min_person_area: 5000
        threshold: 0.5
```
Anything glaringly wrong? Again, I tried to go back through the thread and use prior examples, but I probably goofed something up
Yep. You have two containers defined in your compose file. That was probably based on an old version from before the Coral was implemented. Just remove camera_2 from the compose file, and make sure that container is actually removed, as removing it from compose won't necessarily clean it up. You will access the cameras at localhost:5000/back and localhost:5000/porch. Also, you probably don't want to use my mask file.
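For reference, a single-container version of that compose file might look something like this (same paths and image as in the thread; both RTSP password variables move into the one container, since the config references both — adjust names and paths to your setup):

```yaml
version: "3"
services:
  camera_1:
    container_name: camera_1
    restart: unless-stopped
    privileged: true
    image: frigate:latest
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /home/tc23/frigate/config:/config:ro
      - /lab/debug:/lab/debug:rw
    ports:
      - "5000:5000"
    environment:
      RTSP_PASSWORD: xxxx
      RTSP_PASSWORD1: xxxx
      DEBUG: "1"
```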
I'm such a dunce. I'll update the docker-compose and try again. Sorry for another stupid question, but what does the mask file do, and how would I go about creating a personalized one instead of using the included file?
I haven't added much info about the mask to the readme yet, but some info is available here in this comment: https://github.com/blakeblackshear/frigate/issues/30#issuecomment-490718844
This is a fantastic project. As commented on earlier in this thread, you really need a link to let us buy you a drink.
Is there any way to limit the frame rate that Python is analyzing? My Wyze cameras do not have a way to change the frame rate of the RTSP stream. This means Frigate processes every single frame (multiplied by my 6 regions across 2 cameras), which destroys my CPU and maxes out the Coral buffer queue.
I noticed the tip in the README about limiting the frame rate to lower CPU usage, but I am unable to do that in my setup without pre-processing the video first.
Can you open a GitHub issue? I have already implemented that in a local version, and the issue will remind me to include it in the next release.
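Until that lands in a release, frame-rate limiting can be done generically by dropping frames in the capture loop before they reach detection. This is just a sketch of the idea, not Frigate's actual implementation; `limit_fps` and its parameters are hypothetical names:

```python
def limit_fps(frames, source_fps, target_fps):
    """Yield roughly target_fps frames per second from an iterable of
    frames produced at source_fps, dropping the rest evenly."""
    if target_fps >= source_fps:
        yield from frames  # nothing to drop
        return
    step = source_fps / target_fps  # keep one frame every `step` frames
    next_keep = 0.0
    for i, frame in enumerate(frames):
        if i >= next_keep:
            yield frame
            next_keep += step

# Downsampling a 30 fps stream to 5 fps keeps every 6th frame:
kept = list(limit_fps(range(30), source_fps=30, target_fps=5))
print(kept)  # [0, 6, 12, 18, 24]
```

Dropping frames this way doesn't reduce decode cost (every frame is still decoded), but it does cut the detection load on the CPU and the Coral.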
Are you on the Dafang hack or their recently released RTSP firmware? I only ask because I am having the same problem with my Dafang-hacked Wyze (Unifi works perfectly), and I have tried adjusting the FPS, resolution, and bitrate. But in the end I get either `queue full. moving on` or `unable to grab frame`, and then it terminates the capture process and restarts. Are these the same errors you are seeing?
I am not. I am using the Wyze beta2 build that gives it official RTSP support. But it seems to be streaming at 1080p at either 15 or 30 fps (I think 15 is what I've heard?).
I did, however, get those messages when there were too many frames for the number of regions I was trying to process at the same time on the TPU. Limiting the frame rate or the number of regions solves it in my case.
Interesting. Yeah, I dropped the frame rate down to 5 fps and the resolution to 1600x900 with a single region for my Wyze, and I am still getting those errors.
Issue submitted. Let me know if I can help with it.
That's odd. Do any other errors show in the logs?
I could never get debug to turn on, so I'm not sure what the logs are showing. I have tried lower resolutions as well, but the same errors occur. I tried 1280x720 and 960x540, both at 5 fps. I'm not sure I can dip lower than 5 fps.
Is it plugged into a USB 3.x port?