Hi, this project was recommended to me by @robmarkcole; however, I'm having problems running the Docker image. I keep getting this error:
Traceback (most recent call last):
  File "detect_objects.py", line 25, in <module>
    with open('/config/config.yml') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/config/config.yml'
I'm new to Docker, so any help would be appreciated. Thank you. My config.yml is below:
web_port: 5000

mqtt:
  host: mqtt.server.com
  topic_prefix: frigate
  # client_id: frigate # Optional -- set to override default client id of 'frigate' if running multiple instances
  # user: username # Optional
  #################
  ## Environment variables that begin with 'FRIGATE_' may be referenced in {}.
  ##   password: '{FRIGATE_MQTT_PASSWORD}'
  #################
  # password: password # Optional

#################
# Default ffmpeg args. Optional and can be overwritten per camera.
# Should work with most RTSP cameras that send h264 video
# Built from the properties below with:
# "ffmpeg" + global_args + input_args + "-i" + input + output_args
#################
# ffmpeg:
#   global_args:
#     - -hide_banner
#     - -loglevel
#     - panic
#   hwaccel_args: []
#   input_args:
#     - -avoid_negative_ts
#     - make_zero
#     - -fflags
#     - nobuffer
#     - -flags
#     - low_delay
#     - -strict
#     - experimental
#     - -fflags
#     - +genpts+discardcorrupt
#     - -vsync
#     - drop
#     - -rtsp_transport
#     - tcp
#     - -stimeout
#     - '5000000'
#     - -use_wallclock_as_timestamps
#     - '1'
#   output_args:
#     - -f
#     - rawvideo
#     - -pix_fmt
#     - rgb24

####################
# Global object configuration. Applies to all cameras
# unless overridden at the camera levels.
# Keys must be valid labels. By default, the model uses coco (https://dl.google.com/coral/canned_models/coco_labels.txt).
# All labels from the model are reported over MQTT. These values are used to filter out false positives.
# min_area (optional): minimum width*height of the bounding box for the detected person
# max_area (optional): maximum width*height of the bounding box for the detected person
# threshold (optional): The minimum decimal percentage (50% hit = 0.5) for the confidence from tensorflow
####################
objects:
  track:
    - person
    - car
    - truck
  filters:
    person:
      min_area: 5000
      max_area: 100000
      threshold: 0.5

cameras:
  back:
    ffmpeg:
      ################
      # Source passed to ffmpeg after the -i parameter. Supports anything compatible with OpenCV and FFmpeg.
      # Environment variables that begin with 'FRIGATE_' may be referenced in {}
      ################
      input: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
      #################
      # These values will override default values for just this camera
      #################
      # global_args: []
      # hwaccel_args: []
      # input_args: []
      # output_args: []

    ################
    ## Optionally specify the resolution of the video feed. Frigate will try to auto detect if not specified
    ################
    # height: 1280
    # width: 720

    ################
    ## Optional mask. Must be the same aspect ratio as your video feed.
    ##
    ## The mask works by looking at the bottom center of the bounding box for the detected
    ## person in the image. If that pixel in the mask is a black pixel, it ignores it as a
    ## false positive. In my mask, the grass and driveway visible from my backdoor camera
    ## are white. The garage doors, sky, and trees (anywhere it would be impossible for a
    ## person to stand) are black.
    ##
    ## Masked areas are also ignored for motion detection.
    ################
    # mask: back-mask.bmp

    ################
    # Allows you to limit the framerate within frigate for cameras that do not support
    # custom framerates. A value of 1 tells frigate to look at every frame, 2 every 2nd frame,
    # 3 every 3rd frame, etc.
    ################
    take_frame: 1

    ################
    # Configuration for the snapshots in the debug view and mqtt
    ################
    snapshots:
      show_timestamp: True

    ################
    # Camera level object config. This config is merged with the global config above.
    ################
    objects:
      track:
        - person
      filters:
        person:
          min_area: 5000
          max_area: 100000
          threshold: 0.5
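The FileNotFoundError above usually means the container can't see your config: it expects a file at /config/config.yml inside the container, so the host directory holding config.yml has to be mounted at /config. A minimal sketch of the relevant part of the run command, assuming the file lives in /opt/frigate on the host (adjust the path to your setup):

# Mount the host directory containing config.yml at /config inside the container
docker run --rm \
  -v /opt/frigate:/config:ro \
  -p 5000:5000 \
  blakeblackshear/frigate:stable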
Thanks -- I had to manually move the config.yml file into the /opt/frigate directory, but now I get this error:
os@os:/opt$ sudo docker run --rm --privileged --shm-size=1024m --gpus=all -v /opt/frigate:/config:ro -v /etc/localtime:/etc/localtime:ro -p 5001:5001 blakeblackshear/frigate:stable
Traceback (most recent call last):
  File "detect_objects.py", line 361, in <module>
    main()
  File "detect_objects.py", line 164, in main
    client.connect(MQTT_HOST, MQTT_PORT, 60)
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 937, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 1071, in reconnect
    sock = self._create_socket_connection()
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 3522, in _create_socket_connection
    return socket.create_connection(addr, source_address=source, timeout=self._keepalive)
  File "/usr/lib/python3.7/socket.py", line 707, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/lib/python3.7/socket.py", line 752, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
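The gaierror at the bottom means the MQTT hostname can't be resolved: mqtt.server.com in the config above is just the placeholder from the example config. Pointing it at a real broker should fix it; a sketch, assuming a broker at 10.0.10.5 (substitute your actual broker IP or a resolvable hostname):

mqtt:
  host: 10.0.10.5   # your real MQTT broker, not the example placeholder
  topic_prefix: frigate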
Question: best.jpg works very nicely, but due to the high resolution it's a bit heavy to send; it takes a long time to upload. I tried adding parameters to the URL, but that doesn't work. Is a lower-resolution version available? If not, I'll try resizing before sending, but that takes time as well.
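If your version doesn't support sizing parameters on the best.jpg URL, one workaround is to downscale on the consumer side rather than inside frigate. A sketch using curl and ImageMagick; the host, port, and camera/object path here are assumptions based on a typical setup:

# Fetch the full-resolution snapshot and scale it to 640px wide before sending it on
curl -s "http://frigate.local:5000/back/person/best.jpg" | convert - -resize 640x best-small.jpg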
Hey @blakeblackshear, I have a camera on my driveway that I'd like to detect cars and people on. I'd like person detection to happen across the entire frame, but car detection to be masked (to exclude the spot where my car is parked). Right now I have two separate camera entries in my frigate config for the driveway camera, one only for persons and the other only for cars with a mask, as sketched below. This is working fine for me, but how difficult would it be to implement a per-object mask? I can open a GitHub issue for tracking if you have this in mind for the future. Or is there another way you would recommend achieving what I'm looking for?
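For reference, a sketch of the two-entry workaround described above (camera names, stream URL, and mask file are hypothetical):

cameras:
  driveway_person:          # whole frame, no mask
    ffmpeg:
      input: rtsp://viewer:password@10.0.10.11:554/stream
    objects:
      track:
        - person
  driveway_car:             # same stream, masked where the car is parked
    ffmpeg:
      input: rtsp://viewer:password@10.0.10.11:554/stream
    mask: driveway-car-mask.bmp
    objects:
      track:
        - car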
Just got this set up yesterday now that my Coral TPU came in. It seems to be working well on 0.5.1. I had a couple of times where one cam seemed to lock up in frigate; however, that one is connected to a wireless media bridge because I don't have full Ethernet to that side of the house, so there is a little more latency. I just upped the timeout to see if that corrects the issue.
So far I'm impressed by the (lack of) CPU consumption. I'm processing two 720p feeds and one 2K 180-degree feed.
Does frigate use the Coral TPU by default? I don't see anything in the config examples for the Coral. I'm assuming so from the example Docker config passing the USB bus through, and I'm not seeing my CPU spike too much (5-10% more than before frigate was running).
I'm running in Docker on an i5 with 27GB RAM and the Coral connected via USB. I bumped the shm_size up to 2GB, as I saw I was using around 700MB before I made the adjustments and I have plenty to spare.
And if it's already leveraging the Coral, does it make sense to enable QuickSync acceleration as well? Or does frigate only use one or the other?
I see. I noticed that after allocating more memory, the system is more stable and taking advantage of it. I also noticed the CPU spikes with a lot of activity out front here at the end of the day. I'll enable QuickSync and grab some snapshots of before and after.
I think I would need to pass /dev/dri/renderD128 through Docker, right? Like below?
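Something along these lines, based on the run command earlier in the thread; the --device flag is the relevant addition (a sketch, not verified on this hardware):

docker run --rm --privileged --shm-size=1024m \
  --device /dev/dri/renderD128 \
  -v /opt/frigate:/config:ro \
  -v /etc/localtime:/etc/localtime:ro \
  -p 5000:5000 \
  blakeblackshear/frigate:stable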
I am not familiar with hardware acceleration/QuickSync. Everything is working without enabling it. When I tried it on one of my two cameras, it didn't work: it kept restarting the process for the second camera. Perhaps I didn't use the proper syntax or arguments. Can anyone help? I believe my CPU supports QuickSync.
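For Intel QuickSync, the VAAPI-style arguments generally go in the hwaccel_args section of the ffmpeg config. A sketch, assuming an Intel iGPU exposed at /dev/dri/renderD128 and passed through to the container as above; not verified on this particular CPU or frigate version:

ffmpeg:
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
    - -hwaccel_output_format
    - yuv420p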
This is undoubtedly the best and most complete recognition project that I know of; you can tell there are many hours of work behind it. Thank you very much for your dedication and knowledge. For me it has become essential.
Hear, hear! These days it's almost like an appliance. With a bit better false-positive rejection or a better model, plus the ability to have multiple masks and objects per camera, I'll be able to just install it and leave it.