Hi,
As you all probably know, there is a new version of the Frigate addon (aka container) that supports new detectors. Documentation can be found here.
Looking around the forum, I couldn't find anything useful on how to set these things up. After some trial and error (and a lot more error) I finally managed to get things set up, at least on the container and Frigate side. No errors anywhere for a few hours, and it seems to me that everything is working.
I have a Google Coral TPU and an Nvidia Quadro P1000 GPU. I'm running Docker Compose.
I believe face recognition in Frigate using CodeProject.AI will become more and more popular, as it opens the door to a lot of automations and tweaks of all sorts. So this is a rough intro to setting things up, just to get it going.
I don't think this will do face recognition: the Frigate CodeProject.AI detector uses /v1/vision/detection, but the CodeProject.AI API for face recognition is /v1/vision/face/recognize.
Generally, using Double Take is the recommended way to integrate face recognition into Frigate.
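To make the distinction concrete, here is a small Python sketch of the two endpoint paths being contrasted. The base URL is an assumption taken from the configs posted in this thread; substitute your own server's address and port.

```python
# Sketch of the CodeProject.AI (DeepStack-compatible) endpoint layout.
# BASE is an assumption from the configs in this thread -- adjust it.
BASE = "http://192.168.31.103:32168"

ENDPOINTS = {
    # Generic object detection -- this is what the Frigate "deepstack"
    # detector calls, so it finds persons/cars but does NOT name faces.
    "detection": "/v1/vision/detection",
    # Face recognition -- returns names of registered faces; this is
    # the endpoint Double Take drives on Frigate's behalf.
    "face_recognize": "/v1/vision/face/recognize",
}

def endpoint(task: str) -> str:
    """Return the full URL for a given CodeProject.AI task."""
    return BASE + ENDPOINTS[task]
```

Pointing Frigate's detector at the first path and Double Take at the second is what keeps the two jobs separate.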
Thanks for pointing this out. I use Double Take and it is working. I must say this matter has become very complex; a proper setup of the CodeProject.AI container is a must for things to work, at least in my case.
And yes, still no errors in Frigate.
This is part of my Double Take config:
detectors:
  aiserver:
    url: http://192.168.31.103:32168
    # number of seconds before the request times out and is aborted
    timeout: 30
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage
It is working, but maybe the link should point to the API you suggested for face recognition.
I didn't mean in terms of power.
I'm trying to understand the operation and interactions between the different components.
I wonder what its role is. Doesn't it do the same job as the GPU?
I guess the Quadro is used as the compute engine for CodeProject.AI. At least in my case it is; that's what I bought it for.
I would like to know if I should add a Coral as well, or if my GPU alone is enough.
Well, check its GitHub page. It is the middleman between Frigate and CodeProject.AI.
It is basically a GUI to train CodeProject.AI. That's my explanation, don't take it literally.
That is how I set things up: Frigate + Double Take + CodeProject.AI to do facial recognition.
The downside so far is that I never managed to make it work exactly as I wanted. I get wrong facial detections; maybe more AI training is required.
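One way to give recognition more to work with is to register additional face images for each person. Here is a minimal Python sketch posting an image to the DeepStack-compatible face registration endpoint that CodeProject.AI implements; the host/port come from the configs in this thread, and you should verify the path and field names against your server's API docs.

```python
# Sketch: registering a known face with CodeProject.AI so recognition
# has more training examples.  /v1/vision/face/register follows the
# DeepStack-compatible API -- confirm against your server's docs.
import urllib.request

BASE = "http://192.168.8.40:32168"  # assumption from this thread's configs

def build_register_request(name: str, image: bytes) -> urllib.request.Request:
    """Build a multipart POST registering `image` under the person `name`."""
    boundary = "----registerboundary"
    parts = [
        (f"--{boundary}\r\n"
         'Content-Disposition: form-data; name="userid"\r\n\r\n'
         f"{name}\r\n").encode(),
        (f"--{boundary}\r\n"
         'Content-Disposition: form-data; name="image1"; filename="face.jpg"\r\n'
         "Content-Type: image/jpeg\r\n\r\n").encode() + image + b"\r\n",
        f"--{boundary}--\r\n".encode(),
    ]
    return urllib.request.Request(
        BASE + "/v1/vision/face/register",
        data=b"".join(parts),
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )

# Send with: urllib.request.urlopen(build_register_request("alice", jpeg_bytes))
req = build_register_request("alice", b"\xff\xd8fake-jpeg-bytes")
```

A handful of well-lit images per person, taken at camera-like angles, tends to help more than many near-duplicates.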
This is not accurate. Double Take is simply an abstraction layer between Frigate and multiple different face recognition solutions; Double Take is not affecting the results here.
DeepStack is definitely not very accurate in my experience; I much prefer CompreFace.
This is my current config for Double Take, though I have a tendency to experiment with my settings:
auth: false
# enable mqtt subscribing and publishing (default: shown below)
mqtt:
  host: 192.168.8.40
  port: 1883
  username: !secret mqtt_username
  password: !secret mqtt_password
  client_id: doubletake
  topics:
    # mqtt topic for frigate message subscription
    frigate: frigate/events
    # mqtt topic for home assistant discovery subscription
    homeassistant: homeassistant
    # mqtt topic where matches are published by name
    matches: doubletake/matches
    # mqtt topic where matches are published by camera name
    cameras: doubletake/cameras

# global detect settings (default: shown below)
detect:
  match:
    # save match images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed to consider a result a match
    confidence: 75
    # hours to keep match images until they are deleted
    purge: 60
    # minimum area in pixels to consider a result a match
    min_area: 1000
  unknown:
    # save unknown images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed before classifying a name as unknown
    confidence: 60
    # hours to keep unknown images until they are deleted
    purge: 8
    # minimum area in pixels to keep an unknown result
    min_area: 1000

frigate:
  url: http://192.168.8.40:5000
  # if double take should send matches back to frigate as a sub label
  # NOTE: requires frigate 0.11.0+
  update_sub_labels: true
  # stop the processing loop if a match is found
  # if set to false all image attempts will be processed before determining the best match
  stop_on_match: true
  # ignore detected areas so small that face recognition would be difficult
  # quadrupling the min_area of the detector is a good start
  # does not apply to MQTT events
  min_area: 0
  # object labels that are allowed for facial recognition
  labels:
    - person
  attempts:
    # number of times double take will request a frigate latest.jpg for facial recognition
    latest: 10
    # number of times double take will request a frigate snapshot.jpg for facial recognition
    snapshot: 10
    # process frigate images from frigate/+/person/snapshot topics
    mqtt: true
    # add a delay expressed in seconds between each detection loop
    delay: 1.25
  image:
    # height of frigate image passed for facial recognition
    height: 1000

detectors:
  aiserver:
    url: http://192.168.8.40:32168
    timeout: 30
    opencv_face_required: false

time:
  # defaults to iso 8601 format with support for token-based formatting
  # https://github.com/moment/luxon/blob/3e9983cd0680fdf7836fcee638d34e3edc682380/docs/formatting.md#table-of-tokens
  format:
  # time zone used in logs
  timezone: UTC

logs:
  level: verbose
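With this config, matches land on the doubletake/matches MQTT topic, which is what you would build automations on. Here is a small Python sketch of filtering such a message against the confidence threshold from the config; the payload field names (camera, match.name, match.confidence) are assumptions, so check the JSON your own broker actually receives.

```python
# Sketch: filtering Double Take match messages from MQTT.
# Payload shape is an assumption -- inspect your own broker's messages.
import json

CONFIDENCE_THRESHOLD = 75  # mirrors detect.match.confidence in the config

def is_actionable(payload: str) -> bool:
    """Return True when a match meets the configured confidence."""
    msg = json.loads(payload)
    match = msg.get("match") or {}
    return match.get("confidence", 0) >= CONFIDENCE_THRESHOLD

# Hypothetical message for illustration:
sample = json.dumps({
    "camera": "front-door",
    "match": {"name": "alice", "confidence": 82.1},
})
```

Hooking a callback like this into an MQTT client (e.g. paho-mqtt subscribed to doubletake/matches/#) gives you a clean place to trigger automations only on confident matches.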
And this is the relevant part for Frigate:
### Google coral tpu
detectors:
  coral:
    type: edgetpu
    device: usb
  deepstack:
    api_url: http://192.168.8.40:32168/v1/vision/detection
    type: deepstack
    api_timeout: 0.1 # seconds
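Note that api_timeout here is 0.1 seconds, which is quite aggressive; whether your CodeProject.AI server can answer that fast depends on the hardware doing the inference. A quick way to sanity-check is to time the round trip yourself. This generic timing helper is a sketch; replace the stand-in workload with a real request to your detection endpoint.

```python
# Sketch: timing a workload to sanity-check whether a 0.1 s detector
# timeout is realistic for your CodeProject.AI server.
import time

def timed(fn):
    """Run fn() and return (elapsed_seconds, result); result is None on error."""
    start = time.monotonic()
    try:
        result = fn()
    except Exception:
        result = None
    return time.monotonic() - start, result

# Stand-in workload -- swap in a real POST to
# http://192.168.8.40:32168/v1/vision/detection with a test image:
elapsed, _ = timed(lambda: sum(range(1000)))
```

If the measured latency regularly exceeds the configured api_timeout, Frigate will abort requests before the detector answers, so raise the value accordingly.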