CodeProject.AI Frigate detectors - face recognition

Hi,
As you all probably know, there is a new version of the Frigate addon (container) that supports new detectors. Documentation can be found here.
Looking around the forum, I couldn't find anything useful on how to set this up. After a lot of trial and error (and a lot more error), I finally managed to get it working, at least from the container and Frigate side. No errors anywhere for a few hours, and it seems to me that everything is working.

I have a Google Coral TPU and an NVIDIA Quadro P1000 GPU. I'm running Docker Compose.

This is my docker-compose.yml setup for Frigate:

  frigate:
    container_name: frigate
    privileged: true 
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    runtime: nvidia
    shm_size: "128mb" 
    devices:
      - /dev/bus/usb:/dev/bus/usb 
      - /dev/nvidia0
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /opt/frigate/config:/config/:rw
      - /opt/frigate/storage:/media/frigate
      - /opt/go2rtc:/config
      - /opt/frigate/config/model_cache/tensorrt:/config/model_cache/tensorrt
      - type: tmpfs 
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    depends_on:
      - homeassistant
    ports:
      - "5000:5000"
      - "1935:1935" 
    environment:
      TZ: Europe/Zagreb
      FRIGATE_RTSP_PASSWORD: "your_password"
      NVIDIA_VISIBLE_DEVICES: all
      NVIDIA_DRIVER_CAPABILITIES: all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]

And the container setup for CodeProject.AI:

  codeproject.ai:
    container_name: codeproject.ai
    image: codeproject/ai-server:cuda11_7
    ports:
      - 32168:32168/tcp
      - 32168:32168/udp
    volumes:
      - /opt/codeproject.ai/data:/etc/codeproject/ai
      - /opt/codeproject.ai/modules:/app/modules
    environment:
      - TZ=Europe/Zagreb
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped

Relevant part of the Frigate config.yml:

mqtt:
  enabled: True
  host: 192.168.31.103
  user: mqtt_username
  password: mqtt_password
  port: 1883
  topic_prefix: frigate
### Google coral tpu
detectors:
  coral:
    type: edgetpu
    device: usb
  deepstack:
    api_url: http://192.168.31.103:32168/v1/vision/detection
    type: deepstack
    api_timeout: 30 # seconds

CPU usage on Frigate is a little over 11%, and CodeProject.AI is using a little over 2%.
This is how everything looks in Frigate:

I believe that face recognition in Frigate using CodeProject.AI will become more and more popular, as it opens up opportunities for all sorts of automations and tweaks. So this is a rough intro to setting things up, just to get it going.

I don't think this would do face recognition. The Frigate CodeProject.AI detector uses /v1/vision/detection, but the API to do face recognition in CodeProject.AI is /v1/vision/face/recognize.
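To make the distinction concrete, here is a minimal Python sketch of the two routes involved. The base URL matches the CodeProject.AI container above; the `endpoint` helper is just for illustration and not part of any real client library:

```python
# Sketch only: shows which CodeProject.AI route serves which purpose.
# The base URL matches the codeproject.ai container in this thread.
BASE = "http://192.168.31.103:32168"

def endpoint(task: str) -> str:
    """Build the full URL for a CodeProject.AI vision task."""
    return f"{BASE}/v1/vision/{task}"

# Frigate's deepstack-compatible detector posts frames here (object detection):
print(endpoint("detection"))
# Face recognition lives on a different route entirely:
print(endpoint("face/recognize"))
```

So a detector pointed at /v1/vision/detection will only ever get object labels back, never names of recognized faces.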

Generally, using Double Take is the recommended way to integrate face recognition into Frigate.

Thanks for pointing this out. I use Double Take and it is working. I must say this whole topic has become very complex; a proper setup of the CodeProject.AI container is a must for things to work, at least in my case.
And yes, still no errors in Frigate.
This is part of my double-take config:

detectors:
  aiserver:
    url: http://192.168.31.103:32168
    # number of seconds before the request times out and is aborted
    timeout: 30
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

It is working, but maybe the link should point to the API you suggested for face recognition.

Hi, I'm in the same situation, but with no Coral.

Can you confirm that in your config, Frigate and CodeProject.AI use the same GPU?

Yes, they are both using the NVIDIA GPU.

OK, and what is the Coral for, if Frigate uses the GPU and CodeProject.AI does too?

What do you mean, for what? For the cameras. This isn't a very powerful GPU, but it is enough for my needs for now.

I didn't mean in terms of power.
I'm trying to understand how the different pieces of hardware work and interact. I wonder what the Coral's role is; doesn't it do the same job as the GPU?

I guess the Quadro is used as the compute device for CodeProject.AI. At least in my case it is; that's what I bought it for.

I would like to know if I should add a Coral as well, or if my GPU alone is enough.

Have you installed the NVIDIA Container Toolkit?


I think the NVIDIA GPU is enough. But I bought the Coral before the NVIDIA card because I found it at an affordable price.

I think I did.

OK, thanks. I'll try to install that.

I tried, but I think CodeProject.AI receives nothing.

There is no activity in the log.

But if I try this:

    api_url: http://172.16.128.41:32168/v1/vision/detection
    type: deepstack
    api_timeout: 30 # seconds

CodeProject.AI receives requests, but all the time:

23:39:42:Object Detection (YOLOv5 6.2): Rec’d request for Object Detection (YOLOv5 6.2) command ‘detect’ (…e0ab29) took 246ms

23:39:42:Object Detection (YOLOv5 6.2): Rec’d request for Object Detection (YOLOv5 6.2) command ‘detect’ (…70fc10) took 247ms

23:39:43:Object Detection (YOLOv5 6.2): Rec’d request for Object Detection (YOLOv5 6.2) command ‘detect’ (…871ca8) took 247ms

[…the same line repeats several times per second through 23:39:47…]

and the GPU is at 100%.

Did you install Double Take?

That is probably because you don't have a Coral.

Why Double Take?
CodeProject.AI is similar and does the same job, no?

Well, check its GitHub page. It is a middleman between Frigate and CodeProject.AI; basically a GUI to train CodeProject.AI. That's my explanation, don't take it literally.
That is how I set things up: Frigate + Double Take + CodeProject.AI to do facial recognition.
The downside so far is that I never managed to make it work as I wanted. I get wrong facial detections; maybe more AI training is required.
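In outline, the pipeline looks roughly like this. This is a hypothetical Python sketch of the flow, not Double Take's actual code; `fetch_snapshot` and `recognize` are placeholders standing in for the real HTTP calls:

```python
# Hypothetical outline of the Frigate -> Double Take -> CodeProject.AI flow.
# fetch_snapshot() and recognize() are stand-ins for real HTTP calls.

def fetch_snapshot(event_id):
    """Stand-in for GETting the event snapshot from Frigate's API."""
    return b"<jpeg bytes>"

def recognize(image):
    """Stand-in for POSTing the image to /v1/vision/face/recognize."""
    return "unknown"

def on_frigate_event(event):
    """React to a Frigate event: only 'person' objects can have faces."""
    if event.get("label") != "person":
        return None
    snapshot = fetch_snapshot(event["id"])
    return recognize(snapshot)

print(on_frigate_event({"id": "abc123", "label": "person"}))  # unknown
print(on_frigate_event({"id": "def456", "label": "car"}))     # None
```

The point is that Frigate only finds the person; Double Take fetches the image and asks CodeProject.AI's face endpoint who it is.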

OK, I understand. As for your wrong facial matches, I think I read during my research that Double Take is not the best.

For object detection, what do you use?

Is the code you posted the config for Double Take?

And what did you put as the detector for Frigate?

This is not accurate; Double Take is simply an abstraction layer between Frigate and multiple different face recognition solutions. Double Take is not affecting the results here.

DeepStack is definitely not very accurate in my experience; I much prefer CompreFace.

This is my current config for double-take. But I have a tendency to experiment with my settings.

auth: false
# enable mqtt subscribing and publishing (default: shown below)
mqtt:
  host: 192.168.8.40
  port: 1883
  username: !secret mqtt_username
  password: !secret mqtt_password
  client_id: doubletake 

  topics:
    # mqtt topic for frigate message subscription
    frigate: frigate/events
    # mqtt topic for home assistant discovery subscription
    homeassistant: homeassistant
    # mqtt topic where matches are published by name
    matches: doubletake/matches
    # mqtt topic where matches are published by camera name
    cameras: doubletake/cameras
    
# global detect settings (default: shown below)
detect:
  match:
    # save match images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed to consider a result a match
    confidence: 75
    # hours to keep match images until they are deleted
    purge: 60
    # minimum area in pixels to consider a result a match
    min_area: 1000

  unknown:
    # save unknown images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed before classifying a name as unknown
    confidence: 60
    # hours to keep unknown images until they are deleted
    purge: 8
    # minimum area in pixels to keep an unknown result
    min_area: 1000
    
frigate:
  url: http://192.168.8.40:5000
  # if double take should send matches back to frigate as a sub label
  # NOTE: requires frigate 0.11.0+
  update_sub_labels: true
  # stop the processing loop if a match is found
  # if set to false all image attempts will be processed before determining the best match
  stop_on_match: true
  # ignore detected areas so small that face recognition would be difficult
  # quadrupling the min_area of the detector is a good start
  # does not apply to MQTT events
  min_area: 0
  # object labels that are allowed for facial recognition
  labels:
    - person
  attempts:
    # number of times double take will request a frigate latest.jpg for facial recognition
    latest: 10
    # number of times double take will request a frigate snapshot.jpg for facial recognition
    snapshot: 10
    # process frigate images from frigate/+/person/snapshot topics
    mqtt: true
    # add a delay expressed in seconds between each detection loop
    delay: 1.25
  image:
    # height of frigate image passed for facial recognition
    height: 1000
detectors:
  aiserver:
    url: http://192.168.8.40:32168
    timeout: 30
    opencv_face_required: false
time:
  # defaults to iso 8601 format with support for token-based formatting
  # https://github.com/moment/luxon/blob/3e9983cd0680fdf7836fcee638d34e3edc682380/docs/formatting.md#table-of-tokens
  format:
  # time zone used in logs
  timezone: UTC
logs:
  level: verbose
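The match/unknown thresholds in the config above interact roughly as follows. This is an illustrative Python sketch, not Double Take's actual code; the constants mirror the config values:

```python
# Illustrative only: how the detect.match / detect.unknown settings above
# would bucket a face-recognition result. Not Double Take's actual code.
MATCH_CONFIDENCE = 75    # detect.match.confidence
UNKNOWN_CONFIDENCE = 60  # detect.unknown.confidence
MIN_AREA = 1000          # detect.match.min_area / detect.unknown.min_area

def classify(confidence, area):
    """Bucket a result by confidence (percent) and face area (pixels)."""
    if area < MIN_AREA:
        return "discarded"   # face too small to trust
    if confidence >= MATCH_CONFIDENCE:
        return "match"       # published under the recognized name
    if confidence >= UNKNOWN_CONFIDENCE:
        return "unknown"     # saved as an unknown face
    return "discarded"

print(classify(82, 5000))  # match
print(classify(65, 5000))  # unknown
print(classify(90, 200))   # discarded
```

Raising `confidence` under `match` makes names harder to assign; raising `min_area` discards small, distant faces before they can produce wrong matches.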

And this is the relevant part for Frigate:

### Google coral tpu
detectors:
  coral:
    type: edgetpu
    device: usb
  deepstack:
    api_url: http://192.168.8.40:32168/v1/vision/detection
    type: deepstack
    api_timeout: 0.1 # seconds
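The deepstack detector posts frames to that endpoint and consumes a DeepStack-style JSON response. The sample payload below is an assumption based on the DeepStack API shape (which CodeProject.AI emulates); the specific numbers are made up, just to show what comes back:

```python
import json

# Illustrative sample of a DeepStack-style /v1/vision/detection response;
# the values are invented, but the field names follow the documented shape.
sample = json.loads("""
{
  "success": true,
  "predictions": [
    {"label": "person", "confidence": 0.91,
     "x_min": 10, "y_min": 20, "x_max": 110, "y_max": 220}
  ]
}
""")

for p in sample["predictions"]:
    w = p["x_max"] - p["x_min"]
    h = p["y_max"] - p["y_min"]
    print(f'{p["label"]}: {p["confidence"]:.0%}, box {w}x{h}px')
```

Each prediction carries only an object label and a bounding box, which is why this endpoint on its own can never tell you whose face it saw.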

Could you explain this part? What does the DeepStack detector do in Frigate?

It's in the Frigate documentation; I took it from there.