I’d switch to CompreFace or Rekognition (cheap)
I switched to CompreFace. Will see how it goes.
Similar to @ctml, CompreFace has always been the best for me. Rekognition produced comparable results if you don’t mind the associated costs.
I have around 15-20 images for each person I want to recognize. I use my phone camera to take selfies of my friends / family when possible or try to find pictures where there is a clear view of their face. Those pictures produce the best results. Try to avoid using the images from your Frigate or RTSP cameras for training as those may produce more false positives.
Hi, I have an issue where Rekognition is triggered when the image contains a face that is very small. Is there a way to stop the AWS call if the box is too small?
Hey @amz4u2nv, are you using Frigate to trigger the detections? There is a min_area value you can use so images below a certain value aren’t passed to the detectors.
frigate:
  # ignore detected areas so small that face recognition would be difficult
  # quadrupling the min_area of the detector is a good start
  # does not apply to MQTT events
  min_area: 0
Any help? If not, why not?
How did you create them: with docker run or docker compose? You need to update the tag; if you used docker compose, you just need to update the tag in the YAML and down/up the stack.
If you used docker run, you can either recreate them (if you have the run command saved) or use a tool like Watchtower to automatically update the images and recreate the containers (which it will do based on the original run command args).
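For the compose route, the flow is roughly this (a sketch; the image name and tag are examples, use whatever your yaml has):

# edit the image tag in docker-compose.yml, e.g.
#   image: jakowenko/double-take:latest
docker-compose pull    # fetch the updated image
docker-compose down    # stop and remove the old containers
docker-compose up -d   # recreate them from the new image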
Thx will try and report back
@sender if you are having specific CompreFace upgrade questions, I would refer you to their releases page, where they have installation instructions. Updating to their current version should be as simple as updating your .env file and docker compose file.
The current release should have a .zip file you can download with the updated env and compose files.
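For reference, the upgrade itself is roughly this (a sketch, assuming your compose files live in one folder; adjust the path to your setup):

cd ~/compreface        # wherever your CompreFace files live (assumption)
docker-compose down
# replace docker-compose.yml and .env with the versions from the release .zip
docker-compose pull
docker-compose up -d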
Feel free to message me on Discord if you run into issues and I can help.
Hey Jako, I have this set to 400, so that would mean the box area needs to be more than 400 for the detectors to trigger, right?
frigate:
  url: http://192.168.0.109:5433
  # if double take should send matches back to frigate as a sub label
  # NOTE: requires frigate 0.11.0+
  update_sub_labels: false
  # stop the processing loop if a match is found
  # if set to false all image attempts will be processed before determining the best match
  stop_on_match: true
  # ignore detected areas so small that face recognition would be difficult
  # quadrupling the min_area of the detector is a good start
  # does not apply to MQTT events
  min_area: 400
  # object labels that are allowed for facial recognition
  labels:
    - person
  attempts:
    # number of times double take will request a frigate latest.jpg for facial recognition
    latest: 10
    # number of times double take will request a frigate snapshot.jpg for facial recognition
    snapshot: 10
    # process frigate images from frigate/+/person/snapshot topics
    mqtt: true
    # add a delay expressed in seconds between each detection loop
    delay: 0
  image:
    # height of frigate image passed for facial recognition
    height: 500
Hi ctml, I no longer know how I created them (dumb).
I went the Watchtower route, but only one container (Double Take) was updated; the others were not:
Any clue on how to get them all updated?
I think I used “tag 0.6.1” at the time… Any idea how to change to the latest tag?
You can use this to extract the run command of a running container, and then change the tag to latest (or whatever you prefer). Before running the generated command, you’d need to stop and remove your old container (i.e. docker stop, then docker rm) and then use the generated command to recreate it. Your data should all be stored in Docker volumes, so it will be retained and you won’t lose your configuration. Be careful not to remove a container before you’ve extracted its run command, because the container needs to be running in order to get its run command.
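A sketch of that flow using Double Take as the example (the flags here are illustrative; use the ones from your extracted run command):

docker stop double-take
docker rm double-take    # volumes survive container removal
# re-run the extracted command with the tag changed, e.g.
docker run -d --name double-take -p 3000:3000 \
  -v "$(pwd)/.storage:/.storage" jakowenko/double-take:latest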
This is so cool!!!
Yes, that’s correct. Frigate sends over the area of the object so that value can be checked. Here’s what the MQTT payload looks like: MQTT | Frigate
Keep in mind this then triggers Double Take to pull images from the Frigate API, which means the initial Frigate object might be above the area you want, but subsequent images may not be.
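For reference, a trimmed sketch of the fields that matter here (values are made up; see the linked docs for the full payload):

{
  "type": "update",
  "after": {
    "camera": "front",
    "label": "person",
    "area": 12744,
    "box": [310, 87, 439, 275]
  }
}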
When the frigate/events topic is updated, Double Take begins to process the snapshot.jpg and latest.jpg images from Frigate’s API. These images are passed from Double Take to the configured detector(s) until a match is found that meets the configured requirements. To improve the chances of finding a match, the processing of the images repeats until the number of retries is exhausted or a match is found.
I would recommend enabling opencv_face_required to preprocess the image, which helps avoid passing an image without a face to Rekognition.
If you have any other ideas on how to better check the image before passing it to Rekognition let me know!
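If I remember the config layout correctly, it’s a per-detector option, something like this (a sketch; check the README for the exact placement):

detectors:
  rekognition:
    # run OpenCV face detection first and skip the Rekognition
    # call if no face is found in the image
    opencv_face_required: true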
Thanks Jako, got another question.
I’m using the recognize API call, passing in the Reolink snapshot URL. However, the Reolink snapshot URL is HTTPS and the cert is self-signed. Any idea where in the code I would need to make a change to get this working? I was trying the following:
httpsAgent: new https.Agent({
  rejectUnauthorized: false
})
But wasn’t exactly sure where to put it.
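For what it’s worth, here’s a minimal standalone sketch of how that agent plugs into an axios request (assuming axios is what’s fetching the snapshot; the URL is a placeholder):

const https = require('https');
const axios = require('axios');

// agent that skips TLS verification for the self-signed Reolink cert
const agent = new https.Agent({ rejectUnauthorized: false });

// fetch the snapshot as a binary buffer
axios
  .get('https://192.168.0.50/cgi-bin/api.cgi?cmd=Snap', {
    httpsAgent: agent,
    responseType: 'arraybuffer',
  })
  .then((res) => console.log(`got ${res.data.length} bytes`))
  .catch((err) => console.error(err.message));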
An absolutely amazing product, thanks so much for creating & sharing this. Frigate & CompreFace are working well for me. I wanted to share some of my experience (the least I can do to help, I guess).
Inside the house things work quite well. The top challenge for meaningful use in outdoor automations has been false positives. I’ve increased the quality of snapshots from Frigate (to 500 px and 100% jpg quality) but the issue persists. The right people are usually tagged correctly, which is why the indoor cameras are useful. It’s just that when there are “unknown” faces, CompreFace seems to make up random assignments (I rarely get unknown tags, like <1%). I’ll follow the previous recommendation in this thread, but wanted to pass along the feedback in case there is something that can be done within Double Take.
I thought of making “untrain” available from the matches tab (the way it can be done from the train tab), but I don’t know if this would help, or if that’s even how the ML model works in the first place.
Lastly, another suggestion for the “untrain” feature: the button is not available unless you select a person from the dropdown. However, if I have selected an image, the untrain option could still be made available.
These are all luxury items that might make an already amazing product even better.
If you have good-quality photos on the match page for a user (or an incorrect user), you can select those and train them toward the user you want. Try to use images where the face is clear and large. I originally trained using a lot of images where the faces from my cameras were blurry or hard to see, and that led to more false positives.
I’ve also increased my confidence.match value to 90 to try to decrease the number of false positives I get.
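For anyone looking for that setting, it lives under detect.match in the Double Take config, if I recall correctly:

detect:
  match:
    # percent of confidence required before a result counts as a match
    confidence: 90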
In case others are suffering from false positives: things materially improved after I moved to the custom build “SubCenter-ArcFace-r100” in CompreFace.
A bit slower, but noticeably better: many more accurate “unknowns”, and when there are false positives they come with lower confidence (<80%).
Jako, thanks for this great code. It works really well with the AWS Rekognition API. I’m using the manual API call to send my own Reolink snapshots via a bash script. Just so others know what it does:
The bash script is called whenever the front sensor is activated.
It then loops around 6 times (an arbitrary number for now), generating a snapshot from the front camera on each pass. A bash function is called in the background so it doesn’t wait for all 6 snapshots to be generated; it starts the API call as soon as it gets the first snapshot. If a match is found, it kills all the background function calls still running for the other snapshots, roughly as sketched below.
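(Not the actual script, just a minimal sketch of that pattern; the Double Take address, the Reolink snapshot URL, the url parameter, and the match check are all assumptions.)

#!/usr/bin/env bash
DT="http://192.168.0.109:3000"                     # Double Take (assumed)
SNAP="https://cam.local/cgi-bin/api.cgi?cmd=Snap"  # Reolink snapshot URL (assumed)

attempt() {
  # ask Double Take to fetch and recognize the snapshot URL
  result=$(curl -sG "${DT}/api/recognize" --data-urlencode "url=${SNAP}")
  # naive check that the matches array is non-empty; adjust to the real response shape
  if ! echo "$result" | grep -q '"matches": *\[\]'; then
    pkill -P $$ 2>/dev/null   # match found: kill the other background attempts
  fi
}

for i in 1 2 3 4 5 6; do
  attempt &    # fire each attempt without waiting for the previous one
  sleep 1
done
wait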
In Node-RED I also use a delay node to make sure I only get one notification every 10 seconds, just in case anything slips through.
So for the moment it’s working really well.
If there is a better way of doing this, it would be great to know.
Thanks
Hi all,
I’m quite new to Docker, Portainer, etc. After some research I was able to successfully install Frigate (get it up and running) and Double Take. I installed both using the Docker Compose function of Portainer stacks.
However, for the life of me, I cannot seem to install CompreFace using Portainer stacks.
I downloaded the latest release from Releases · exadel-inc/CompreFace · GitHub and used the upload function with the docker-compose.yml and .env file. I also tried directly copying the text from docker-compose.yml into the web editor of Portainer’s create stack and using the downloaded .env file. Both ways I get the error below.
Am I doing something wrong? Should I be modifying something in the docker compose script or in the environment variables taken from the .env file?
Any help would be greatly appreciated!
What are you running this on?