There is a fork of Double Take that is actively updated by @skrashevich.
Can this be installed as an HA add-on?
Yes, that is how my Double Take is installed, as an HA add-on.
Yeah, that's what I'm using too. Still trying to figure it all out. I don't get sensor entities for people's faces, only unknown. Every time a new image appears in the match tab it's red, the box score is too low, and I don't know what to tweak. Sometimes the best images have a square around the whole body. I've only tried DeepStack; maybe the others will work better.
I found that CompreFace works a lot better than DeepStack; you can find it in the add-on section.
Yep, I moved to that the same day. Still not as reliable as I'd like, but at least I have sensors and matches now. Thanks.
Do you have a link for the CompreFace HA add-on?
For me it is not reliable at all, no matter if I try AI Server or CompreFace. All give horrible results. Sometimes leaves get recognized as a known person.
Did you do any fine tuning?
I gave up. Not worth it.
Hello!
This is an excellent plug-in and thank you for such an amazing contribution!
I do have a question. I'm passing pictures to the API endpoint, using CompreFace.
I'm doing so because my camera has person detection, so there's no need for Frigate.
However, CompreFace doesn't detect the face in the image, but it does when I crop the image and zoom in.
Is there a setting to change to detect a face in an 8 MP image?
I wonder if the face is too small?
Thanks!
I am sending a snapshot URL (5 MP) to the /api/recognize endpoint, and CompreFace is detecting faces correctly.
Make sure to increase your min_area and width. Here are my settings:
```yaml
detect:
  match:
    # save match images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed to consider a result a match
    confidence: 99
    # hours to keep match images until they are deleted
    purge: 500 # orig 168
    # minimum area in pixels to consider a result a match
    min_area: 10000 # if using substream image, set to 3600
    width: 1920
  unknown:
    # save unknown images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed before classifying a name as unknown
    confidence: 98.9
    # hours to keep unknown images until they are deleted
    purge: 500 # orig 8
    # minimum area in pixels to keep an unknown result
    min_area: 12000
    width: 1920
```
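For anyone wanting to automate that call, here is a minimal sketch of sending a snapshot URL to /api/recognize from Home Assistant via a shell_command with curl. The host, port, snapshot URL, and query parameter name are placeholders and assumptions on my part, so verify them against the Double Take API docs for your version.

```yaml
# Home Assistant configuration.yaml sketch -- host, port, snapshot URL,
# and the query parameter name are placeholders/assumptions, not verified
# against the Double Take API documentation.
shell_command:
  # Ask Double Take to fetch a snapshot from the camera and run recognition on it
  double_take_recognize: >-
    curl -s "http://double-take.local:3000/api/recognize?url=http://camera.local/snapshot.jpg"
```

An automation can then simply call shell_command.double_take_recognize whenever the camera reports a person.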
If I'm passing an image to the API manually, can I change the config to resize the image to improve detection?
I'm trying to read the guide, but it's not clear on this.
Thanks for your help
Did you figure this out?
Edit: I think it highlights green when it automatically matches, and red when it misses. Manual training doesn't override this.
How are you using this? Any chance you are using Scrypted or UniFi Protect for the direct snapshot? If so, can you give me details?
I'm not using Scrypted or UniFi Protect. I use Reolink cameras and an NVR, although it would work without an NVR and with any IP camera that has an API with snapshot support.
Here is how I did it:
- When my room presence sensor raises an alarm, I curl the camera to take a snapshot.
- The snapshot is stored in a dedicated folder.
- I use the HA folder_watcher integration to detect when new files are created in that folder.
- When a new file is created, I use curl again to upload that file to Double Take.
- If Double Take recognizes a face, it writes the current timestamp into a sensor for that face. This can be used as a trigger in automations, for example to send a notification that person X was recognized.
Hope this helps
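If it helps, here is a rough sketch of those HA pieces (folder_watcher plus a shell_command that curls the new file to Double Take). Treat it as a sketch only: the folder path, host/port, upload endpoint path, and form field name are placeholders and assumptions, so adapt them to your setup and check the Double Take API docs for the exact upload call.

```yaml
# Home Assistant configuration.yaml sketch -- folder path, host, port,
# endpoint path, and form field name are placeholders/assumptions.
folder_watcher:
  - folder: /config/www/snapshots
    patterns:
      - "*.jpg"

shell_command:
  # Upload a freshly created snapshot to Double Take for recognition.
  # The endpoint and field name below are assumptions; check your
  # Double Take version's API docs for the real upload call.
  upload_to_double_take: >-
    curl -s -X POST -F "file=@{{ file }}" "http://double-take.local:3000/api/recognize/upload"

automation:
  - alias: "Send new snapshot to Double Take"
    trigger:
      - platform: event
        event_type: folder_watcher
        event_data:
          event_type: created
    action:
      # Pass the new file's path into the shell_command template above
      - service: shell_command.upload_to_double_take
        data:
          file: "{{ trigger.event.data.path }}"
```

The per-person timestamp sensor is the last step on top of this; any automation can then trigger off that sensor.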
Sorry if this is a noob question, but if Frigate is triggered and sends images to CompreFace for processing, why do we still need Snapshot and Latest?
My question from Aug. 23 was:
Yes, but you would have to create the integration yourself. There is an API you can talk to.
Read the conversation and you will understand that this was solved a long time ago.
And if this is solved, what is the reason you are bringing this up?