Hiya, can Double Take recognise 2 faces at once?
Yes it can.
Oooohhhhh… thanks! Can you point me in the right direction? A link to someone’s config example, perhaps?
You don’t need to do anything special. If you train multiple faces and the detected image contains more than one recognized face, then the matches page will show both in the result.
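If it helps, a minimal config is enough to get going; nothing in it is face-count specific. A sketch below — the hostnames and API key are placeholders, and the exact keys are worth double-checking against the Double Take docs:

```yaml
# Minimal Double Take config sketch.
# Hostnames and the API key are placeholders, not real values.
mqtt:
  host: mqtt.local

frigate:
  url: http://frigate.local:5000

detectors:
  compreface:
    url: http://compreface.local:8000
    key: your-compreface-api-key
```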
I’ve recently published version 1.12.0, which includes support for AWS Rekognition and a few other things. I’m still really happy with CompreFace, but it’s been cool to run it alongside a paid service to see how the two compare.
1.12.0 (2022-06-13)
Features
- api: opencv preprocess face check (ed30ad1)
- aws rekognition support (7904852)
- detectors: process images from specific cameras (5d39d0c)
- frigate: sort sub labels alphabetically #217 (82d8736)
- frigate: stop_on_match config option to break process loop (4b98990)
- opencv: adjust classifier settings via config (2e6c512)
- ui: show config errors (ddcaf89)
- ui: upload images to process with detectors (f774406)
Bug Fixes
I’m running 1.12.0-88023ca in Docker on Ubuntu. I also have Home Assistant, Frigate, and Deepstack as the detector, using the CPU for image processing; I still can’t find a decent price on a used Coral AI USB. Over the last two or three days I’ve been building on my existing Frigate and Home Assistant system on a NUC8 by adding Deepstack, Deepstack_UI, and Double Take, slowly progressing through the steps so I’d hopefully understand what I was installing and how it worked. I see you released 1.12.0 in the last 11 hours, so I likely started out with the version prior to this.
At one point, I could upload a file on the Double Take Matches page, the image would eventually show up, and I could then select it and train. Now the image is no longer loading on the Matches page. In the Docker logs for Double Take, I see this every time I upload an image:
info: processing manual: 4d0e4531-31e8-4cee-be4d-d50221c20898
error: opencv error: 187304856
info: done processing manual: 4d0e4531-31e8-4cee-be4d-d50221c20898 in 0.73 sec
info: {
id: '4d0e4531-31e8-4cee-be4d-d50221c20898',
duration: 0.73,
timestamp: '2022-06-15T01:41:53.310Z',
attempts: 1,
camera: 'manual',
zones: [],
matches: [],
misses: []
}
It seems to be working with my one camera on Frigate; images from the camera show up on the Matches page, and I can select some and train them to a folder name just fine. But if I try to upload manually, it’s a no-go.
Hey @tlundste, thanks for reporting the issue you’re running into. So the opencv error only occurs on the manually uploaded image? I’ll add some better logic to still process the image even if opencv fails. Is there anything different about the manual image you’re uploading? If you want to PM it to me or send it over privately on Discord, I can use it to help troubleshoot.
Did you set a value for opencv_face_required at all? It shouldn’t run the image through opencv unless you set that value to true or you’re using Rekognition.
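For reference, a sketch of where that option sits — I believe it’s configured per detector, but the key placement is worth verifying against the docs:

```yaml
detectors:
  deepstack:
    url: http://deepstack.local:5000
    # Only send the image to the detector if opencv finds a face first.
    # Leave unset (or false) to skip the opencv preprocess check.
    opencv_face_required: true
```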
Yep, my opencv_face_required was set to true. I changed it to false and it started working again. I had assumed I would want it to find a face (or faces) before it tried to recognize who it was. Thanks for pointing that out!
From what I’ve been testing with so far for match-then-train process, it’s just a bunch of high-res images from my DSLR that have one person in the image (and some single-person images from my doorbell camera snapshots via Home Assistant).
For testing with just matching, I’m using pictures from the same DSLR that have at least one of the trained faces in them, to see if it finds those faces and labels the others as unknown. I’m having mixed results with those tests… sometimes it draws a face box on a spot in the image where there is no face and attributes it either to unknown or to one of the three trained people I have.
What is the recommended way to match and train? Should I be manually cropping a bunch of images down to just the face of the person I want to train? If so, is there a recommended size/resolution I should be using? And how many images would I need to start getting reliable results?
Glad we pinpointed the issue. Yeah, my idea with opencv was to preprocess the image before sending it to the detector, but if you’re running everything locally, spamming the detector with images probably isn’t bad either. Opencv was originally added for Rekognition since that’s a paid service.
How large are the images you were manually uploading? I wonder if it was just choking on the size of the image.
Most of the detectors will crop and do what they do with the trained image, so you don’t need to crop the faces yourself. In terms of the best size, it’s hard to say and probably varies from detector to detector. I recall reading somewhere that CompreFace recommends the face be around 300x300 pixels.
I’ve trained around 20 images for myself, a lot of them being selfies from my phone, and those seem to produce the best results.
I’m uploading full-resolution, uncropped shots from my DSLR for the manual ones; the images range from 4-10 MB. With those, Deepstack runs anywhere from about 150 ms up to 1.42 s, and the “v1/vision/face/recognize” time is, at a glance, averaging around the 500 ms mark.
I can see .jpg files in the /matches folder, but they are from several days back, and there are no images on the Matches page of Double Take.
I can see full size images in the /train/person-name/ folders.
Should I assume it’s OK to manually delete the files in /matches if no images are being displayed on the Matches page of Double Take? There are a few there, probably left over from a hard crash of my system.
Should I assume the images in the /train/person-name folder must always remain for facial recognition to work?
I’ve just got around to installing the new Frigate beta and starting to play with the update_sub_labels feature in Double Take. Does anyone know what the conditions are for the sub label to be set?
If it’s only updated when confidence is above x%, is that configurable?
Any time the result has a name that isn’t “unknown”, the Frigate sub label will be updated. You can adjust the unknown threshold on the Double Take side to your liking.
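A sketch of the relevant settings, assuming the detect/match/unknown keys from the docs (the percentages are illustrative, not recommendations):

```yaml
frigate:
  update_sub_labels: true

detect:
  match:
    # Minimum confidence (%) for a result to count as a named match.
    confidence: 60
  unknown:
    # Results between this and the match threshold are labeled "unknown"
    # and will not update the Frigate sub label.
    confidence: 40
```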
Thanks Jako, I’ve added a feature request on GitHub for this to have its own config option; the details of why are in the request.
Thank you Eoin. Sorry for the long delay in this response but your suggestion worked.
How did you fix that? Or is it still not working?
Hey Jako,
Would you consider adding CodeProject.AI Server as a supported detector for facial recognition? It’s now the preferred AI server for Blue Iris.
Hello,
I wanted to ask whether there’s a way to use Double Take with UniFi Protect.
I saw that someone managed to make an MQTT add-on for it, and also recently updated it to make use of the smart detections:
Maybe you have another idea for how to make this work?
Maybe this could be an option as well?
This add-on can do a lot: it registers when a car or person is detected and can trigger something when that happens, so maybe there’s a possibility there too?
Here is the documentation of the API that the Homebridge developer wrote:
I just saw your other post regarding UniFi and the possibility of integrating it, so maybe you’re interested or know of something that could work?
Best regards
Just ordered one this morning… but I see it’s already out of stock.
You’d better subscribe to notifications for when it’s back in stock.
Just wanted to come here and say I have finally gotten around to replacing Blue Iris with Frigate + CompreFace + DoubleTake and this is some amazing software. Thanks a lot for your effort on this!
P.S. I am running all of this without any hardware acceleration on 4 cameras, and it’s working very smoothly, despite all the warnings not to do so. Especially with current Coral pricing on eBay, this is much preferred for me.
Hi, did you ever figure it out?