Facial recognition & room presence using Double Take & Frigate

I have managed to get DeepStack and Double Take running on my Jetson Nano.
I have managed to train DeepStack on my face, and the results show in the Double Take GUI.
But when I select an image to retrain, there is nothing in the name box to select (no available options).
Shouldn't my name be in the drop-down box so I can train the other images against my name?

I have been able to delete unwanted images from the interface, so I know it's responsive.

Or am I missing something obvious?

Hey @wills106, glad to see you got it running! I don’t have a way to add those training names from the UI yet. That’s the next thing I’m working on. If you mount the /.storage folder the container uses, you should be able to create one under the /.storage/train folder. Then refresh the UI and you should be good to go.

volumes:
  - ${PWD}/.storage:/.storage
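For example, from the host directory that is bind-mounted into the container, a folder per person can be created by hand (the name "david" here is just a placeholder):

```shell
# Create a training folder for one person inside the mounted .storage
# directory; refresh the Double Take UI afterwards to see the name.
mkdir -p .storage/train/david
ls .storage/train
```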

I also need to update the README to include more of this info, but wanted to share the updates before everything was 100% ready.

I have this up and running with Frigate; an add-on would definitely make it easier to get the info into HA and usable.

I'm not a Node-RED wizard, so I can see myself having issues, but it's something I'm going to tackle today.

I’d also like to make an Unraid template for the Docker Hub container, if that's OK? I had issues making it run on Unraid, and am currently running it in an Ubuntu VM on Unraid, but I would like to consolidate all of my containers if possible.

I got Node-RED working and the entities are being detected by Home Assistant.

Next step, make an automation to announce who is at the front door.

Is there a way to template out the name value, so I don't need to make a Node-RED flow for every person I train in Double Take?

I'm using this less for room presence (I don't have any cameras indoors); my main goal is a TTS announcement of who is at the front door.

Glad you got it working! Node-RED MQTT supports wildcards, right? Could you just subscribe to double-take/matches/+ and update the same HA front door entity from there? Then you could use an events: state node and watch for changes to know when it detected a new person.
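For reference, the same wildcard idea can also be sketched outside Node-RED as a Home Assistant MQTT sensor. This is only a sketch: the payload field `match.name` is an assumption about what Double Take publishes, so inspect a real message on the topic first and adjust the template.

```yaml
# Hedged sketch: subscribe to all match topics and expose the last
# detected name as a single entity. The value_template assumes a
# payload containing a match.name field; verify against real messages.
sensor:
  - platform: mqtt
    name: "Last Recognized Face"
    state_topic: "double-take/matches/+"
    value_template: "{{ value_json.match.name | default('unknown') }}"
```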

Just added a Feature Request on your GitHub to support zones from Frigate if possible.

Working on what you're describing now. I'm new to Node-RED and MQTT/JSON, but I think I'm about halfway to getting it where I want it.

I have a zone in Frigate for the front porch, so I want to get notifications from Double Take when it finds a match in the front porch zone, for any trained face in Double Take, without creating an entity for every visitor/friend.
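The filtering step described above could be sketched as a small function of the kind a Node-RED function node would run. Everything here is an assumption about the payload shape (the `match` and `zones` fields, and the zone id `front_porch`), so check your actual MQTT messages before relying on it:

```javascript
// Hypothetical filter: accept any trained face, but only when the
// Frigate zones on the message include the front porch zone.
// Field names (match, zones) are assumptions about the payload shape.
function isFrontPorchMatch(payload) {
  const zones = payload.zones || [];
  return Boolean(payload.match && payload.match.name) && zones.includes("front_porch");
}

// In a Node-RED function node this would gate msg.payload;
// here we just exercise it with sample payloads.
console.log(isFrontPorchMatch({ match: { name: "david" }, zones: ["front_porch"] })); // true
console.log(isFrontPorchMatch({ match: { name: "david" }, zones: ["driveway"] }));    // false
```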

Sounds like a great idea to add in Frigate zone support! Happy to get something like this worked into the code.

I posted some ideas on the GitHub issue you replied to. But I'm thinking if you have one HA entity for the front door, then listening for matches on a topic like double-take/frontdoor could solve your issue. The HA entity would then hold the name of the last known person at the front door, and you could create automations around every time it changes to send to a TTS service.

I’d have to update the code to publish to a topic with the camera name, but if you think that would work for you, then I can start to work on that feature.
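The automation half of that idea could look something like the sketch below. The entity id `sensor.front_door_person`, the TTS service, and the media player are all hypothetical placeholders; substitute whatever your setup actually exposes.

```yaml
# Hedged sketch: announce the last recognized person whenever the
# (hypothetical) front door sensor changes to a real name.
automation:
  - alias: "Announce person at front door"
    trigger:
      - platform: state
        entity_id: sensor.front_door_person
    condition:
      - condition: template
        value_template: "{{ trigger.to_state.state not in ['unknown', 'unavailable'] }}"
    action:
      - service: tts.google_translate_say
        data:
          entity_id: media_player.living_room
          message: "{{ trigger.to_state.state }} is at the front door"
```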

Having a camera topic would be perfect. I appreciate all your hard work.

I've updated the beta build to include this ability and would love to hear if it works for you or if you'd like anything adjusted; mainly I'm wondering what you'd like the MQTT topic payload to look like if there are multiple users found. I also updated the UI to allow users to easily filter results by the criteria they care about.

Hi Jako, I have installed Double Take and used it with DeepStack. I still have quite a few issues with recognition. I'm afraid I might be doing something wrong. Steps I took:

  1. Installed Double Take and DeepStack.
  2. Trained DeepStack via images in the Double Take folder (each person has its own folder).
  3. Started using the tools.
  4. Trained every person that got caught via the UI. I see no improvement; it even gets worse. Is training additive, or does it replace previously trained images?

Could you help me out? Some ideas that I have, but I don't know if they would work:

  • Increase the resolution of the camera.
  • Add more personal photos to the folders and retrain.

Any tips or tricks I could use?

Hey @phaeton, thanks for checking out the project. What are the dimensions of the bounding boxes of the faces that you are sending to DeepStack for training? I've found that when I give it a bunch of lower-res images / small bounding boxes of the face, my results seem to have more false positives as well. Is this similar to what you are experiencing?

How many photos do each of the users have?

Hello Jako, can you help me? I installed Frigate, DeepStack, and deepstack_object.
Everything worked, but I cannot find the training folder anywhere.
Here is my configuration file:

volumes:
  - ${PWD}/.storage:/.storage

Did you try using the UI to create a training folder? If you mount the .storage directory like you did, you should also be able to create one in /.storage/train/:name

Hi @Jako,

The bounding box seems quite small; 43x61 and 75x102 are two examples of wrongly recognized people. I had decreased the resolution of the streams, but I can increase it. What is the minimum resolution we need in the bounding box?

There was a discussion about this on one of the issues. It seems like 160x160 is the recommended resolution for CompreFace. I only have 1920x1080 streams, and my cameras sometimes get small bounding boxes when I'm far away. I'm trying not to use those images when training the model, only larger bounding boxes, to see if that helps with detection.
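That "skip the small boxes" rule of thumb can be sketched as a simple size check. The 160x160 threshold comes from the discussion above; the function name and shape are hypothetical, not part of Double Take:

```javascript
// Illustrative helper: is a face bounding box large enough to be worth
// training on? Threshold from the thread (~160x160 for CompreFace).
function usableForTraining(width, height, minSize = 160) {
  return width >= minSize && height >= minSize;
}

console.log(usableForTraining(43, 61));   // false: one of the boxes from the thread
console.log(usableForTraining(320, 240)); // true
```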

Oh well, I'll pump up the resolution a bit more then. Thanks, I'll also check the link.

Thanks so much, Jako. I have tried creating the /.storage/train/ folder in my Docker container, but when I access the user interface to create the training directory, nothing happens.

Do you have Frigate setup? Or have you passed any images to the /api/recognize endpoint?