Facial recognition & room presence using Double Take & Frigate

Hey everyone, hoping for some feedback on this new feature I’ve been working on before I push a beta build.

To make this application more secure, I wanted to add basic authentication to it. This would allow users to safely run it behind a reverse proxy as well. I was going to have it enabled by default, unless others think it shouldn’t be. It’s easy to turn off if you want as well.

Authentication Features:

  • Requires a password to be set before use
  • All API routes except login require a JWT authorization token or long-term access token to be passed
  • MQTT payloads will include a token that allows HA notifications to still render the image
  • Ability to create / delete long-term access tokens
  • Ability to disable authentication altogether by adding auth: false to the config.yml (see the snippet below)
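
For reference, disabling it would look something like this in config.yml (a minimal sketch with everything else omitted):

```yaml
# config.yml - authentication is enabled by default; set this to opt out
auth: false
```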

(Screenshots of the new authentication UI)


I published release v0.10.0 last night which includes the following.

Added

  • Ability to enable authentication, which adds /login, /logout, and /tokens routes to the UI
  • Ability to create access tokens so third party applications can authenticate with the API
  • Ability to remove trained folder if no trainings exist
  • Include token in published MQTT topics for accessing authenticated images
  • Ability to adjust CompreFace det_prob_threshold value (see the example config below)
  • Web app standalone support with favicon

Changed

  • Use normal web history for UI instead of hash history
  • Increase wait time for API restarts
  • Drop VUE_APP_API_URL env in favor of window.location.origin
  • Revert to default publicPath when building UI

Fixed

  • Replace spaces in subject names when publishing to MQTT
  • Verify detectors are present on /config route when rendering status icons
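
For the det_prob_threshold item above, here’s a rough sketch of how the detector block in config.yml can look with that value set (the url and key below are placeholders, not defaults):

```yaml
# config.yml - CompreFace detector with an adjustable detection probability threshold
detectors:
  compreface:
    url: http://localhost:8000       # placeholder - point at your CompreFace instance
    key: your-recognition-api-key    # placeholder API key
    det_prob_threshold: 0.8          # minimum confidence (0-1) for a face to count as detected
```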

Thank you so much, the project is amazing. I like it a lot. How many faces can I train, and how many faces can be detected?


Hey, thank you for the kind words. I’m glad you are finding it useful. The face detection limits will probably vary depending on which detector you are using. As far as I know, only Facebox limits how many faces you can train; CompreFace and DeepStack don’t have any limits that I’m aware of.

Here’s a sample image I processed with a bunch of people to show it can handle more than just a couple faces.

Thanks, Jako, for replying to me. I am very satisfied with this system. I only use Frigate and compreface-fe; I don’t use DeepStack. Identification results are quite fast.


I got Double Take up and running this week. This is one of the most underrated developments in moving AI / facial detection forward. The GUI is incredible and elegant; it’s very refreshing to see in the HA ecosystem.


I agree, unfortunately I can’t get DeepStack to do a decent job of recognizing faces.
I’ve trained it with high-quality images but it doesn’t match them up with what’s seen by the camera.

However, this isn’t Double Take’s fault.


I have Facebox and CompreFace. CompreFace works way better than Facebox for face recognition.


I tried CompreFace, but it doesn’t seem to work on a Synology NAS.

Hey @stizzi, thank you for the kind words. I’m glad you are enjoying the project. Please reach out if you ever run into any issues or have a feature request.

Hello, is there a way to get the compreface-fe system to run automatically after every shutdown or power failure? I have to go into Portainer and start it manually, while a lot of other software starts automatically when power is restored.

@Cao_Hoa make sure in your compose files you have restart: unless-stopped set for the containers you want to start back up. That should solve your issues during power failures / reboots.

Here are the other options you can use: Compose file version 3 reference | Docker Documentation
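
For anyone else hitting this, a minimal compose sketch with that policy set (the image names and tags are just examples for whatever containers you’re running):

```yaml
# docker-compose.yml - bring containers back up after reboots / power failures
version: "3"
services:
  compreface:
    image: exadel/compreface:latest      # example image/tag
    restart: unless-stopped              # restart unless it was manually stopped
  double-take:
    image: jakowenko/double-take:latest  # example image/tag
    restart: unless-stopped
```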

Agreed! Double Take is awesome but I have been suitably unimpressed by Deepstack. Have trained it with 30+ high-res images and it still thinks the gf is me and I am her about 80% of the time, rendering the whole thing pretty unusable.

I tried installing CompreFace but had some issues getting the container to work and gave up. Will need to give it another go and see if I can get better results that way…

Thanks, Jako, it worked for me. I like it very much; detection is very fast and accurate. Did you create an additional binary_sensor for Home Assistant? I find it fast, but if I walk in and out it doesn’t switch back to inactive, so I can’t automate on it. Have a nice day.

Hey guys, can someone help me automate this? When a trained face is detected I want to play one song, and when an unknown face is detected, play a different song. I tried an automation along the lines of the sketch below, but when I test it by calling the service directly only that one track plays, and when a face is actually detected (known or unknown) no music plays at all.
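
The MQTT topics, media player entity, and URLs here are placeholders rather than my exact config, so treat it as a rough outline:

```yaml
# Hypothetical HA automations - adjust topics, entity IDs, and media URLs to your setup
automation:
  - alias: Play a song when a known face is matched
    trigger:
      - platform: mqtt
        topic: double-take/matches/david       # assumed per-subject match topic
    action:
      - service: media_player.play_media
        target:
          entity_id: media_player.living_room  # placeholder media player
        data:
          media_content_id: "http://192.168.1.10/known.mp3"    # placeholder URL
          media_content_type: music

  - alias: Play a different song for unknown faces
    trigger:
      - platform: mqtt
        topic: double-take/unknown             # assumed topic for unrecognized faces
    action:
      - service: media_player.play_media
        target:
          entity_id: media_player.living_room
        data:
          media_content_id: "http://192.168.1.10/unknown.mp3"  # placeholder URL
          media_content_type: music
```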

I just reinstalled v0.10.2 and got this error, please help me:

EISDIR: illegal operation on a directory, read

@Cao_Hoa Sounds like your compose file has the volume mounts wrong. Can you post it?

Thanks, Jako, I got it working; everything is OK now. Training is very simple now, and much faster and more efficient than before.


@Jako Is it possible to re-run matching on an image under Matches?
For example, I have an image that seems to be clear but doesn’t currently match.
If I upload and Train a new image for that person, I’d like to see whether the selected image in the Match view would match against the newly added Trained image.

Hey @surge919, I know we chatted on discord about this, but wanted to also post an update here since I pushed a new feature to the UI to help with this.

Here’s a little gif demonstrating how I’m handling the reprocessing of images. Currently you have to do it on an image-by-image basis: there is a button in the bottom right of every match image, and clicking it will reprocess that image and update the results.

In this gif you can see the first image was for an unknown user. I train it for myself, then reprocess the image to show it has the updated results.
