# Double Take

Unified UI and API for processing and training images for facial recognition.
## Why?

There's a lot of great open source software for performing facial recognition, but each package behaves differently. Double Take was created to abstract the complexities of the detection services and combine them into an easy to use UI and API.
## Features
- Responsive UI and API bundled into single Docker image
- Ability to password protect UI and API
- Support for multiple detectors
- Train and untrain images for subjects
- Process images from NVRs
- Publish results to MQTT topics
- REST API can be invoked by other applications
- Disable detection based on a schedule
- Home Assistant Add-on
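Because the REST API can be invoked by other applications, a caller only needs to issue a plain HTTP request. A minimal sketch, assuming a hypothetical `/api/recognize` endpoint that accepts an image URL as a query parameter (the real routes may differ; check the project documentation):

```python
# Hypothetical sketch of invoking a recognition API over HTTP. The
# endpoint path and query parameter below are illustrative assumptions,
# not Double Take's documented API.
from urllib.parse import urlencode

def build_recognize_request(base_url: str, image_url: str) -> str:
    """Build the URL another application could GET to trigger processing."""
    query = urlencode({"url": image_url})
    return f"{base_url}/api/recognize?{query}"

request_url = build_recognize_request(
    "http://localhost:3000", "http://nvr.local/snapshot.jpg"
)
print(request_url)
```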
## Supported Architecture
- amd64
- arm64
- arm/v7
## Supported Detectors
- DeepStack v2021.02.1-2021.06.01
- CompreFace v0.5.0-0.5.1
- Facebox
## Supported NVRs
- Frigate v0.8.0-0.9.0
## Use Cases

### Frigate

Subscribe to Frigate's MQTT topics and process images for analysis.
```yaml
mqtt:
  host: 192.168.1.1

frigate:
  url: http://192.168.1.1:5000
```
When the `frigate/events` topic is updated, the API begins to process the `snapshot.jpg` and `latest.jpg` images from Frigate's API. These images are passed from the API to the configured detector(s) until a match is found that meets the configured requirements. To improve the chances of finding a match, the processing of the images is repeated until the number of retries is exhausted or a match is found.
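The retry behaviour described above can be sketched as a small loop. This is a simplified illustration, not the project's actual implementation; the `detector` callables stand in for HTTP calls to the configured detection services:

```python
# A minimal sketch of the retry logic: re-check each snapshot against
# each detector until a match is found or the retry budget is exhausted.
import time

def process_event(image_urls, detectors, retries=3, delay=1.0):
    """Return the first match that meets the requirements, or None."""
    for _attempt in range(retries):
        for url in image_urls:
            for detector in detectors:
                result = detector(url)
                if result is not None:
                    return result  # first qualifying match wins
        time.sleep(delay)  # wait before re-fetching the snapshots
    return None

# Usage with a stub detector that "matches" on its third call.
calls = []
def stub_detector(url):
    calls.append(url)
    return {"name": "david"} if len(calls) >= 3 else None

match = process_event(
    ["snapshot.jpg", "latest.jpg"], [stub_detector], retries=2, delay=0
)
```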
When the `frigate/+/person/snapshot` topic is updated, the API will process that image with the configured detector(s). It is recommended to increase the MQTT snapshot size in the Frigate camera config.
```yaml
cameras:
  front-door:
    mqtt:
      timestamp: False
      bounding_box: False
      crop: True
      height: 500
```
If a match is found, the image is saved to `/.storage/matches/$(unknown)`.
### Home Assistant

Trigger automations / notifications when images are processed.
If the MQTT integration is configured within Home Assistant, then sensors can be created from the topics that Double Take publishes to.
```yaml
sensor:
  - platform: mqtt
    name: David
    icon: mdi:account
    state_topic: 'double-take/matches/david'
    json_attributes_topic: 'double-take/matches/david'
    value_template: '{{ value_json.camera }}'
    availability_topic: 'double-take/available'
```
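Since the sensor's state is the camera name (via the `value_template`), it can drive an automation directly. A hedged sketch — the camera name, notify service, and alias below are assumptions for illustration, not part of Double Take:

```yaml
automation:
  - alias: Notify when David is seen at the front door
    trigger:
      - platform: state
        entity_id: sensor.david        # the MQTT sensor defined above
        to: 'front-door'               # assumes a Frigate camera named front-door
    action:
      - service: notify.mobile_app_phone   # hypothetical notify target
        data:
          message: 'David was recognized at the front door.'
```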
- Github
- Docker Hub: https://hub.docker.com/r/jakowenko/double-take
I’ve been trying to come up with a room presence solution for the past few months and recently created a project that’s working very well for me.
Before this solution, I tried beacons, BLE, and a few other options. These methods either did not produce the results I was looking for or required the user to carry their phone or some other device. In a perfect world, the user wouldn't have to wear or do anything, right? Well, what about facial recognition?

I recently started using Frigate, which allowed me to detect when people were in a room, but what if I had friends or family over? I needed a way to distinguish each person in the images Frigate was processing. This led me to look at Facebox, CompreFace, and DeepStack. All of these projects provide RESTful APIs for training and recognizing faces from images, but there was no easy way to send the information directly from Frigate to the detector's API.

I tried using Node-Red and built a pretty complicated flow with retry logic, but it quickly became painful to manage and fine-tune. Being a developer, I decided to move my Node-Red logic into its own API, which I then containerized and named Double Take.
Double Take is a proxy between Frigate and any of the facial detection projects listed above. When the container starts, it subscribes to Frigate's MQTT events topic and looks for events that contain a person. When a Frigate event is received, the API begins to process the `snapshot.jpg` and `latest.jpg` images from Frigate's API. These images are passed from the API to the specified detector(s) until a match is found above the defined confidence level. To improve the chances of finding a match, the processing of the images is repeated until the number of retries is exhausted or a match is found. When a match is found, a new MQTT topic is published with the results. This allowed me to build a two-node flow in Node-Red that takes the results and pushes them to a Home Assistant entity.
Double Take can also use multiple detectors at the same time to improve the results. From my testing at home, I've found CompreFace and DeepStack to produce the best results, but I've also added support for Facebox. If you don't use Frigate, you can still use the Double Take API and pass any image to it for facial recognition processing.
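One simple way to combine several detectors is to take the highest-confidence result that clears a threshold. A minimal sketch — the result shape and field names here are made up for illustration and may not match the real detector responses:

```python
# Sketch: pick the best result across multiple detectors, assuming each
# result carries a confidence between 0 and 1.
def best_match(results, min_confidence=0.7):
    """Return the highest-confidence result at or above the threshold."""
    qualifying = [r for r in results if r["confidence"] >= min_confidence]
    return max(qualifying, key=lambda r: r["confidence"], default=None)

results = [
    {"detector": "compreface", "name": "david", "confidence": 0.92},
    {"detector": "deepstack", "name": "david", "confidence": 0.81},
    {"detector": "facebox", "name": "unknown", "confidence": 0.40},
]
match = best_match(results)
```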
I would love feedback on Double Take if anyone tries it, and to hear about any feature requests! I've been using this method for a few weeks now with excellent results.