…then click on the create account link, etc.
Not sure… you do have the add-on running with no errors, correct? Here’s what I see:
Exadel CompreFace addon log error AVX not detected
I would take that to mean your CPU is not supported, so it won’t start.
I am having a hard time connecting to my CompreFace. I have it up and running in a Docker container, but when connecting I get this error: connect ECONNREFUSED. Any idea why?
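For anyone hitting the same thing: ECONNREFUSED usually means nothing is reachable at the URL/port Double Take is calling, and if Double Take runs in its own container, localhost there points at that container rather than the Docker host. A hedged sketch of the detectors block with a reachable host address (IP, port and key below are placeholders):

detectors:
  compreface:
    # use the Docker host's LAN IP rather than localhost when Double Take is containerized
    url: http://192.168.1.50:8000   # placeholder address and port
    key: <your CompreFace recognition API key>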
I have Home Assistant on an Intel NUC. How can I use CompreFace without changing the NUC? Do you have any suggestions?
Not sure of your overall hardware situation, but if you are running HAOS on the NUC then there is an addon you can use.
Check the first post in this thread for the link.
However, if this is the same machine you were using before, which doesn’t have the required CPU instruction set, then it won’t do much good. You need the right hardware.
Great work! Bravo!!!
It seems to me that the effort and time you have put into this are enormous!
And for us mere mortals, many times we just don’t stop to realize what it takes to build something like this! So, honestly, congratulations!!!
Anyway, I have set everything up:
Frigate and DeepStack.
I am using a QNAP NAS with Docker for it.
Then, after I found the Everything Smart Home video tutorial, I decided to install Double Take too…
It works fine.
But many times it gives me wrong matches.
The problem is that it gives me wrong names, even though the confidence level is much lower than the one in my config.
Here is my config for a match:
detect:
  match:
    # save match images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed to consider a result a match
    confidence: 80
    # hours to keep match images until they are deleted
    purge: 72
    # minimum area in pixels to consider a result a match
    min_area: 10000
But I get a wrong name even when the confidence is, e.g., 40% and the box is 484 (< 10000)!
So my question is: is this normal?
Can I just force “unknown” if the above criteria aren’t met?
Hi,
Is there a way to remove old matches from the Double Take matches folder, like the retain configuration in Frigate?
Cheers!
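For reference, the purge value in the detect config shown earlier in this thread controls how long match images are kept before deletion. A minimal sketch, assuming the unknown section accepts the same key (worth verifying against the Double Take docs):

detect:
  match:
    # hours to keep match images until they are deleted
    purge: 72
  unknown:
    # assumed to behave the same way for unknown images
    purge: 8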
I’ve got Double Take set up successfully along with Frigate and my Ring Doorbell (Ring via Ring-MQTT, using the published motion topic). Does anyone know how to get the image from Ring zoomed in on the face so that recognition is far better? I can run the ring-mqtt RTSP stream through Frigate, but it isn’t designed for that: it goes via the Cloud/AWS, so 24x7 streaming clogs up the Ring doorbell and it stops responding when someone presses it. What Frigate does do, because the stream goes through it, is zoom in on the face, so I’m looking to achieve the same either via Frigate without continuous streaming or directly into Double Take via a topic.
Cheers
Sorry for the noobie question.
I am trying to configure CompreFace with DoubleTake, but I am not able to find any documentation.
Can anyone help point me in the right direction?
Do you have CompreFace already installed and running? If yes, and you are only asking how to connect CompreFace with Double Take (as I understood), then the answer is simple. Just add the following to your Double Take config:
detectors:
  compreface:
    url: <your http://url:port here>
    key: <your CompreFace application API key here>
Otherwise, if you are asking about installing CompreFace, that is a little bit more complicated (but it’s not nuclear physics).
I’m running HA and all the other apps for home automation as Docker containers. For all of them I use the docker-compose method and I don’t use Portainer at all, so I followed this for the CompreFace installation. Since my home server runs Arch Linux, I followed the “To get started (Linux, MacOS)” part at the mentioned link, but it is basically the same for Windows; the main difference is that on Windows you run the commands from CMD or PowerShell instead of a terminal.
Guys, quick question: is it possible to add an IP camera to Double Take without a Frigate installation?
More specifically:
I’m aware that I’m probably asking a silly question, because something has to trigger the face-recognition event and pass a snapshot to Double Take. But if that isn’t possible, please ignore my setup and let’s focus on the main question: is it possible to somehow pass snapshots from an IP camera to Double Take without involving Frigate?
Thx
Thanks @stiw47!
I managed to figure it out. What I was missing was how to generate the API key. I didn’t realize the CompreFace URL serves a full project interface when accessed directly.
Hey @stiw47 ,
From my understanding DoubleTake can only process still images. So it will not be able to process a direct RTSP stream. What Frigate essentially does in that relationship is process the RTSP stream and pull out still images for DoubleTake to process. Something has to provide that service for DoubleTake.
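For example, one way to provide stills without Frigate is a small Home Assistant script that calls camera.snapshot and writes a file Double Take can then be pointed at. A rough sketch, with a placeholder camera entity and path:

script:
  grab_doorbell_still:
    sequence:
      - service: camera.snapshot
        target:
          entity_id: camera.doorbell   # placeholder camera entity
        data:
          # written under /config/www so it is reachable at /local/doorbell_latest.jpg;
          # the path may need to be added to allowlist_external_dirs
          filename: /config/www/doorbell_latest.jpg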
Yup, I already figured out that Double Take will not process the stream, only still images. I want to avoid Frigate because, as I understand it so far, it is not much of a solution without a Coral device (or maybe I’m wrong?). For now, I have the deepstack face and deepstack object custom integrations, which can be triggered by a service. When DeepStack’s image_processing.scan service is triggered, it produces new images in my folder /root/docker-compose-hass/config/deepstack-snapshots/. The latest face-detection snapshot is updated whenever the service is triggered, so I always have the latest face for a given camera at, e.g., /root/docker-compose-hass/config/deepstack-snapshots/deepstack_face_camera_hall_latest.jpg. So in the end, I ended up with:
mount -o bind /root/docker-compose-hass/config/deepstack-snapshots/ /usr/share/nginx/faces/
That way the snapshots folder is mounted to a location where Nginx has access permissions. Later, I made an Nginx conf so I can get the latest snapshot at e.g.
http://192.168.0.21:8083/deepstack_face_camera_hall_latest.jpg
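An alternative to the bind mount could be a small nginx container in docker-compose; a hypothetical sketch using the same port and snapshot folder:

services:
  faces-http:
    image: nginx:alpine
    ports:
      - "8083:80"   # exposes the folder at http://<host>:8083/
    volumes:
      - /root/docker-compose-hass/config/deepstack-snapshots:/usr/share/nginx/html:ro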
I hope I will be able to create some trigger for Double Take, so that when I trigger the image_processing.scan service, Double Take checks the image at http://192.168.0.21:8083/deepstack_face_camera_hall_latest.jpg via configuration inside Double Take.
I still don’t know whether this is doable; it’s a complete mess for now, and it’s already New Year’s Eve, so I gave up for today several hours ago. I’m writing this only because I saw the notification for your reply. HAPPY NEW YEAR!
@stiw47 I don’t use Frigate either. In my case, the CCTV snapshots are coming from Blue Iris.
I set up this REST command in configuration.yaml:
rest_command:
  doorbell_face_detection:
    url: http://192.168.11.213:3001/api/recognize?url=http://192.168.11.211/image/Doorbell%3f%26decode=-1&attempts={{attempts}}&camera=manual
    method: GET
***Replace 192.168.11.213:3001 above with your DoubleTake instance.
***Replace http://192.168.11.211/image/Doorbell%3f%26decode=-1 with your camera snapshot URL.
Via an automation, upon a motion or sensor event, this REST command is called to instruct DoubleTake to do face recognition using the snapshot URL, trying X number of attempts.
Study the Double Take APIs found here.
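For example, a minimal automation of that shape might look like this (the motion sensor entity is a placeholder; the rest_command is the one defined above):

automation:
  - alias: Doorbell face recognition
    trigger:
      - platform: state
        entity_id: binary_sensor.doorbell_motion   # placeholder motion sensor
        to: "on"
    action:
      - service: rest_command.doorbell_face_detection
        data:
          attempts: 5   # substituted into the {{attempts}} placeholder in the URL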
You are my hero, thank you very much, this is exactly what I was looking for.
I tried to see whether it would be able to extract a snapshot from the live stream (with the direct URL of the camera stream):
rest_command:
  hall_face_detection:
    url: http://192.168.0.21:83/api/recognize?url=http://192.168.0.141&attempts=5&camera=manual
    method: GET
But no luck, so I ended up with:
rest_command:
  hall_face_detection:
    url: http://192.168.0.21:83/api/recognize?url=http://192.168.0.21:8083/deepstack_face_camera_hall_latest.jpg&attempts=5&camera=manual
    method: GET
And it is working. The latest snapshot appears on the Double Take UI Matches page.
Thanks and Happy New Year!
I’m finding that only CompreFace is successfully matching faces; Rekognition and DeepStack are not. DeepStack just comes back with no match, and Rekognition keeps erroring with “there are no faces in the image”. Has anyone had issues like this?
{
  "detector": "rekognition",
  "duration": 3.97,
  "name": "unknown",
  "confidence": null,
  "match": false,
  "box": {
    "top": 1014,
    "left": 131,
    "width": 1124,
    "height": 644
  },
  "createdAt": "2023-01-02T23:43:04.246Z",
  "updatedAt": null
}
{
  "detector": "deepstack",
  "duration": 0.61,
  "name": "unknown",
  "confidence": 0,
  "match": false,
  "box": {
    "top": 1229,
    "left": 663,
    "width": 494,
    "height": 507
  },
  "checks": [
    "confidence too low: 0 < 40"
  ],
  "createdAt": "2023-01-02T23:43:04.246Z",
  "updatedAt": null
}
Just started using Double Take as my Frigate install had been stable for a while and I freed up some more resources.
Are there any guides on the best methods for training / tuning?
I am currently using just CompreFace, as it seems to be better than DeepStack. However, the main issue I am having is that CompreFace seems to think everyone MUST be one of the trained people and marks strangers, with high certainty, as part of the training set (essentially, all delivery people are getting labeled as family members).
I started out with high-res mug shots (a one-time ask for everyone to pose) of each family member looking forward, up, down, left and right, then started adding more data from camera samples… However, it seems that if you add low-res images (distance, darkness, poor resolution) it broadly starts matching everyone. Or worse, profile pictures seem to start matching everyone’s ears as a face.
Also, the default setup takes several pictures from Frigate, from snapshot and latest, which do not seem to update with high frequency, so they are basically all the same image. Snapshot and MQTT are zoomed/cropped object images, while latest is the full frame, which makes tuning the Double Take “min_area” setting inconsistent. Most of the time latest seems to just be the uncropped version of the last MQTT or snapshot image anyway, so I turn latest off since it is always inconsistent versus the other two picture types. I would recommend having a separate min_area PER image source type.
Amazing project BTW; it makes setting this up so much easier. I am just not sure I can get any reliable results without massively upgrading the quality of all the cameras in my house / sorting out why CompreFace insists strangers look 90%+ like someone in the trained family-member set.
Hello!
Anyone know how to delete the MQTT entities (not using them anymore) that are created by DoubleTake? I’m unable to remove them even after uninstalling Double Take.