Facial recognition & room presence using Double Take & Frigate

Sorry, I'll test again… I didn't try reaching the web UI. Once it happens again I'll check whether the web UI is still working, OK?

Sounds good! Even when I don't use Double Take, Frigate throws a lot of errors. I just want to see if Frigate's UI crashes, which hasn't happened for me since setting snapshots to 0.

Hope together we can find the issue and make Frigate compatible with Double Take :wink:

I’ve just started using this and it works great!

I have a particular plan in mind for it, can you please let me know the best way to achieve this?

I’m hoping that I can train all the people I would expect to visit my house, then get a notification when someone ‘unknown’ is spotted on any of my outside cameras. Is there an easy way to do this? I’m using Compreface, and it seems to label these ‘unknowns’ as famous people, but with a low confidence score. Is it possible to return an ‘unknown’ if the face isn’t in the database at all?

Thanks!

Jonathan

Thanks for the kind words @jonnyrider! Glad you find it useful.

Right now matches are published to the MQTT topic double-take/matches/:name. What if I also published one for unknown faces? You’re right, CompreFace always seems to return a name if there’s at least one in the set. I do save each result with a match property, though, which is just a boolean indicating whether the confidence level for the match is above the desired threshold. I would most likely use this value to publish to the double-take/matches/unknown topic with a payload similar to the others. Would something like this work?

{
  "id": "1614931108.689332-6uu8kk",
  "duration": 0.85,
  "time": "03/05/2021 02:58:57 AM",
  "attempts": 4,
  "camera": "living-room",
  "room": "Living Room",
  "match": {
    "name": "david",
    "confidence": 42.6,
    "attempt": 1,
    "detector": "compreface",
    "type": "latest",
    "duration": 0.39
  }
}

Definitely. I think if a face has a confidence level of less than, say, 40% (maybe this could be user-defined?), then it could trigger a new attribute marking the face as unknown.

Or maybe I could just use the currently available confidence level to create an automation that does what I need!

I like that idea. The current confidence level determines whether it’s a match or not; a second threshold then determines what name to use. If the confidence is below it, the name changes to ‘unknown’.
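The two-threshold idea described here could be sketched roughly like this. This is only an illustration in Python (Double Take itself is Node.js), and the threshold values and function name are assumptions, not the project's actual code:

```python
# Sketch of the two-threshold labeling discussed above (hypothetical
# values; adjust MATCH_THRESHOLD / UNKNOWN_THRESHOLD to taste).
MATCH_THRESHOLD = 60.0    # at or above this: confident match, keep the name
UNKNOWN_THRESHOLD = 40.0  # below this: relabel the face as "unknown"

def label_result(name, confidence):
    """Return the payload fields for a single detector result."""
    if confidence >= MATCH_THRESHOLD:
        return {"name": name, "confidence": confidence, "match": True}
    if confidence < UNKNOWN_THRESHOLD:
        # Low-confidence guesses (e.g. CompreFace naming a celebrity)
        # are renamed so automations can react to "unknown".
        return {"name": "unknown", "confidence": confidence, "match": False}
    # In between the two thresholds: keep the name, but not a match.
    return {"name": name, "confidence": confidence, "match": False}
```

With the payload above, a confidence of 42.6 would keep the name "david" but set match to false, while anything under 40 would be renamed "unknown".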

I will need to make an update to publish the unknown requests to an MQTT topic, but I can start on that tonight.

Hey! I pushed a new beta build that includes publishing the unknown result to the topic double-take/matches/unknown. I may want to rethink this slightly, since the MQTT publishing only happens once the entire event is done. In this case it took 14 seconds to process all 10 attempts before giving up. Ideally I’d publish unknowns in real time as they happen.

Let me know what you think.

This is what the current payload looks like.

{
  "id": "1620192829.86429-72zjbm",
  "duration": 13.95,
  "timestamp": "2021-05-05T01:34:30.631-04:00",
  "attempts": 10,
  "camera": "living-room",
  "zones": [],
  "unknown": {
    "name": "unknown",
    "confidence": 0,
    "match": false,
    "box": { "top": 289, "left": 840, "width": 59, "height": 73 },
    "type": "latest",
    "duration": 0.83,
    "detector": "deepstack",
    "filename": "04a1ef35-7f56-4850-9bdf-4dc46537ee75.jpg"
  }
}
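For anyone wanting to act on that topic, the payload can be turned into a notification message along these lines. This is a minimal sketch that only covers parsing the JSON shown above; wiring it to a broker (e.g. with paho-mqtt's subscribe and on_message callback) is left to your own setup, and the exact wording of the message is my own:

```python
import json

def describe_unknown(raw):
    """Build a human-readable line from a double-take/matches/unknown payload."""
    payload = json.loads(raw)
    face = payload.get("unknown", {})
    return ("Unknown face on camera '{}' (detector: {}, attempts: {})"
            .format(payload.get("camera", "?"),
                    face.get("detector", "?"),
                    payload.get("attempts", 0)))

# Example using fields from the payload above:
sample = json.dumps({
    "camera": "living-room",
    "attempts": 10,
    "unknown": {"detector": "deepstack", "confidence": 0},
})
print(describe_unknown(sample))
```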

Great, thanks. I’ll have a look tonight!

I’m trying to get Double Take working with Deepstack. I can see through the Docker logs that Double Take is picking up the image from Frigate:

processing front_door: 1620261028.307683-32nrjc

Checking the resource monitor and logs on Deepstack, I can see Double Take is sending the file over to Deepstack, which apparently processes the request without any errors:

[GIN] 2021/05/06 - 00:45:52 | 400 |  6.693617507s |      172.17.0.1 | POST     /v1/vision/face/recognize

However, I then get an error in Double Take:

deepstack process error: Cannot read property 'map' of undefined
Cannot read property 'duration' of undefined

Any suggestions?

EDIT: Looks like the issue might be on the Deepstack side; I found this error in the HA logs. That said, I’m still looking for suggestions :slight_smile: :

[custom_components.deepstack_face.image_processing] Depstack error : Error from Deepstack request, status code: 400

Thanks for trying out my project. Have you restarted DeepStack to see if that clears it up?

I have the error on my side caught in my current beta build, but that just means the response from DeepStack wasn’t what was expected.

For the life of me I could not get it to work, so I decided to move Deepstack from my NAS onto the same box as HA. It’s now working! :slight_smile: Thanks.

I’ve just had a look to see what would happen when a delivery driver came to the door, but I don’t think it’s working as expected:

2021-05-06T15:27:14.050Z
processing front: 1620314832.525425-bldqaf
response:
{
  id: '1620314832.525425-bldqaf',
  duration: 6.54,
  timestamp: '2021-05-06T15:27:20.593Z',
  attempts: 11,
  camera: 'front',
  zones: [],
  matches: []
}
2021-05-06T15:27:20.594Z
done processing front: 1620314832.525425-bldqaf in 6.54 sec

It hasn’t marked the match as ‘unknown’; there just isn’t anything listed.

Am I doing something wrong?

Cheers

Do you have SAVE_UNKNOWN: 'true' in your docker-compose file?

I do:

docker run --name=double-take -p 3121:3121 -e PORT=3121 -e DETECTORS=compreface -e COMPREFACE_URL=http://192.168.0.102:8000 -e COMPREFACE_API_KEY=00000000-0000-0000-0000-000000000002 -e FRIGATE_URL=http://192.168.0.102:5000 -e MQTT_HOST=192.168.0.102:1883 -e MQTT_USERNAME=xxxx -e MQTT_PASSWORD=xxxx -e SAVE_UNKNOWN=true jakowenko/double-take:beta

Did the UI show anything when you loaded it up?

One more thing to try: hit the API directly with an image of an unknown face. You can pass the flag &results=all, which will output everything from the event.

http://localhost:3121/api/recognize?url=https://image.shutterstock.com/image-photo/isolated-shot-young-handsome-male-600w-762790210.jpg&results=all
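The event returned by that endpoint can be summarized programmatically. Here's a small sketch that extracts the top-level matches from the JSON response; the field names come from the payloads in this thread, and the helper name is my own (fetching the event itself, e.g. with curl or Python's requests against the URL above, is left out):

```python
import json

def summarize_event(event):
    """Return (name, confidence, match) tuples from a recognize event."""
    return [(m["name"], m["confidence"], m["match"])
            for m in event.get("matches", [])]

# Trimmed example event, shaped like the /api/recognize response:
event = json.loads("""
{"id": "some-id", "attempts": 1, "camera": "double-take",
 "matches": [{"name": "jared leto", "confidence": 81.32, "match": true}]}
""")
print(summarize_event(event))
```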

The UI only seems to show faces that are recognised. I put the URL in as above and got:

{
  "id": "072f6133-76bd-4be1-9094-bcfc7776c82c",
  "duration": 14.61,
  "timestamp": "2021-05-06T20:13:47.143Z",
  "attempts": 1,
  "camera": "double-take",
  "zones": [],
  "matches": [
    {
      "name": "jared leto",
      "confidence": 81.32,
      "match": true,
      "box": { "top": 35, "left": 231, "width": 135, "height": 176 },
      "type": "manual",
      "duration": 13.47,
      "detector": "compreface",
      "filename": "1b2e1cfc-ce3d-42fc-9c80-ec673672ba5c.jpg"
    }
  ],
  "results": [
    {
      "duration": 14.61,
      "type": "manual",
      "attempts": 1,
      "results": [
        {
          "detector": "compreface",
          "duration": 13.47,
          "attempt": 1,
          "results": [
            {
              "name": "jared leto",
              "confidence": 81.32,
              "match": true,
              "box": { "top": 35, "left": 231, "width": 135, "height": 176 }
            }
          ],
          "filename": "1b2e1cfc-ce3d-42fc-9c80-ec673672ba5c.jpg"
        }
      ]
    }
  ]
}

Sorry, I’m not sure what’s the matter with it, but I’m happy to debug!

How did it detect Jared Leto? Does CompreFace have some sort of celebrity model that I’m not aware of?

Do you get the same result when you upload that photo to the CompreFace UI? This is my result.

It appears so!

I’ve completely removed all the training on my known faces, and putting a photo of Prince through brings back Leonardo DiCaprio!

Do you find that Deepstack works better than Compreface? I’ve used Deepstack in the past, so I may use that instead to see if the recognition is better.

Thanks

Weird. When I created a brand-new CompreFace API key and put a photo through, it didn’t detect any celebrity names. Have you tried a fresh application/service within CompreFace to see if it returns the same results?

I’ve found CompreFace to work a little better, but right now I have only a few images in the training set for each person, and I’m only adding high-quality photos. I’ve been running both detectors side by side to compare their performance.

The UI makes it easy to tell which detector is working better.

(Screenshot of the Double Take UI comparing detector results, 2021-05-07)