@Jako I have a camera that has 2 streams and I’ve set the detect stream to lower quality as recommended in the Frigate docs.
When a match is detected it uses this low-quality image… confirmed by going to http://<ip_of_frigate>/api/amcrest/latest.jpg
The higher quality stream is used when Frigate records the event.
How can I get Double Take to use an image from the higher quality stream?
Just wanted to say this is amazing… got it set up and working after a couple of hours of fiddling over the weekend (really easy with the HA add-ons and the CompreFace add-on)… Thanks!
Is there a way to separate the configuration files (and other items needed to retrain quickly) from the Double Take application (i.e. store the config.yml file in the HA directories like Frigate does)?
The goal would be to exclude the add-on from the backups. If I need to restore, I install the add-on and point it to the config file, which lives in my Home Assistant directory that gets backed up regularly, and I'm up and running again quickly.
The reason for this is that CompreFace and Double Take combined take over 2.2 GB, while the rest of my backup is about 200 MB. (This architecture works really well with Frigate when I need to restore.) It helps save cloud storage space and local space on the HA server.
I was toying with the following automation. Frankly, I was not thinking of running it full time; it was more a work of interest. The problem is that every so often the automation triggers when my partner or I are recognised on other cameras, though only occasionally. Can anyone see an issue with this automation, or perhaps suggest a better way of doing it?
# ============================================
- id: "1122334455"
  alias: unlock backdoor for jon or clare
  trigger:
    - platform: mqtt
      topic: double-take/cameras/rear
  condition:
    - condition: state
      entity_id: lock.back_door
      state: locked
    - condition: or
      conditions:
        - condition: template
          value_template: '{{ trigger.payload_json["matches"][0]["name"] == "jon" }}'
        - condition: template
          value_template: '{{ trigger.payload_json["matches"][0]["name"] == "clare" }}'
  action:
    - service: lock.unlock
      entity_id: lock.back_door
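One thing worth double-checking in templates like this: matches[0] only looks at the first match in the payload, so a known face that appears later in the list is ignored. A variant that checks every entry (a sketch using the same names as above; adapt to your setup) could replace the two template conditions:

```yaml
- condition: template
  value_template: >-
    {{ trigger.payload_json['matches']
       | selectattr('name', 'in', ['jon', 'clare'])
       | list | count > 0 }}
```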
I have a Reolink camera with built-in person detection. I really like this set-up, but it feels a bit overkill to use Frigate for the initial detection when my camera could handle it instead. I have the camera create a snapshot when it detects a person. I would like to set this up similarly to Double Take, but I'm not sure how to proceed. Is anybody else running the same set-up who could elaborate on how to do this? Of course, I'm open to other suggestions (e.g. continuing to use Frigate).
Okay, I was able to move it forward using the camera snapshot option and read the output from MQTT. I just can't figure out how to put the names into a sensor when multiple persons are detected.
Obviously, the [0] can also be [1]. I would like the sensor to have the state: ‘Name1, Name2’ if 2 persons have been detected. And just ‘Name1’ if one person was detected.
I was trying this, but did not yet succeed:
{% set result = namespace(names=[]) %}
{% for match in value_json['matches'] %}
  {% set result.names = result.names + [match['name']] %}
{% endfor %}
{{ result.names | join(', ') }}
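For what it's worth, the same result can be had without the loop by mapping over the matches list. A minimal MQTT sensor sketch (the sensor name and state topic here are assumptions; substitute your own camera topic):

```yaml
mqtt:
  sensor:
    - name: "Double Take Entrance"                # hypothetical sensor name
      state_topic: double-take/cameras/entrance   # hypothetical camera topic
      value_template: >-
        {{ value_json['matches'] | map(attribute='name') | join(', ') }}
```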
And I don't want to overcomplicate it, but ideally (as this is a camera at the entrance of my house) I would like to merge 2-3 snapshots so that I get a full list of people entering.
I have noticed that the errors above are triggered by the .db file getting full (depending on your setup). What I've done to fix it is set up a cron job that deletes the .db file and restarts the double_take container every night at a fixed time; this prevents the errors in the UI.
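A minimal sketch of that nightly cleanup (the database path and container name here are assumptions; adjust both to match your install):

```
# crontab entry: every night at 03:00, delete the Double Take database
# and restart the container so it starts with a fresh one
0 3 * * * rm -f /opt/double-take/.storage/database.db && docker restart double_take
```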
So I notice that it detects random people and says it's one of the trained faces with ~90% confidence. Is there a way to mark that it's not that person and have it classified as 'unknown' instead, or at least tell it that it's not that person?
Yeah, I've got one from yesterday with a snapshot of two tradies who were working on my house. It detected three faces, one of which was one of the guys' hands, and all were given a different name from my trained faces. I'm pretty sure the 98% confidence one is the hands, and it matched to my toddler.
I've not had an unknown face appear for a long time; it always picks someone from the faces list. Today I've decided to increase my match confidence to 80 and the unknown confidence to 60 to see if there's any change.
Is it best practice to train using images from the cameras, or to use high-quality “selfies” with profile shots from multiple angles?
Double Take is awesome, such amazing work, thank you.
I'm using CompreFace and getting matched results confirmed green with high hit rates.
I have a slight issue, however: I have no sensors in Home Assistant. Not even one for my camera, and no entities with double_take in the name. I have fully rebooted and reinstalled Double Take and Frigate, but nothing shows up. Any help, please?
How do I write the conditions for unlocking the door when someone is detected? For example, double_take_someone. How do I write the trigger for when someone is identified? Please give an example.
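A minimal sketch along the lines of the automation shared earlier in this thread (Double Take publishes per-person match topics over MQTT; the name and lock entity below are placeholders for your own setup):

```yaml
- alias: unlock when someone is matched
  trigger:
    - platform: mqtt
      topic: double-take/matches/someone   # placeholder: one topic per trained name
  action:
    - service: lock.unlock
      entity_id: lock.back_door            # placeholder lock entity
```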
frigate:
  url: http://xxx.xxx.xxx.xxx:xxxx

  # object labels that are allowed for facial recognition
  labels:
    - person

  attempts:
    # number of times double take will request a frigate latest.jpg for facial recognition
    latest: 10
    # number of times double take will request a frigate snapshot.jpg for facial recognition
    snapshot: 0
    # process frigate images from frigate/+/person/snapshot topics
    mqtt: true
    # add a delay expressed in seconds between each detection loop
    delay: 0

  image:
    # height of frigate image passed for facial recognition
    height: 1080

# global detect settings (default: shown below)
detect:
  match:
    # save match images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed to consider a result a match
    confidence: 60
    # hours to keep match images until they are deleted
    purge: 168
    # minimum area in pixels to consider a result a match
    min_area: 3000

  unknown:
    # save unknown images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed before classifying a name as unknown
    confidence: 40
    # hours to keep unknown images until they are deleted
    purge: 24
    # minimum area in pixels to keep an unknown result
    min_area: 0
Just been playing and I cannot find a way of doing it.
One option would be to untrain and then retrain the same images using the new name. While it sounds like a pain, it should only take a minute to do.