I am running deepstack in a Docker container on Proxmox and HA in a VM. Deepstack works fine when called manually (using curl, and it returns the faces), but I get a 403 error when calling the image_processing service from HA. Below is the deepstack log, with the manual command first and the service call second.
What am I missing? The state of the image_processing entity stays unknown.
And when I try to register a face (with detect_only: false) using:

service: image_processing.deepstack_teach_face
data:
  name: Noor
  file_path: /config/www/learn/noor/noor2.jpg
I get an unknown error response, while the deepstack log shows the register call:
[GIN] 2021/07/18 - 23:12:47 | 403 | 32.008µs | 192.168.2.200 | POST /v1/vision/face/register
While I’m here, is anyone actually getting decent face recognition from Deepstack? I have been sending images from my doorbell to it, and frankly it’s a waste of time: most people get detected as me (even women who look absolutely nothing like me), so it’s a bit of a pity!
As far as I understand, activation of Deepstack is no longer needed if you are running it on CPU or GPU. But I’m running it on an NCS2, and now I can’t activate it. I think the activation process generates a file which is stored locally, and I want to find out where that file is, or where the code responsible for generating it lives. Does anyone know anything about this?
Hi, thanks robmarkcole for such a great custom component!
I have my install of deepstack working well with HA, but I was wondering: is it possible to have different detection areas for different targets on the same camera?
My use case is a driveway camera. I currently have the right 45% of image excluded as there is usually a car parked there, but there is a path in front that people will walk down to go to my front door. So essentially, I would like to have 100% of the image scanned for people, but only the left 55% for cars.
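If it helps, robmarkcole’s deepstack_object component exposes region-of-interest options (roi_x_min, roi_x_max, roi_y_min, roi_y_max, given as fractions of the frame) per platform entry, so one possible workaround is to configure the same camera twice: once for people over the full frame, once for cars over the left 55%. A minimal sketch, not tested here — the server address, camera entity, and names are placeholders, and the exact targets syntax depends on the component version:

```yaml
image_processing:
  # Full frame, people only
  - platform: deepstack_object
    ip_address: 192.168.2.10   # placeholder DeepStack host
    port: 5000
    source:
      - entity_id: camera.driveway        # placeholder camera
        name: driveway_person
    targets:
      - target: person
  # Left 55% of the frame only, cars
  - platform: deepstack_object
    ip_address: 192.168.2.10
    port: 5000
    roi_x_min: 0.0
    roi_x_max: 0.55                       # excludes the right 45%
    source:
      - entity_id: camera.driveway
        name: driveway_car
    targets:
      - target: car
```

The name: under each source keeps the two entities and their saved images distinct.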
I installed HA in docker on machine 1 and got deepstack with deepstackui working on machine 2.
I also installed the HACS deepstack_object component with the following in configuration.yaml:

image_processing:
  - platform: deepstack_object
    target: person
    confidence: 70
    source:
      # - entity_id: camera.amcrest_mediaprofile_channel1_mainstream
      - entity_id: camera.garage_cam_wyze
Now when I call the image processing service and look at the state of my deepstack entity, it is still ‘unknown’: it does not recognize anything, and the directory for the image files is empty. On machine 2, in the deepstack Docker log, I can see a call being made from machine 1, with the below entry in the logs:
Looks like I managed to work my issue out. By adding a name: field after the source camera, I am able to create two separate entities using the same camera, which output two differently named images.
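For anyone following along, the name: field sits under each entry in source:; a minimal sketch with a placeholder camera id and names:

```yaml
source:
  - entity_id: camera.driveway     # same camera twice (placeholder id)
    name: driveway_entity_one      # first entity / saved image name
  - entity_id: camera.driveway
    name: driveway_entity_two      # second entity / saved image name
```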
I run deepstack on Synology and also added the folder on Synology following the instructions, but when I enter the command as root it does not succeed:
{"success":false,"error":"userid not specified","duration":0}
Deepstack log.
Maybe I don’t understand the Russian. I describe my sequence, following your video, below:
1 - Install deepstack on Synology with port 5500 and the environment variables VISION-FACE & VISION-DETECTION = True
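The “userid not specified” error above usually means the register endpoint was called without the userid form field. The DeepStack face API expects an image file plus a userid; a curl sketch (host, port, name, and file path are placeholders):

```shell
curl -X POST \
  -F "userid=Noor" \
  -F "image=@/volume1/photo/noor.jpg" \
  http://localhost:5500/v1/vision/face/register
```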
I have made some amendments to the integration. Now it not only determines the name, but also checks the confidence. Without the confidence check it often identifies incorrectly: for example, the camera recognizes the face of user 1, but because the confidence of the recognized face is not taken into account, deepstack may report that it is user 2, which is an error. So I added a condition: if the username matches and the confidence is above 70, the result is accepted; if the confidence is below 70, it is rejected.
faces.append(
    {"name": name, "confidence": confidence, "bounding_box": box, "prediction": pred}
)
# Translate a recognized name to Russian only when confidence is above 70,
# otherwise report it as unknown
if name in ['Divan', 'divan'] and confidence > 70:
    name = 'Диван'
elif name in ['Oleg', 'oleg'] and confidence > 70:
    name = 'Олег'
elif name in ['Michael', 'michael'] and confidence > 70:
    name = 'Майкл'
elif name in ['Toni', 'toni'] and confidence > 70:
    name = 'Тони'
elif name in ['Julianna', 'julianna'] and confidence > 70:
    name = 'Джулианна'
else:
    name = 'unknown'
names_list.append(name)
faces[0]['bounding_box']['names'] = ', '.join(names_list)
return faces
An example of when the confidence is below 70: the system does not report the user name and shows unknown.
Yeah, this. Tried to train Deepstack with all kinds of images - high res, cam shots, selfies - and it’s just a mess. Thinks the gf is me and I am her like 80% of the time. Gets confused by people wearing glasses. It’s practically unusable.
Is there some kind of best practice on what sort of images should be uploaded for training? I’ve read that people have had better luck with Compareface, may have to give it a go…
I need your help. Thanks to one person who suggested an amazing option and encouraged me to redo everything. This option is good because I don’t need to touch the code of the deepstack_face integration itself; it can all be done with Home Assistant’s own tools.
As a result, I did the following. I created a sensor that outputs the names of recognized faces, and I can also change the face recognition confidence via input_number.deepstack_confidence_face.
If desired, you can create an automation that sends the snapshots to Telegram. In fact, many more automations are possible. For example, a husband and wife in the house: the husband comes into the kitchen, the camera recognizes him and turns on the coffee maker; the wife comes in and it switches on the kettle, or in the room it turns on the TV for the husband and the laptop for the wife. Climate control can adapt to whoever the camera sees: the husband, the wife, or an agreed setting when both are present. There are plenty of options.
alias: 'Process Data from Facial Recognition'
description: ''
trigger:
  - platform: state
    entity_id: image_processing.detect_face_eufy_camera
condition: []
action:
  - service: telegram_bot.send_photo
    data:
      file: /config/www/deepstack/snapshots/detect_face_eufy_camera_latest.jpg
      caption: >
        {% if is_state('image_processing.detect_face_eufy_camera', 'unknown') %}
        {% else %}
        *Someone's in the hallway:*
        {% set detect_face = state_attr('image_processing.detect_face_eufy_camera', 'faces')
          | map(attribute='name') | join(', ') %}
        {% set confidence_face = state_attr('image_processing.detect_face_eufy_camera', 'faces')
          | map(attribute='confidence') | join(', ') %}
        {% set set_confidence = states('input_number.deepstack_confidence_face') %}
        {% if detect_face and confidence_face >= set_confidence %}
        {{ detect_face }}
        {% else %} unknown {% endif %}
        {% endif %}
      target: 11111111
      disable_notification: false
mode: single
I’m trying to implement translation of the names into Russian in Home Assistant, and have it output a list: instead of Igor, Oleg, Masha it should show Игорь, Олег, Маша, changing depending on which faces were recognized. I was only able to configure it for single names, i.e. only one name gets translated.
Let me clarify: with the edits in the integration’s Python file this works, but I can’t get it to work in Home Assistant itself.
faces.append(
    {"name": name, "confidence": confidence, "bounding_box": box, "prediction": pred}
)
# Translate a recognized name to Russian only when confidence is above 70,
# otherwise report it as unknown
if name in ['Divan', 'divan'] and confidence > 70:
    name = 'Диван'
elif name in ['Oleg', 'oleg'] and confidence > 70:
    name = 'Олег'
elif name in ['Michael', 'michael'] and confidence > 70:
    name = 'Майкл'
elif name in ['Toni', 'toni'] and confidence > 70:
    name = 'Тони'
elif name in ['Julianna', 'julianna'] and confidence > 70:
    name = 'Джулианна'
else:
    name = 'unknown'
names_list.append(name)
faces[0]['bounding_box']['names'] = ', '.join(names_list)
return faces
Here is my version; it works only if a single name is recognized, and does not work with several names:
{% set names = state_attr('image_processing.detect_face_eufy_camera','faces') | selectattr('faces','!=','name')| map(attribute='name') | list | join(', ') %}
{% set total_faces = state_attr('image_processing.detect_face_eufy_camera','total_faces') %}
{% set confidence_face = state_attr('image_processing.detect_face_eufy_camera','faces') | selectattr('faces','!=','confidence')| map(attribute='confidence') | join(', ') %}
{% if names in ["Igor" , "igor"] and confidence_face > '60' %}
{% set names_list = "Игорь" %}
{% elif names in ['Oleg', 'oleg'] and confidence_face > '60' %}
{% set names_list = "Олег" %}
{% elif names in ['Masha','masha'] and confidence_face > '60' %}
{% set names_list = "Маша" %}
{% elif names in ['Marina','marina'] and confidence_face > '60' %}
{% set names_list = "Марина" %}
{% elif names in ['unknown'] %}
Неизвестное лицо
{% endif %}
{{ names_list }}
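For what it’s worth, one way to translate several names at once in a Home Assistant template (a sketch, untested — the entity id and threshold mirror the ones above, and the translation mapping is illustrative) is to loop over the faces attribute with a namespace, translating each name through a mapping instead of chaining elifs over the joined string:

```jinja
{% set translations = {'igor': 'Игорь', 'oleg': 'Олег', 'masha': 'Маша', 'marina': 'Марина'} %}
{% set threshold = states('input_number.deepstack_confidence_face') | float(60) %}
{% set ns = namespace(names=[]) %}
{% for face in state_attr('image_processing.detect_face_eufy_camera', 'faces') or [] %}
  {% if face.name | lower in translations and face.confidence | float(0) > threshold %}
    {% set ns.names = ns.names + [translations[face.name | lower]] %}
  {% else %}
    {% set ns.names = ns.names + ['Неизвестное лицо'] %}
  {% endif %}
{% endfor %}
{{ ns.names | join(', ') }}
```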
The issue is resolved; the answer was lying right on the surface: the wonderful deepstack client from t0mer/deepstack-trainer. I was able to specify the names in Russian there, and the recognized Russian names began to be displayed in the sensor.
I have improved the sensor that displays names and attributes:
service: notify.notify
data:
  message: ''
  data:
    photo:
      file: /config/www/deepstack/snapshots/detect_face_eufy_camera_latest.jpg
      caption: "Someone's in the hallway 📷: *{{ states.sensor.persons_names.state }}*"