Face and person detection with Deepstack - local and free!

Hi,

I’ve been using Deepstack for a while now, but I keep getting low-res pictures for my snapshots :confused:
I’m using 4K cameras and getting horrible quality in the snapshots.
I’m using BI (Blue Iris) and running the image processing on that (so no direct RTSP to HA); could that be the problem? Very few snapshots come through at full resolution.

Thanks,
Didi

Hi,

I am running Deepstack in Docker on Proxmox and HA in a VM. Deepstack works fine when run manually (using curl, and it returns the faces), but I get a 403 error when calling the image_processing scan service. See the Deepstack log below: the first entry is the manual command and the second is the service call.

What am I missing? The state of the image_processing entity stays unknown.

[GIN] 2021/07/18 - 22:49:19 | 200 | 462.093955ms | 192.168.2.200 | POST /v1/vision/detection
[GIN] 2021/07/18 - 22:49:58 | 403 | 52.38µs | 192.168.2.200 | POST /v1/vision/face/recognize

And when I try to register a face (with detect_only: false) using:
service: image_processing.deepstack_teach_face
data:
name: Noor
file_path: /config/www/learn/noor/noor2.jpg

I get an unknown-error response, while the Deepstack log shows a register call:
[GIN] 2021/07/18 - 23:12:47 | 403 | 32.008µs | 192.168.2.200 | POST /v1/vision/face/register

Thanks!

There is no UI. To remove a face, you need to use the appropriate API commands:

https://docs.deepstack.cc/face-recognition/#managing-registered-faces
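For reference, a minimal Python sketch of those management endpoints. The paths follow the linked documentation; the host/port and the example user IDs are placeholders for your own instance, so check the docs for the exact response shapes.

```python
import json
import urllib.parse
import urllib.request

# Adjust to your own Deepstack instance.
DEEPSTACK = "http://localhost:80"


def face_endpoint(action: str, base: str = DEEPSTACK) -> str:
    """Build the URL for a face-management action ('list' or 'delete')."""
    return f"{base}/v1/vision/face/{action}"


def list_faces(base: str = DEEPSTACK) -> list:
    """Return registered face names (POST /v1/vision/face/list)."""
    req = urllib.request.Request(face_endpoint("list", base), data=b"", method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("faces", [])


def delete_face(userid: str, base: str = DEEPSTACK) -> bool:
    """Remove one registered face (POST /v1/vision/face/delete)."""
    data = urllib.parse.urlencode({"userid": userid}).encode()
    req = urllib.request.Request(face_endpoint("delete", base), data=data)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("success", False)


# Example usage (against a running Deepstack server):
#   list_faces()          # names registered so far
#   delete_face("noor")   # removes the face registered as "noor"
```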

While I’m here: is anyone actually getting decent face recognition from Deepstack? I have been sending images from my doorbell to it, and basically it’s a waste of time. Most people get detected as me (even women who look absolutely nothing like me), so it’s a bit of a pity!


I use Docker; there is no command for it… Can I use JS via a Node-RED function node to manage it?

You can use this Docker container to manage Deepstack.

It works well; take a look.

My experience has been similar.

Hi,

As far as I understand, activation of Deepstack is no longer needed if you are running it on a CPU or GPU. But I’m running it on an NCS2, and now I can’t activate it. I think the activation process generates a file that is stored locally, and I want to find out where that file is, or where the code responsible for generating it lives. Does anyone know anything about this?

Thanks!

Is this component broken in 2021.7.4? I get an error when I click Check Configuration in Home Assistant. Is anyone else facing the same issue?

EDIT: I removed the folder from custom_components and reinstalled through HACS; it seems to be OK now.

Hi, thanks robmarkcole for such a great custom component!
I have my install of Deepstack working well with HA, but I was wondering: is it possible to have different detection areas for different targets on the same camera?
My use case is a driveway camera. I currently have the right 45% of the image excluded, because there is usually a car parked there, but there is a path in front of it that people walk down to reach my front door. So essentially I would like 100% of the image scanned for people, but only the left 55% for cars.

My current config looks like this:

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.55.252
    port: 80
    api_key: !secret deepstack_key
    confidence: 70
    save_file_folder: /config/www/deepstack/
    save_file_format: jpg
    save_timestamped_file: True
    always_save_latest_file: True
    scale: 0.5
    roi_x_min: 0
    roi_x_max: 0.55
    roi_y_min: 0.1
    roi_y_max: 1
    targets:
      - target: vehicle
      - target: person
      - target: car
        confidence: 60
    source:
      - entity_id: camera.driveway

I’ve tried simply adding a second entity using the same camera, but that doesn’t seem to work; I don’t get any hits.

Thanks!

I installed HA in Docker on machine 1 and got Deepstack with deepstack-ui working on machine 2.
I also installed the HACS component for deepstack_object, with the configuration.yaml below:
image_processing:
  - platform: deepstack_object
    ip_address: 10.0.0.20
    port: 5000
    timeout: 10
    save_file_folder: /config/www/deepstack
    save_file_format: png
    save_timestamped_file: True
    always_save_latest_file: True
    targets:
      - target: person
        confidence: 70
    source:
      #- entity_id: camera.amcrest_mediaprofile_channel1_mainstream
      - entity_id: camera.garage_cam_wyze

Now when I call the image_processing scan service and look at the state of my Deepstack entity, it is still marked ‘unknown’; it does not detect anything, and the directory for image files is empty. On machine 2, the Deepstack Docker log shows a call coming in from machine 1 with the entry below:

[GIN] 2021/07/27 - 20:26:50 | 401 | 38.901862ms | 10.0.0.50 | POST /v1/vision/detection

What could I be doing wrong here? Thanks!

Looks like I managed to work my issue out. By adding a name: field after each source camera entry, I am able to create two separate entities using the same camera, which output two differently named images.
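For reference, a sketch of such a config for the driveway question above, combining the name: trick with a per-entry ROI (the IP/port are taken from the earlier post; the entity names are illustrative):

    image_processing:
      # Entity 1: scan the full frame for people
      - platform: deepstack_object
        ip_address: 192.168.55.252
        port: 80
        targets:
          - target: person
        source:
          - entity_id: camera.driveway
            name: driveway_people
      # Entity 2: scan only the left 55% of the frame for cars
      - platform: deepstack_object
        ip_address: 192.168.55.252
        port: 80
        roi_x_min: 0
        roi_x_max: 0.55
        targets:
          - target: car
            confidence: 60
        source:
          - entity_id: camera.driveway
            name: driveway_cars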

Can you help me?
I have the same error with deepstack-ui, but only for face recognition (browsing a file for object detection works fine).
I run Deepstack on a Synology and also added the folder on the Synology following the instructions, but when I enter the command as root it does not succeed:
{"success":false,"error":"userid not specified","duration":0}
Please help!

Try watching the video; maybe it will help you find your mistake.

Dear Divan,
Thank you very much for your video. I watched it several times and followed it step by step, but I get the same error with face recognition.

Maybe I don’t understand the Russian. I’ll describe my sequence, following your video, below:

1. Install Deepstack on the Synology with port 5500 and the environment variables VISION-FACE and VISION-DETECTION set to True.

I receive this log:

2. Install deepstack-ui with port 8501 and add the environment variables DEEPSTACK_PORT = 5500 and DEEPSTACK_IP = 192.168.2.20 (the IP of the NAS).

3. Install deepstack-trainer with port 5550 and add DEEPSTACK_HOST_ADDRESS = http://192.168.2.20:5500.

4. After getting the error with face recognition, I added a folder with one photo and entered the command.

(I had disabled all the other containers.) I don’t know where the mistake is; could you show me how to fix it? Thanks again!

  1. Try deleting everything and installing it again.
  2. Check your settings against my settings.

Settings of the Deepstack server container:

Settings of the Deepstack client container:

Use these images:
deepquestai/deepstack
robmarkcole/deepstack-ui


I have made some amendments to the integration. Now it not only determines the name but also checks the confidence. Without a confidence check it often identifies faces incorrectly: for example, the camera recognizes the face of user 1, but because the confidence of the recognized face is not taken into account, Deepstack can report that it is user 2, which is an error. So I added a condition: if the confidence for a recognized user name is above 70, the result is accepted; if it is below 70, it is rejected.

Paste this code into the file image_processing.py:

        faces.append(
            {"name": name, "confidence": confidence, "bounding_box": box, "prediction": pred}
        )
        # Only accept a recognised name when the confidence is above 70;
        # anything at or below that is reported as 'unknown'.
        if name in ['Divan', 'divan'] and confidence > 70:
            name = 'Диван'
        elif name in ['Oleg', 'oleg'] and confidence > 70:
            name = 'Олег'
        elif name in ['Michael', 'michael'] and confidence > 70:
            name = 'Майкл'
        elif name in ['Toni', 'toni'] and confidence > 70:
            name = 'Тони'
        elif name in ['Julianna', 'julianna'] and confidence > 70:
            name = 'Джулианна'
        else:
            name = 'unknown'
        names_list.append(name)
    if faces:  # guard against an empty result before indexing faces[0]
        faces[0]['bounding_box']['names'] = ', '.join(names_list)
    return faces
An example where the confidence is below 70: the system will not report the user name and shows unknown.

An example where the confidence is above 70: the system reports the correct user name.
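The if/elif ladder above can also be written as a lookup table, which makes adding people easier. A sketch of the same logic (the names and the 70 threshold are taken from the post):

```python
# Map registered Deepstack names (lowercased) to display names.
NAME_MAP = {
    "divan": "Диван",
    "oleg": "Олег",
    "michael": "Майкл",
    "toni": "Тони",
    "julianna": "Джулианна",
}


def display_name(name: str, confidence: float, threshold: float = 70) -> str:
    """Return the translated name only when confidence beats the threshold."""
    if confidence > threshold:
        return NAME_MAP.get(name.lower(), "unknown")
    return "unknown"
```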


Hello, I had the same problem; I found the solution on the Deepstack forum here.

It’s because the DETECTION endpoint is not activated. Run the Deepstack container with the environment variable -e VISION-DETECTION=True.
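More generally, the HTTP status in the [GIN] log line is a useful first clue. A small helper sketching the mapping seen in this thread (this mapping is an assumption drawn from these reports, not official documentation):

```python
def diagnose_deepstack_status(code: int) -> str:
    """Map an HTTP status from the Deepstack [GIN] log to a likely cause."""
    hints = {
        200: "request succeeded",
        401: "unauthorized: Deepstack was started with an API key that the "
             "client is not sending (check api_key in your HA config)",
        403: "forbidden: the endpoint is not activated (start the container "
             "with e.g. -e VISION-DETECTION=True or -e VISION-FACE=True)",
    }
    return hints.get(code, "unknown status; check the Deepstack logs")
```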

Yeah, this. I tried to train Deepstack with all kinds of images (high-res, cam shots, selfies) and it’s just a mess. It thinks my girlfriend is me, and me her, about 80% of the time, and it gets confused by people wearing glasses. It’s practically unusable.

Is there some kind of best practice for what sort of images should be uploaded for training? I’ve read that people have had better luck with CompreFace; I may have to give it a go…

I need your help. Thanks go to one person who gave me an amazing option and encouraged me to redo everything. This option is good because I don’t need to go into the code of the deepstack_face integration itself; I can do it with Home Assistant alone.

As a result, I did this: I created a sensor that outputs the names of recognized faces, and I can also change the face-recognition confidence threshold via input_number.deepstack_confidence_face.

sensor:
  - platform: template
    sensors:
      persons_names:
        friendly_name: 'Names of identified persons'
        icon_template: mdi:face-recognition
        value_template: >
          {% set detect_face = state_attr('image_processing.detect_face_eufy_camera','faces') | selectattr('faces','!=','name')| map(attribute='name') | join(', ') %}
          {% set confidence_face = state_attr('image_processing.detect_face_eufy_camera','faces') | selectattr('faces','!=','confidence')| map(attribute='confidence') | join(', ') %}
          {% set set_confidence = states('input_number.deepstack_confidence_face')%}
          {% if detect_face and confidence_face >= set_confidence %}
          {{ state_attr('image_processing.detect_face_eufy_camera','faces') | selectattr('faces','!=','name')| map(attribute='name') | join(', ') }}
          {% else %}
          unknown
          {% endif %}

If desired, you can create an automation that sends the pictures to Telegram. In fact, you can implement many more automations. For example, a husband and wife live in the house: the husband comes into the kitchen, the camera recognizes him and turns on the coffee maker; when the wife comes in, it turns on the kettle for her, or the TV in the room for him and the laptop for her, or adjusts the climate control. If the camera sees only the husband, the climate adjusts to him; if it sees the wife, to her; and if they are together, to a setting agreed for both. There are plenty of options.

alias: 'Process Data from Facial Recognition'
description: ''
trigger:
  - platform: state
    entity_id: image_processing.detect_face_eufy_camera
condition: []
action:
  - service: telegram_bot.send_photo
    data:
      file: /config/www/deepstack/snapshots/detect_face_eufy_camera_latest.jpg
      caption: >
        {% if is_state('image_processing.detect_face_eufy_camera', 'unknown') %}
        {% else %}    
        *Someone's in the hallway:* {% set detect_face =
        state_attr('image_processing.detect_face_eufy_camera','faces') |
        selectattr('faces','!=','name')| map(attribute='name') | join(', ') %}
        {% set confidence_face =
        state_attr('image_processing.detect_face_eufy_camera','faces') |
        selectattr('faces','!=','confidence')| map(attribute='confidence') |
        join(', ') %} {% set set_confidence =
        states('input_number.deepstack_confidence_face')%} {% if detect_face and
        confidence_face >= set_confidence %} {{
        state_attr('image_processing.detect_face_eufy_camera','faces') |
        selectattr('faces','!=','name')| map(attribute='name') | join(', ') }}
        {% else %} unknown {% endif %}{% endif %}
      target: 11111111
      disable_notification: false
mode: single

In Home Assistant, I’m trying to implement translating the names into Russian, output as a list: instead of Igor, Oleg, Masha it should show Игорь, Олег, Маша, changing depending on which faces were identified. I’ve only managed to configure it for single names, i.e. only one name gets translated.

To clarify: with the edits to the integration’s Python file this works, but I can’t make it work inside Home Assistant.

        faces.append(
            {"name": name, "confidence": confidence, "bounding_box": box, "prediction": pred}
        )
        if name in ['Divan', 'divan'] and confidence > 70:
            name = 'Диван'
        elif name in ['Oleg', 'oleg'] and confidence > 70:
            name = 'Олег'
        elif name in ['Michael', 'michael'] and confidence > 70:
            name = 'Майкл'
        elif name in ['Toni', 'toni'] and confidence > 70:
            name = 'Тони'
        elif name in ['Julianna', 'julianna'] and confidence > 70:
            name = 'Джулианна'
        else:
            name = 'unknown'
        names_list.append(name)
    faces[0]['bounding_box']['names'] = ', '.join(names_list)
    return faces

Here is my version, which works only if a single name is recognized; it does not work with several names:

{% set names = state_attr('image_processing.detect_face_eufy_camera','faces') | selectattr('faces','!=','name')| map(attribute='name') | list | join(', ') %}
{% set total_faces = state_attr('image_processing.detect_face_eufy_camera','total_faces') %}
{% set confidence_face = state_attr('image_processing.detect_face_eufy_camera','faces') | selectattr('faces','!=','confidence')| map(attribute='confidence') | join(', ') %}
{% if names in ["Igor" , "igor"]  and confidence_face > '60' %} 
  {% set names_list = "Игорь" %}
{% elif names in ['Oleg', 'oleg'] and confidence_face > '60' %} 
  {% set names_list = "Олег" %}
{% elif names in ['Masha','masha'] and confidence_face > '60' %} 
  {% set names_list = "Маша" %}
{% elif names in ['Marina','marina'] and confidence_face > '60' %} 
  {% set names_list = "Марина" %}
{% elif names in ['unknown'] %} 
  Неизвестное лицо
{% endif %}
{{ names_list }}
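For what it’s worth, a sketch of a template that handles several recognized faces at once, using a Jinja namespace loop (the attribute names follow the posts above; the mapping and the 60 threshold are illustrative):

    {% set mapping = {'igor': 'Игорь', 'oleg': 'Олег', 'masha': 'Маша', 'marina': 'Марина'} %}
    {% set threshold = states('input_number.deepstack_confidence_face') | float(60) %}
    {% set faces = state_attr('image_processing.detect_face_eufy_camera', 'faces') or [] %}
    {% set ns = namespace(names=[]) %}
    {% for face in faces if face.confidence | float(0) > threshold %}
      {% set ns.names = ns.names + [mapping.get(face.name | lower, 'Неизвестное лицо')] %}
    {% endfor %}
    {{ ns.names | join(', ') if ns.names else 'unknown' }}

Building the list in the loop means each face is translated independently, so two or more recognized names come out as e.g. Игорь, Олег rather than falling through to the single-name branches.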
face was recognized even with below than minimum confidence threshold.. how to avoid? · Issue #50 · robmarkcole/HASS-Deepstack-face · GitHub