Face and person detection with Deepstack - local and free!

wow, @DivanX10, you made such a contribution to this topic, I greatly appreciate it!

My problem is that it was such a long time ago when I tried to pull this together, and since I'm far from a coder, simply an enthusiast, I'd really appreciate it if you could create a short write-up from the very beginning on how to properly pull this thing together. Only if you don't mind. Nevertheless, I'll try to redo it from scratch based on your input, but if you can make my journey just slightly easier, that'd be fantastic. No rush, no pressure, only if you feel like it; I think others would also benefit from it. But kudos for all your previous contributions, it's already a big help to noobs like me :slight_smile:

@DivanX10, if you allow me, I have one more quick question about the trainer, more specifically about training images. I tried image training with many methods before, and I've come to the realisation that the key to successfully teaching a person's images is to upload MANY pictures in one batch, otherwise only the LAST image uploaded (sent to deepstack) will be the latest and ONLY reference image.

Did you notice whether it is the same with this trainer as well, or can you slowly upload images one by one over a period of time as it accumulates more images of a given person?

thx in advance!

I use packages.
Read the article "Convenient setup (configuration) of Home Assistant".
This is written in configuration.yaml as follows:

homeassistant:
  packages: !include_dir_named includes/packages # We place any configurations in the folder packages

How to make a sensor with attributes?

We read about creating a sensor with attributes in Ivan Bessarabov's blog post "Creating a template sensor with attributes"

You can create a sensor and control not only the content of the state, but also what is contained in the attributes. Here is an example:

input_text:
  t_state:
  t_attribute:

sensor:
  - platform: template
    sensors:
      my:
        value_template: "{{ states('input_text.t_state') }}"
        attribute_templates:
          a: "{{ states('input_text.t_attribute') }}"

In the packages folder, we create a file with any name; I called mine face_identify_deepstack_sensor.yaml

In the face_identify_deepstack_sensor.yaml file we insert the code below, where image_processing.detect_face_eufy_camera is the name of my camera (the deepstack config is further down), and I also use the auxiliary helper input_number.deepstack_confidence_face, which lets you adjust the face recognition confidence threshold through Lovelace. Instead of image_processing.detect_face_eufy_camera, you need to specify the name of your own camera entity

sensor:
  - platform: template
    sensors:
      persons_names:
        friendly_name: 'Names of identified persons'
        icon_template: mdi:face-recognition
        value_template: >
          {% set faces = state_attr('image_processing.detect_face_eufy_camera','faces') %}
          {% set names = (faces | map(attribute='name') | join(', ')) if faces else '' %}
          {% set min_confidence = (faces | map(attribute='confidence') | min) if faces else 0 %}
          {% set set_confidence = states('input_number.deepstack_confidence_face') | float %}
          {% if names and min_confidence >= set_confidence %}
          {{ names }}
          {% else %}
          unknown
          {% endif %}
        attribute_templates:
          faces: "{{ state_attr('image_processing.detect_face_eufy_camera','faces') }}"
          total_faces: "{{ state_attr('image_processing.detect_face_eufy_camera','total_faces') }}"
          total_matched_faces: "{{ state_attr('image_processing.detect_face_eufy_camera','total_matched_faces') }}"
          matched_faces: "{{ state_attr('image_processing.detect_face_eufy_camera','matched_faces') }}"
          last_detection: "{{ state_attr('image_processing.detect_face_eufy_camera','last_detection') }}"
          friendly_name: "{{ state_attr('image_processing.detect_face_eufy_camera','friendly_name') }}"
          device_class: "{{ state_attr('image_processing.detect_face_eufy_camera','device_class') }}"
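
Once the package is loaded, you can quickly verify the sensor in Developer Tools → Template (the entity names follow from the sensor defined above):

{{ states('sensor.persons_names') }}
{{ state_attr('sensor.persons_names', 'total_faces') }}
{{ state_attr('sensor.persons_names', 'matched_faces') }}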

Create the auxiliary helper input_number.deepstack_confidence_face. You can do it through the GUI, or via YAML by adding the following to input_number.yaml:

deepstack_confidence_face:
  name: "Deepstack: Face Recognition Confidence"
  min: 40
  max: 100
  step: 1
  mode: slider
  icon: mdi:face-recognition

These are the settings we use to work with deepstack

image_processing:
  - platform: deepstack_face
    ip_address: 192.168.1.47
    port: 5100
    timeout: 10000
    detect_only: False 
    save_file_folder: /config/www/deepstack/snapshots/
    save_timestamped_file: False
    save_faces: False
    save_faces_folder: /config/www/deepstack/faces/
    show_boxes: False
    source:
# Connecting the camera Eufy Indoor Cam 2K Pan & Tilt
      - entity_id: camera.eufy_camera_hall
        name: detect_face_eufy_camera  # whatever name we set here is what the entity will be called, so mine is image_processing.detect_face_eufy_camera

Explanation of options:
detect_only: setting detect_only: True results in faster processing than recognition mode, but trained faces will not be listed in the matched_faces attribute
save_timestamped_file: whether to save a timestamped snapshot photo to the folder - true/false
save_faces: whether to save detected faces to a folder - true/false
show_boxes: enable/disable outlining the detected face with a red frame - true/false

Deepstack Training Recommendation
For deepstack face recognition to work correctly, I use photos from different angles; this allows a person's face to be recognized more reliably. If you only upload photos taken straight-on, there will often be erroneous recognition when the face is viewed from an angle. It is also necessary to use confidence: the higher the confidence, the more accurate the identification. The best option for face recognition is a confidence of at least 70, which is why I created the input_number.deepstack_confidence_face slider above.
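
If you teach faces from within Home Assistant rather than through a separate client, the HASS-Deepstack-face integration also exposes a teach service. Here is a minimal sketch of a call from Developer Tools → Services; the person name and file path are example values, and depending on your version the service may be registered as deepstack_face.teach_face or image_processing.deepstack_teach_face, so check your services list:

service: deepstack_face.teach_face
data:
  name: 'Brad'                         # example person name to register
  file_path: '/config/www/brad_1.jpg'  # example path to a training photo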


Good day all.
I’m trying to get deepstack object, face, and UI running with Home Assistant.
When installing it on ubuntu in docker it works fine and it detects and recognises faces.
As soon as I close the SSH connection or restart the host, the deepstack object and face containers stop running and I can’t start them anymore. Deepstack UI keeps working.

Error message in Portainer when trying to start:
“Failure - starting container with non-empty request body was deprecated since API v1.22 and removed in v1.24.”

Any ideas what the reason could be?

wow, just wow.

Thank you in the name of the community!!

Source: “docker - client is newer than server (client API version: 1.24, server API version: 1.21)” - Question-It.com

Docker runs on a client / server model, each release of the Docker Engine has a specific version of the API.

The combination of the release version and the Docker API version looks like this:

https://docs.docker.com/engine/api/v1.26/#section/Versioning

According to the table above, Docker API v1.24 is used in Docker Engine 1.12.x, and Docker API v1.21 is used in Docker Engine 1.9.x. The server requires an API version equal to or newer than the client’s.

You have the following three options.

  1. Upgrade the server side to Docker Engine 1.12.x or higher.
  2. Downgrade the client side to Engine 1.9.x or lower.
  3. Downgrade the API version used at runtime by exporting `DOCKER_API_VERSION=1.21` as a client-side environment variable.

@DivanX10, do you happen to know the answer to this question? Or what is your experience with the trainer you recommended, which btw looks very promising!

I tried image training with many methods before, and I’ve come to the realisation that the key to successfully teaching a person’s images is to upload MANY pictures in one batch, otherwise only the LAST image uploaded (sent to deepstack) will be the latest and ONLY reference image. Did you notice whether it is the same with this trainer as well, or can you slowly upload images one by one over a period of time as it accumulates more images of a given person?

I uploaded several photos at once and also in parts at different times, and I didn’t notice any difference in recognition. It doesn’t depend on the client either; it depends more on the angle of the photos and the clarity of the image from the camera. If the camera takes a clear picture, without blur and without compression artifacts, the recognition confidence is above 70%; if the picture is not very good, recognition is 50-65% and can often be wrong. It also recognizes correctly if you look at the camera without moving, and it works even if you look at the camera from an angle. For integration I use Agent DVR, where pictures are taken in real time via WebRTC; the pictures are not quite perfect, but good, which helps correct recognition. I also use filters to exclude false identifications; without filters, faces are often identified incorrectly.

What filters do I use?

  1. I set the confidence at 70%
  2. In automation I use a condition: if the entrance door is closed, ignore anything identified as unknown; if the entrance door is open and people are coming in, I allow unknown notifications to be sent (see the sketch below)
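
A minimal sketch of that second filter, assuming a hypothetical binary_sensor.front_door for the entrance door, a hypothetical notifier, and the persons_names sensor from earlier in the thread:

automation:
  - alias: "Unknown face alert only while the door is open"
    trigger:
      - platform: state
        entity_id: sensor.persons_names
        to: "unknown"
    condition:
      - condition: state
        entity_id: binary_sensor.front_door  # hypothetical door sensor, "on" = open
        state: "on"
    action:
      - service: notify.mobile_app_my_phone  # hypothetical notifier
        data:
          message: "An unknown person is at the door"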

Thanks for this info, I’ll try to solve it!
It’s a good starting point for me :+1:t2:

Dear @DivanX10

really would not like to waste your time, but if you can, please help me. I think I get where you’re going with this adjustable confidence threshold thingy, but unfortunately I have a slightly simpler setup, and since I’m terrible with value templating, I can hardly extract the needed information from your help. If you can assist me a little with how to properly write my value template for my use case, I would praise your name.

My approach in a nutshell:

  • I run Blue Iris NVR with deepstack integrated
  • based on AI confirmed person detection I trigger a virtual “person is moving before the gate” sensor in HA
  • when this virtual sensor gets triggered, I run an automation which calls the deepstack integration in HA to run a face recognition, save a snapshot of the result, and send a notification to my ios companion app
  • the automation is split into conditional branches based on the deepstack integration’s sensor states. And currently I desperately miss being able to filter on confidence level; this is what I need to solve!

My goal with the notifications:

  • if it is a known, recognized person above a certain confidence level, then include the name in the notifications; later I’d like to trigger other automations based on the person (for instance, open the garden gate to someone I trust), don’t know how to solve that yet :slight_smile:
  • if it is an unknown person, then send a critical alert to my phone (so it gets through even if my phone is muted) saying an unknown person is standing at the gate.
  • I also put in a default branch: if all conditions somehow fail but the AI-confirmed alert from Blue Iris still triggers my “motion sensor”, it simply sends a camera stream to my phone so I can check what the heck is going on.

Here’s my automation in its current form, and I desperately need to take into account the confidence level of the recognition, because many times faces are recognized as MY face and the deepstack sensor’s name state attribute GETS my name in it, just with a very low confidence level. So even if a known person’s name is received but BELOW the needed confidence (let’s say 70), then the UNKNOWN PERSON branch of the automation should run. I hope I explained it clearly :slight_smile:

alias: Face recognition notification test
description: ''
trigger:
  - platform: state
    entity_id: binary_sensor.street_person_motiondetection
    to: 'on'
condition: []
action:
  - service: image_processing.scan
    data: {}
    target:
      entity_id: image_processing.face_counter
  - delay: '00:00:01'
  - event: image_processing.detect_face
    event_data:
      entity_id: image_processing.face_counter
  - delay: '00:00:01'
  - choose:
      - conditions:
          - condition: template
            value_template: >-
              {{ 'Zsolt' in
              state_attr('image_processing.face_counter','faces').0.name }} 
        sequence:
          - service: notify.mobile_app_zsolt_iphone_12_pro
            data:
              message: 'Confidence of recognition: ' # here I would need to extract the confidence level from the sensor state with a value template
              title: Zsolt is at the front gate!
              data:
                attachment:
                  url: >-
                    https://XXXXXXXXXXXXXX/local/deepstackrecog/face_counter_latest.jpg
                  content-type: jpeg
                  hide-thumbnail: false
      - conditions:
          - condition: template
            value_template: >-
              {{ 'unknown' in
              state_attr('image_processing.face_counter','faces').0.name }}
        sequence:
          - service: notify.mobile_app_zsolt_iphone_12_pro
            data:
              title: Unknown person at the gate!
              message: DeepStack AI couldn't recognize the person standing in front of the gate.
              data:
                attachment:
                  url: https://XXXXXXXXXXXXXX/local/deepstackrecog/face_counter_latest.jpg
                  content-type: jpeg
                  hide-thumbnail: false
    default:
      - service: notify.mobile_app_zsolt_iphone_12_pro
        data:
          message: Someone's at the front gate!
          title: There's movement in front of the gate
          data:
            push:
              category: camera
            entity_id: camera.blueiris_street
mode: single

oh, and btw, the instructional links you provided proved to be a big rabbit hole for me; now I’m hunting for a good energy meter to be able to measure my house’s consumption against the power output of my solar panels :smiley:

Try this if you need to pick out names from the identified persons; you can also set a confidence threshold for each person. You can even output it to a sensor.

You can check this code in the template editor, and from there it is not difficult to implement what you want, for example to create a sensor of recognized names or to use it as a condition in an automation.

{% set faces = state_attr('image_processing.detect_face_eufy_camera','faces') %}
{% set names = (faces | map(attribute='name') | join(', ')) if faces else '' %}
{% set confidence_face = (faces | map(attribute='confidence') | min) if faces else 0 %}
{% if 'Brad' in names and 'Joly' in names and confidence_face >= 50 %} That's right, it's Brad and Jolie
{% elif 'Brad' in names and 'Joly' in names and confidence_face < 40 %} A mistake, it's not Jolie or Brad
{% elif 'Brad' in names and confidence_face >= 60 %} That's right, this is Brad
{% elif 'Brad' in names and confidence_face < 60 %} A mistake, this is not Brad
{% elif 'Joly' in names and confidence_face >= 60 %} That's right, this is Jolie
{% elif 'Joly' in names and confidence_face < 60 %} A mistake, it's not Jolie
{% endif %}


How do I configure Agent DVR to work with Deepstack?
How can I display a list of identified persons without brackets?

  1. Go to the general settings of Agent DVR

  2. Choosing intelligence

  3. We specify the IP and port of our deepstack server and save

  4. Go to the camera settings

  5. Click edit camera

  6. Choosing actions

  7. Adding the actions we need

  8. Creating an action for face recognition. We will transmit it to HA via MQTT. You can create any topic; it is only important to specify the same topics when creating the MQTT sensor.

  9. Creating a second action that turns the face recognition status off when movement stops. If this is not done, the name of the last identified person will remain. You don’t have to use this action if you don’t want the name to disappear when movement stops

  10. Creating a motion detection sensor. This will be useful for us in order to launch the Deepstack Face integration for face recognition in HA

  11. Creating a sensor for stopping motion detection.

  12. Open the “Face Recognition” section

  13. In the face recognition settings, select detect, specify confidence and enable the face recognition function

  14. Open the “Tracking device” section

  15. We set it to capture objects only. If we choose simple mode, everything will be recorded, even lights turning on and off, and animals. Object mode captures only what we specified in the object filters; this can be only people, or people and a dog, or a car. It recognizes only the objects specified in the filter.

  16. Open the “Objects” section

  17. In this “Objects” section, we specify which objects the camera should respond to.

  18. Then we need to apply the saved parameters. You can also restart the Agent DVR

  19. Reloading the Agent DVR

We create sensors for Agent DVR and output them to Lovelace

# Sensors via MQTT
sensor:
  - platform: mqtt
    name: "agentdvr eufy camera hall motion"
    state_topic: "Agent/cameras/eufy camera hall/motion"
    icon: mdi:webcam

  - platform: mqtt
    name: "agentdvr eufy camera hall face detected"
    state_topic: "Agent/cameras/eufy camera hall/face"
    icon: mdi:webcam

  - platform: mqtt
    name: "agentdvr eufy camera hall motion detected"
    state_topic: "Agent/cameras/eufy camera hall/motion detected"
    icon: mdi:webcam

  - platform: mqtt
    name: "agentdvr eufy camera hall object detected"
    state_topic: "Agent/cameras/eufy camera hall/object detected"
    icon: mdi:webcam
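
With the motion sensor above in place, you can have it kick off face recognition in HA. A minimal sketch, assuming the entity id HA derives from the sensor name above, the detect_face_eufy_camera entity from earlier, and that Agent DVR publishes "on" to the motion topic (adjust to the payload it actually sends):

automation:
  - alias: "Run face recognition when Agent DVR detects motion"
    trigger:
      - platform: state
        entity_id: sensor.agentdvr_eufy_camera_hall_motion_detected
        to: "on"  # assumed payload
    action:
      - service: image_processing.scan
        target:
          entity_id: image_processing.detect_face_eufy_camera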

This is how the panel and the sensor itself look in the developer panel

How can I add identified persons to the list without brackets?

A visual example in the template editor, without brackets and with brackets.

If we use this code, a list without brackets will be displayed:

{% for k, v in state_attr("image_processing.detect_face_eufy_camera", "matched_faces").items() -%}
{{ k }}: {{ v }}
{% endfor %}

This code will display a list with brackets
{{ state_attr("image_processing.detect_face_eufy_camera", "matched_faces") }}


I am in a hurry to share some good news. Maybe it will help you. A few days earlier, several people encountered a similar problem: only object recognition worked for them, face recognition did not, and this error occurred:

Deepstack Exception: Timeout connecting to Deepstack, the current timeout is 30 seconds, try increasing this value

The problem turned out to be that some people ran the docker image deepquestai/deepstack:cpu-2021.06.1 on hardware with an Intel Celeron J3455 processor. The problem itself is described here

For Deepstack to work on a machine with an Intel Celeron J3455 processor, you need to deploy the deepquestai/deepstack:cpu-x5-beta docker image. You can also try the cpu-x6-beta image, but it hasn’t been tested

image docker deepquestai/deepstack:cpu-x5-beta download here
image docker deepquestai/deepstack:cpu-x6-beta download here
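
If you run Deepstack with docker-compose, switching images is just a change of tag. A minimal sketch; the port and environment values here are examples and should match your existing setup:

services:
  deepstack:
    image: deepquestai/deepstack:cpu-x5-beta  # beta image reported to work on the Celeron J3455
    restart: unless-stopped
    ports:
      - "5100:5000"  # host port must match the port in your HA deepstack config
    environment:
      - VISION-FACE=True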


@DivanX10, and what about unknown, where absolutely no face is recognized and it gives back unknown? How do I extract that?

Many many thanks

Deepstack does not know how to identify a face as unknown; it only shows the names of the photos that were uploaded to the deepstack server via the deepstack client. For this you additionally need to use confidence. The higher the confidence, the more accurate the identification. For example, the camera captures a stranger’s face, and that face can be assigned names that are in the database, so deepstack will not show that this face is unknown. Confidence for such faces usually varies in the range of 50-65. An unknown face can be caught like this: we set the confidence threshold to 70, and if the confidence of the identified face is below 70, the sensor gets the status “unknown”, and you trigger on that in the automation that sends the photos.
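
In template form, the mechanism described above looks like this (a sketch mirroring the persons_names sensor from earlier in the thread, with the threshold hard-coded at 70):

{% set faces = state_attr('image_processing.detect_face_eufy_camera','faces') %}
{% if faces and (faces | map(attribute='confidence') | min) >= 70 %}
{{ faces | map(attribute='name') | join(', ') }}
{% else %}
unknown
{% endif %}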


@DivanX10, it’s great. Face recognition works with the docker image deepquestai/deepstack:cpu-x5-beta.

Thank you!

i’m having some issues with communicating with deepstack - both inside and outside home assistant

have installed deepstack as a separate container using docker compose on a VM

but can’t seem to use it to scan images or train it

seems to be up and running ok, but it’s not receiving anything from HA when i try to train/scan

this is my docker-compose.yml for deepstack:

version: "3.7"
services:
  deepstack:
    image: deepquestai/deepstack:latest
    restart: unless-stopped
    container_name: deepstack
    ports:
      - "5001:5000"
    environment:
      - TZ=Europe/Oslo
      - VISION-FACE=True
      - VISION-DETECTION=False
      - VISION-SCENE=False
    volumes:
      - ./deepstack:/datastore

and my config entry for the HA config.yaml:

image_processing:
  - platform: deepstack_face
    ip_address: 192.168.1.24
    port: 5001
    timeout: 5
    detect_only: False
    save_file_folder: /media/deepstack/snapshots/
    save_timestamped_file: True
    save_faces: True
    save_faces_folder: /media/deepstack/faces/
    show_boxes: True
    source:
      - entity_id: camera.front_door_cam
        name: Front door

when i try to train it via DevTools>Services using this:

It doesn’t seem to error out but then nothing shows up in the Deepstack console. Just remains like this:

Not entirely sure what I’m doing wrong but imagine it’s probably something obvious that I’ve missed…

For reference i have Frigate running on the same VM as Deepstack and that has no issues communicating with Home Assistant.

Install this deepstack client. It is much more convenient to use, plus there is a gallery of trained images.

thanks for the link! i actually tried this earlier based on your posts further up this thread but it seems to give me the same problems. nothing showing up in the Deepstack console

You are pointing at the media folder in your config; try specifying paths under the config folder instead, like this:

image_processing:
  - platform: deepstack_face
    ip_address: 192.168.1.50
    port: 5500
    timeout: 10000
    detect_only: False 
    save_file_folder: /config/www/deepstack/snapshots/
    save_timestamped_file: False
    save_faces: False
    save_faces_folder: /config/www/deepstack/faces/
    show_boxes: False
    source:
# Connecting the camera
      - entity_id: camera.ip_camera
        name: detect_face