Face and person detection with Deepstack - local and free!

Great news. :smiley: :partying_face:

The author of the deepstack client integration has added new features, namely photo editing in the photo gallery. We can now delete photos and rename them.

Download and install the latest version of the deepstack client, 1.0.2 (deepstack-trainer).

Do you know how to update deepstack-trainer?
I had already installed 1.0.1 in Docker on my Synology.
Thank you.

I have discovered a new project, DeepStack_ActionNET, and it mentions custom models. How can this be integrated into Home Assistant, and how do I properly configure custom models in Docker? I have more or less figured out how to configure it, but I still don't have a clear understanding. How do I recognize what an object is doing through Home Assistant, the same way object detection works?

Here is how I set up custom models in Docker. I did it without proper documentation, so it can't really be called a guide:

Created the folder /modelstore/detection:
$ mkdir -p /modelstore/detection

Stopped the DeepStack_Server container

In the settings of the DeepStack_Server container, in the volumes section, I specified the paths to mount the folder
[screenshot]

Then I specified a variable. I don't know whether it is necessary or not; this is not covered in the instructions for Docker, although it is mentioned for Windows
[screenshot]

An entry appeared in the log: v1/vision/custom/actionnetv2. So it looks like the model was picked up
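If it helps, the way I read robmarkcole's HASS-Deepstack-object README, a custom model that shows up in the log like that would then be referenced from Home Assistant roughly like this. This is only a sketch, not a tested config; the IP and port are the ones used in this thread, and camera.dummy is a placeholder:

# configuration.yaml - sketch, assuming the HASS-Deepstack-object custom integration is installed
image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.108        # DeepStack server host
    port: 5100                       # host port mapped to the container's port 5000
    custom_model: actionnetv2        # model name from the log entry v1/vision/custom/actionnetv2
    confidence: 60
    # targets: can also be set, depending on the integration version
    source:
      - entity_id: camera.dummy      # placeholder camera entity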


Trying to set up Deepstack and folder_watcher like the motion-activated-image-capture-and-classification-of-birds project.

If I copy/paste the automation snippets:

- action:
   data_template:
     file_path: ' {{ trigger.event.data.path }} '
   entity_id: camera.dummy
   service: camera.local_file_update_file_path
 alias: Display new image
 condition: []
 id: '1520092824633'
 trigger:
 - event_data:
     event_type: created
   event_type: folder_watcher
   platform: event
- id: '1527837198169'
 alias: Perform image classification
 trigger:
 - entity_id: sensor.last_added_file
   platform: state
 condition: []
 action:
 - data:
     entity_id: camera.dummy
   service: image_processing.scan
- action:
 - data_template:
     message: Class {{ trigger.event.data.id }} with probability {{ trigger.event.data.confidence
       }}
     title: New image classified
     data:
       file: ' {{states.camera.local_file.attributes.file_path}} '
   service: notify.pushbullet
 alias: Send classification
 condition: []
 id: '1120092824611'
 trigger:
 - event_data:
     id: birds
   event_type: image_processing.image_classification
   platform: event

I get the error:

* while parsing a block collection in "/config/automations.yaml", line 1, column 1 expected <block end>, but found '<block mapping start>' in "/config/automations.yaml", line 20, column 2
* mapping values are not allowed here in "/config/automations.yaml", line 26, column 16
* mapping values are not allowed here in "/config/automations.yaml", line 26, column 15
* while parsing a block collection in "/config/automations.yaml", line 1, column 1 expected <block end>, but found '<block mapping start>' in "/config/automations.yaml", line 21, column 2

Any idea?

Seems like incorrect spaces (indentation) in automations.yaml.
The bird_project folder in robmarkcole/HASS-Machinebox-Classificationbox on GitHub has the correct syntax.
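For comparison, this is roughly how the first automation should look once the indentation is restored (same keys and values as in the snippet above; list items start at column 1, nested keys use two spaces):

# automations.yaml - first automation from the snippet above, re-indented
- id: '1520092824633'
  alias: Display new image
  trigger:
    - platform: event
      event_type: folder_watcher
      event_data:
        event_type: created
  condition: []
  action:
    - service: camera.local_file_update_file_path
      entity_id: camera.dummy
      data_template:
        file_path: '{{ trigger.event.data.path }}'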

Instructions on how to install the server and client side of DeepStack with Portainer CE on Home Assistant

Why do I need to install Portainer CE and how should I install it?

Instructions in English


Installing the DeepStack server

Download the image deepquestai/deepstack
deepquestai/deepstack:gpu – for hardware with a graphics card
deepquestai/deepstack:cpu – for hardware without a graphics card (CPU only)

  1. First, download the image deepquestai/deepstack:cpu (I choose the CPU version)

  2. Create a new container for the DeepStack server

  3. When creating the container, specify the following:
    • The name of the container
    • The image for the container
    • The ports. The container port 5000 is required; instead of host port 5100 any free port can be used
    • Start the container so that it creates its initial settings, then stop it

After the first launch, which creates the initial settings, the container must be stopped for further configuration

When the container is stopped, click on it to open the container view, where you need to click on "Duplicate/Edit"


  4. At the very bottom, go to the Env section (digit 1) and add two variables by clicking Add an environment variable (digit 2):
    VISION-DETECTION = True (object detection)
    VISION-FACE = True (face recognition)

  5. After adding the two variables, apply the parameters and launch the container by clicking the Deploy the container button

  6. Click on Replace, after which the DeepStack server container will start

  7. The DeepStack_Server container is running

  8. In the browser, enter the IP address and port. In my case http://192.168.1.108:5100
    If everything is done correctly, you will see a welcome page saying that the DeepStack server is running and activated. This completes the server setup.
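For reference, the same server setup without Portainer boils down to a single docker run (a sketch; the container name and host port 5100 are simply the values used above):

# DeepStack server (CPU image) with object and face detection enabled
docker run -d \
  --name DeepStack_Server \
  -e VISION-DETECTION=True \
  -e VISION-FACE=True \
  -p 5100:5000 \
  --restart unless-stopped \
  deepquestai/deepstack:cpu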

Installing the DeepStack client (deepstack-trainer)

Download the image techblog/deepstack-trainer

  1. First, download the image techblog/deepstack-trainer:1.0.2
    (as of 30.10.2021 this is the latest version of the image)

  2. Create a new container for the DeepStack client

  3. When creating the container, specify the following:
    • The name of the container
    • The image for the container: techblog/deepstack-trainer:1.0.2
    • The ports. The container port 8080 is required; instead of host port 5150 any free port can be used
    • Start the container so that it creates its initial settings, then stop it


  4. At the very bottom, go to the Env section (digit 1), find the line DEEPSTACK_HOST_ADDRESS and set it to the IP address and port of the DeepStack server (the Home Assistant host's IP address), including the http:// prefix. In my case it is http://192.168.1.108:5100

  5. Click on Replace, after which the DeepStack client container will start

  6. The DeepStack_Client_Trainer container is running

  7. In the browser, enter the IP address and port. In my case http://192.168.1.108:5150
    If everything is done correctly, the DeepStack client page will open. This completes the client setup.
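And the client steps above correspond roughly to this docker run (again a sketch; adjust the IP address, host port and container name to your setup):

# deepstack-trainer client, pointing at the DeepStack server configured above
docker run -d \
  --name DeepStack_Client_Trainer \
  -e DEEPSTACK_HOST_ADDRESS=http://192.168.1.108:5100 \
  -p 5150:8080 \
  --restart unless-stopped \
  techblog/deepstack-trainer:1.0.2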


Did anyone figure out how to tie both options together?
I am struggling with this:
Folder watcher (works)
Motion detected > send image for object recognition (works)
If summary contains “person”, send image for face recognition (no idea)
If summary contains “car”, send image for logo recognition (no idea)

Face recognition and logo recognition work when triggered manually.
Can someone give me some pointers? I probably need some if/else logic.
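Not a tested answer, but the "if summary contains person" step can probably be expressed as a template condition on the object sensor's summary attribute, along these lines (the sensor and entity names below are placeholders for whatever your setup uses):

# automations.yaml - sketch: run face recognition only when the object scan found a person
- alias: Scan face when person detected
  trigger:
    - platform: state
      entity_id: image_processing.deepstack_object_front   # placeholder object sensor
  condition:
    - condition: template
      value_template: >
        {{ 'person' in (state_attr('image_processing.deepstack_object_front', 'summary') or {}) }}
  action:
    - service: image_processing.scan
      data:
        entity_id: image_processing.deepstack_face_front    # placeholder face sensor

A "car → logo" variant would be the same pattern, checking for 'car' and scanning the logo entity instead.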

@DivanX10 you need to configure the custom_model arg and the targets

I don't understand you. Can you show by example, with screenshots, how to set all of this up correctly so that the custom model works? I asked a question here and it remained unanswered. I set it up and an entry appeared in the logs, but then how do I specify everything? There are a lot of questions but no answers. Please write a guide with screenshots on how to use custom models.



Questions for you all.

How are you handling triggering the deepstack image processing service?

Previously I had an automation that ran the image processing every 2 seconds, but I always felt this was a waste of resources, so I set up motion events using MQTT from Blue Iris. This works but seems to miss things. For example, in the past, as I walked to my detached garage the two-second loop would pick me up on the camera and I would get an alert of a positive person detection; I would then get a second one when the car backed into the driveway (same camera). Now, using the Blue Iris motion events, I only get the second detection, and the image shows my car much further out of the garage than before, which leads me to think there is some delay.

Is there a third option I am missing or should I just go back to the 2 second loop?
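For what it's worth, the MQTT-motion variant is usually just an automation along these lines. The topic and entity names below are placeholders rather than Blue Iris specifics, and the extra delayed scan is only an idea for catching the "late" frame described above:

# automations.yaml - sketch: scan when the camera reports motion over MQTT
- alias: Deepstack scan on garage motion
  trigger:
    - platform: mqtt
      topic: blueiris/garage/motion        # placeholder topic
      payload: 'on'
  action:
    - service: image_processing.scan
      data:
        entity_id: image_processing.deepstack_object_garage   # placeholder entity
    - delay: '00:00:03'                    # optional second scan a few seconds later
    - service: image_processing.scan
      data:
        entity_id: image_processing.deepstack_object_garage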

@DivanX10 and @robmarkcole
I’ve investigated deepstack with object and face detection now for a few days and would like to share my findings:

  1. Hardware:
    a) A Jetson Nano 4GB is not sufficient for object and face detection, since deepstack requires CPU power
    b) An Intel i7 workstation with a GTX 1060 and an i7 notebook with M1200 graphics perform fine
    c) VMware ESXi on a Xeon 5560 CPU with Ubuntu 20.04 and Docker on an NFS SSD datastore, without GPU support, is still OK: object detection below 1 sec, face about 3 sec. Haven't checked a Windows VM yet

  Sorry guys, I need to apologize: my mistake was in defining the start command.

  2. Windows vs Docker:
    a) Windows CPU/GPU on a physical host doesn't support object, face and scene, but does support custom models
    b) Windows CPU on a VM doesn't work at all (Python errors during the HTTP request)
    c) Docker on Ubuntu does support object, face and scene, but doesn't support custom models

  3. Deepstack versions:
    a) Windows: gpu-2021.09.1, cpu-2021.09.1
    b) Jetson: jetpack-2021.09.1
    c) ESXi VM: cpu-x5-beta

I got stuck at the same point as DivanX10 when it comes to configuring the "custom_model" arg to make custom models work on Docker. Unlike him, I didn't see my custom model listed in the log when starting the container.

my docker start env:

-e VISION-DETECTION=True -e VISION-FACE=True -e VISION-SCENE=True -e MODELSTORE-DETECTION="/modelstore/detection" -v /mnt/datastore/custom_models:/modelstore/detection

my Windows start env (updated):

--MODELSTORE-DETECTION "C:\DeepStack\cust_models" --VISION-DETECTION True --VISION-FACE True --VISION-SCENE True

Can someone correct my docker command so that it uses my custom model? I tried to find "ARG" in the Docker docs but couldn't make it work.
Should object, face and scene work on Windows?

tnx

Hi all, I'm kind of new to Home Assistant, so I'm just finding my feet. I would like to run DeepStack for object detection and ideally facial recognition, after a few failed break-in attempts.
I'm really confused about how to install DeepStack, as I am running Home Assistant on a Windows 10 machine, inside Oracle VM VirtualBox with an Ubuntu (64-bit) guest operating system.

I have tried to install the DeepStack CPU version for Windows; it does seem to start up, but when I run the image processing in HA it times out. Everything I read shows how to install DeepStack in Docker, but I do not have and am not running Docker.

I'm lost and confused, so any guidance would be very greatly appreciated. Thanks

How do you run the app? Don't forget to add the startup parameters to activate the endpoints.

[screenshot]
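For the Windows CLI that generally means passing the endpoint flags at startup, something like the following (a sketch based on the flags manalishi posted above; the port value is arbitrary):

:: Windows command line - sketch: enable the detection and face endpoints at startup
deepstack --VISION-DETECTION True --VISION-FACE True --PORT 5000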

@manalishi, what do you mean by no support on Windows? There is a native app that runs fine (see the screenshot in the post just above).
I have it running on Windows 10 with an old-gen i5 and 12 GB RAM (most of which is occupied by MySQL and an Oracle VirtualBox machine).
On average, responses are below 1 second:
[screenshot]

Give this version a try; it works on my Ubuntu 20.04 VM:
deepquestai/deepstack:cpu-x5-beta

deepstack_object on Win10, i7-2600K: it doesn't load the object detection.

i7-2600K, 16 GB RAM

Custom models are loaded

@manalishi How do I go about adding that in my VM? This is where I get confused. My apologies if I sound completely stupid.

  1. Build an Ubuntu 20.04 VM
  2. Install Docker (see the sketch after this list)
    Installieren und Verwenden von Docker unter Ubuntu 20.04 | DigitalOcean
    Compose V2 | Docker Documentation
  3. Install Portainer (optional)
    docker volume create portainer_data
    docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
  4. Install HA Supervised
    GitHub - home-assistant/supervised-installer: Installer for a generic Linux system
  5. Create a 2nd VM or run Deepstack on a machine with a GPU. Frigate consumes a lot of CPU without a GPU.
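A minimal sketch for step 2, assuming the Docker convenience-script route (the linked guides cover the package-manager install as well):

# Install Docker Engine on Ubuntu 20.04 using the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER   # optional: run docker without sudo (log out/in afterwards)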

hi all,

deepstack-ui is working very well, so deepstack itself should be fine, but deepstack_object and deepstack_face won't work in Home Assistant.

I have done everything as robmarkcole described on GitHub, and I watched the great video from Everything Smart Home, but there are no detections on my image.

I think my problem is that when I go to Developer Tools → Services in Home Assistant and start the image_processing.scan service manually... nothing happens: no detections, no response, nothing.

It would be great if someone could help me get it working.

Home Assistant Supervised and deepstack are running in Docker on Debian 11 in a VM.

Edit: I tried using my real camera instead of camera.local_file and then it works ^^ But why doesn't it work with the image files?

Make sure your local file camera is pointing to the config folder or to a directory included in the “media_dirs:” setting.
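A minimal sketch of that, assuming the images land under /config/www/deepstack (the folder and file names are examples only):

# configuration.yaml - sketch: local_file camera reading from a folder HA can access
homeassistant:
  media_dirs:
    local: /config/www/deepstack          # example media dir

camera:
  - platform: local_file
    name: deepstack_input
    file_path: /config/www/deepstack/latest.jpg   # example file; must exist at startup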