Face and person detection with Deepstack - local and free!

Thanks for adding support for face recognition. I'll try it out soon. By the way, it says I can use multiple pictures to teach a single face, but I couldn't find an example format for this. Any pointers?

1 Like

Can you show an example of the path you used to teach?

1 Like

I’m on Mac and use /Users/robincole/.homeassistant/images/img.jpg. Likewise, I get no errors with .homeassistant/images/superman_1.jpeg but nothing happens, and I don’t understand why, as the component checks that the file path is valid. I suggest you use absolute paths, and also add them to your whitelist. I have:

    - /Users/robincole
    - /Users/robincole/.homeassistant/images/
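(In case the context isn't obvious: those entries live under the `whitelist_external_dirs` key of the `homeassistant:` block in configuration.yaml — a sketch of how mine is laid out:)

```yaml
homeassistant:
  # directories Home Assistant is allowed to read files from
  whitelist_external_dirs:
    - /Users/robincole
    - /Users/robincole/.homeassistant/images/
```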
1 Like

Good call. I’ll try that, thanks

1 Like

This sounds cool

Do you have to teach it to recognise the back of someone’s head to get presence detection working when someone leaves the house? :wink:


My HA runs in Docker on a Synology. I also installed DeepStack in Docker on the same Synology.

When I try to run the image_processing.deepstack_teach_face service in HA with this service data…

  "name": "Ben",
  "file_path": "/config/faces/Ben.jpg"

…I received this error in the log:

2019-01-22 23:24:22 ERROR (MainThread) [homeassistant.components.websocket_api.http.connection.140244849846424] Error handling message: {'type': 'call_service', 'domain': 'image_processing', 'service': 'deepstack_teach_face', 'service_data': {'name': 'Ben', 'file_path': '/config/faces/Ben.jpg'}, 'id': 20}
Traceback (most recent call last):
  File "/usr/src/app/homeassistant/components/websocket_api/decorators.py", line 17, in _handle_async_response
    await func(hass, connection, msg)
  File "/usr/src/app/homeassistant/components/websocket_api/commands.py", line 148, in handle_call_service
  File "/usr/src/app/homeassistant/core.py", line 1121, in async_call
    self._execute_service(handler, service_call))
  File "/usr/src/app/homeassistant/core.py", line 1145, in _execute_service
    await self._hass.async_add_executor_job(handler.func, service_call)
  File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/image_processing/deepstack_face.py", line 133, in service_handle
    classifier.teach(name, file_path)
  File "/config/custom_components/image_processing/deepstack_face.py", line 181, in teach
    self._url_register, name, file_path)
  File "/config/custom_components/image_processing/deepstack_face.py", line 88, in register_face
    _LOGGER.error("%s error : %s", CLASSIFIER, response.json())
  File "/usr/local/lib/python3.6/site-packages/requests/models.py", line 897, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/simplejson/__init__.py", line 518, in loads
    return _default_decoder.decode(s)
  File "/usr/local/lib/python3.6/site-packages/simplejson/decoder.py", line 370, in decode
    obj, end = self.raw_decode(s)
  File "/usr/local/lib/python3.6/site-packages/simplejson/decoder.py", line 400, in raw_decode
    return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
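(Side note on that traceback: it ends in a bare `JSONDecodeError` because the component calls `response.json()` on a reply that isn't JSON at all — e.g. an error page, or a reply from whatever else is listening on that port. A defensive sketch, with a hypothetical helper name, that would surface the real problem instead:)

```python
def parse_deepstack_reply(response):
    """Decode DeepStack's JSON reply, surfacing the raw body when it isn't JSON.

    `response` is any requests-style object exposing .json(), .status_code
    and .text. Both json and simplejson decode errors subclass ValueError.
    """
    try:
        return response.json()
    except ValueError:
        # The body was not JSON -- include status and a snippet of the raw text
        raise RuntimeError(
            "DeepStack returned a non-JSON reply (HTTP %s): %r"
            % (response.status_code, response.text[:200])
        )
```

With something like this, the log would have shown straight away that the container wasn't answering with JSON.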

I have this in config…

    - /config/tmp
    - /config/faces

@masterkenobi Looks like the deepstack container is not available…?

Did you sort the file path? Did you manage to get it to work? I have the same issue as you and the same set up.

No, I never did

Hi everyone. I managed to get it up and running. I think the issue was the port number, because ports in the 5xxx range are used by Synology services. All I needed to do to fix it was use a different port number, as such…
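(A hypothetical illustration of such a remapped-port setup — the port, host, and camera entity are example values, not my actual config. The `port` here must match whatever host port you mapped the DeepStack container's internal port 5000 to:)

```yaml
image_processing:
  - platform: deepstack_face
    ip_address: localhost
    port: 8383          # any port that doesn't clash with Synology's 5xxx range
    source:
      - entity_id: camera.local_file
```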

I also managed to teach my face using Deepstack. However, I am not sure how to teach it with multiple photos. Do I run the image_processing.deepstack_teach_face service multiple times with different photos?

Also, I am wondering how I can exclude certain areas of the camera image, like in Tensorflow?

@masterkenobi OK good to know it was just a port conflict. And yes, each image requires a service call, I don’t yet have a script for batch training. To pre-process an image use https://www.home-assistant.io/components/camera.proxy/
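(Since each image needs its own service call, a batch-teach loop can be scripted against Home Assistant's REST API. A minimal sketch — the URL, token, and folder layout are placeholders, and `build_teach_calls`/`teach_folder` are hypothetical helper names, not part of the component:)

```python
import os
import requests

HA_URL = "http://localhost:8123"          # your Home Assistant instance
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # placeholder: create one in your HA profile

def build_teach_calls(name, folder, file_names):
    """Build one (service_path, payload) pair per image file."""
    service = "/api/services/image_processing/deepstack_teach_face"
    return [
        (service, {"name": name, "file_path": os.path.join(folder, f)})
        for f in sorted(file_names)
    ]

def teach_folder(name, folder):
    """Fire deepstack_teach_face once per image in `folder`."""
    headers = {"Authorization": "Bearer " + TOKEN}
    for path, payload in build_teach_calls(name, folder, os.listdir(folder)):
        requests.post(HA_URL + path, headers=headers, json=payload, timeout=30)
```

The file paths must of course be whitelisted, as discussed earlier in the thread.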

1 Like

Thanks for the pointers.

May I know whether face recognition quality will improve if I use more samples for each face?

1 Like


I have this automation. It works with the Facebox component, but it no longer works with Deepstack.

  - alias: 'Turn on who is at Couch Input Boolean'
    initial_state: on
    trigger:
      platform: event
      event_type: image_processing.detect_face
      event_data:
        entity_id: 'image_processing.face_cam_couch'
    condition:
      condition: template
      value_template: "{{ trigger.event.data.confidence|float > 50 }}"
    action:
      - service: input_boolean.turn_on
        data_template:
          entity_id: >-
            {%- set name = trigger.event.data.name -%}
            {%- if name == 'Ben' -%}
            {%- elif name == 'Leia' -%}
            {%- elif name == 'Luke' -%}
            {%- elif name == 'Hans' -%}
            {%- elif name == 'Chewy' -%}
            {%- endif -%}

My ports are all fine; I get the same issues as @Darbos

@masterkenobi deepstack is not firing image_processing.detect_face events (yet)


OK, this is not working for me, I'm afraid. I give up on this

I have tried this component for two days, and while the setup is much simpler than Facebox or Tensorflow, the most important aspect, accuracy, is lacking. Face recognition is more accurate in Facebox, and person detection is better in Tensorflow. On top of that, it seems to consume more CPU power and RAM than the others.

1 Like

@masterkenobi would be interested to see some of your comparisons, i.e. images which were/weren’t well classified

1 Like

I’m sorry, I’m unable to share the images for privacy reasons. Maybe I can explain a bit further…

For Face Recognition:

  • I have both components (Deepstack and Facebox) running, with the same automation triggering the image_processing.scan service. The camera sits below my TV (around 1 metre from the floor), pointed at my couch. Facebox is more sensitive: it recognises faces when Deepstack fails to detect any. Even when Deepstack does detect a face, the confidence value is below 50%, whereas Facebox reports better confidence (more than 70%).
  • However, the problem with Facebox is that it tags a face that was never taught as one of the taught faces, which can be quite embarrassing when I have guests.

For person detection:

  • Similar to the above, I have both components (Deepstack and Tensorflow) running with the same automation triggering the image_processing.scan service. Person detection in Deepstack makes more mistakes than Tensorflow. For example, with only one person in the room, Deepstack detected 2 persons while Tensorflow correctly detected 1.
1 Like

Deepstack and Facebox both use state-of-the-art models. You should experiment with the confidence level at which you trigger ‘detections’
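(For example, raising the threshold in the automation condition is a quick experiment — the 80 here is an arbitrary starting point, not a recommendation:)

```yaml
condition:
  condition: template
  value_template: "{{ trigger.event.data.confidence | float > 80 }}"
```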

1 Like