Face and person detection with Deepstack - local and free!

What’s important to appreciate is that there is a tradeoff between model size and accuracy: a more accurate model will require more RAM and take longer to process. Also, TensorFlow lets you use any model you want, so I’m not sure what you would be comparing.
Cheers

2 Likes

Thanks for clarifying. You are right, it depends on the use case. I will give it a try on my hardware and see if it detects persons faster, and if recognition is more precise in terms of confidence. I definitely like the simple fact that it can be run in a separate Docker container, so it may be a better fit in my case.
I really appreciate your work and the fantastic components I am using on a daily basis!

2 Likes

@BenDiss and all, I just published v0.4, which adds a service deepstack_teach_face so that you can teach (register) faces with Deepstack from the HA UI. Any feedback, let me know.
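
For reference, the service data is just a name and a file path, along these lines (the name and image path below are placeholders for a face and a file on your own system):

{
  "name": "superman",
  "file_path": "/config/www/superman_1.jpg"
}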

3 Likes

When using the face detection (0.4) I get the following error:

https://github.com/robmarkcole/HASS-Deepstack/issues/5

Hmm, strange. Please open an issue on the GitHub repo.

1 Like

It’s working for me. Thanks by the way, I have tried most of your great components. This one is nice and easy and fast, I dig it. I’m trying to teach it some faces but not having any luck; it’s probably the path.

I run HA in Docker on Ubuntu, and I have a file located here: ‘/home/homeassistant/.homeassistant/www/d2.jpg’

‘.homeassistant’ is my config folder according to Docker, so when I call files in other automations or scripts, ‘/local/example.png’ works fine.

Examples:

{
  "name": "Batman",
  "file_path": "/local/d2.jpg"
}
{
  "name": "Batman",
  "file_path": "http://ipaddress:port/local/d2.jpg"  ##Ive tested and this does pull up the image in a browser.
}
{
  "name": "Batman",
  "file_path": "/home/homeasssitant/.homeassistant/d2.jpg"
}

I don’t see anything in the logs about it either.

Any help is appreciated!

1 Like

Thanks for adding support for face recognition. Will try it out soon. By the way, it says I can use multiple pictures to teach a single face. But I couldn’t find any example format for this. Any pointer?

1 Like

Can you show an example of the path you used to teach?

1 Like

I’m on Mac and use /Users/robincole/.homeassistant/images/img.jpg. Likewise, I don’t get any errors with .homeassistant/images/superman_1.jpeg, but nothing happens, and I don’t understand why, as the component checks that the file path is valid. I suggest you use absolute paths, and also add them to your whitelist. I have:

  whitelist_external_dirs:
    - /Users/robincole
    - /Users/robincole/.homeassistant/images/
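
The teach call then uses that absolute path in the service data, e.g. (this is just my setup; substitute your own name and image):

{
  "name": "superman",
  "file_path": "/Users/robincole/.homeassistant/images/superman_1.jpeg"
}
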
1 Like

Good call. I’ll try that, thanks

1 Like

This sounds cool

Do you have to teach it to recognise the back of someone’s head to get presence detection working when someone leaves the house? :wink:

3 Likes

My HA runs in Docker on my Synology. I also installed Deepstack there in Docker.

When I try to run the image_processing.deepstack_teach_face service in HA with this service data…

{
  "name": "Ben",
  "file_path": "/config/faces/Ben.jpg"
}

…I received this error in the log:

2019-01-22 23:24:22 ERROR (MainThread) [homeassistant.components.websocket_api.http.connection.140244849846424] Error handling message: {'type': 'call_service', 'domain': 'image_processing', 'service': 'deepstack_teach_face', 'service_data': {'name': 'Ben', 'file_path': '/config/faces/Ben.jpg'}, 'id': 20}
Traceback (most recent call last):
  File "/usr/src/app/homeassistant/components/websocket_api/decorators.py", line 17, in _handle_async_response
    await func(hass, connection, msg)
  File "/usr/src/app/homeassistant/components/websocket_api/commands.py", line 148, in handle_call_service
    connection.context(msg))
  File "/usr/src/app/homeassistant/core.py", line 1121, in async_call
    self._execute_service(handler, service_call))
  File "/usr/src/app/homeassistant/core.py", line 1145, in _execute_service
    await self._hass.async_add_executor_job(handler.func, service_call)
  File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/image_processing/deepstack_face.py", line 133, in service_handle
    classifier.teach(name, file_path)
  File "/config/custom_components/image_processing/deepstack_face.py", line 181, in teach
    self._url_register, name, file_path)
  File "/config/custom_components/image_processing/deepstack_face.py", line 88, in register_face
    _LOGGER.error("%s error : %s", CLASSIFIER, response.json())
  File "/usr/local/lib/python3.6/site-packages/requests/models.py", line 897, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/simplejson/__init__.py", line 518, in loads
    return _default_decoder.decode(s)
  File "/usr/local/lib/python3.6/site-packages/simplejson/decoder.py", line 370, in decode
    obj, end = self.raw_decode(s)
  File "/usr/local/lib/python3.6/site-packages/simplejson/decoder.py", line 400, in raw_decode
    return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

I have this in config…

  whitelist_external_dirs:
    - /config/tmp
    - /config/faces

@masterkenobi Looks like the deepstack container is not available…?

Did you sort out the file path? Did you manage to get it to work? I have the same issue as you and the same setup.

No, I never did.

Hi everyone, I managed to get it up and running. I think the issue is with the port number, because ports in the 5xxx range are all used by Synology. All I needed to do to fix it was use a different port number, as such…
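
For example, something along these lines (the IP, port and camera entity here are illustrative only; check the component README for the exact options):

image_processing:
  - platform: deepstack_face
    ip_address: 192.168.1.10   # machine running the Deepstack container
    port: 8000                 # any free host port that doesn't clash with Synology's 5xxx range
    source:
      - entity_id: camera.front_door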

I also managed to teach my face using Deepstack. However, I am not so sure how to teach using multiple photos. Do I run the image_processing.deepstack_teach_face service multiple times using different photos?

Also, I am wondering how to exclude a certain area of the camera image, like in TensorFlow?

@masterkenobi OK, good to know it was just a port conflict. And yes, each image requires a service call; I don’t yet have a script for batch training. To pre-process an image, use https://www.home-assistant.io/components/camera.proxy/
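
So to teach several photos of the same person, just call the service once per image, for example from a script like this (the script name and file paths are placeholders):

script:
  teach_ben_faces:
    sequence:
      - service: image_processing.deepstack_teach_face
        data:
          name: Ben                            # same name every time
          file_path: /config/faces/Ben_1.jpg
      - service: image_processing.deepstack_teach_face
        data:
          name: Ben
          file_path: /config/faces/Ben_2.jpg   # a different photo of the same face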

1 Like

thanks for the pointers.

May I know whether the face recognition quality will improve if I use more samples for each face?

1 Like

@robmarkcole,

I have this automation. It works with the Facebox component, but it doesn’t work with Deepstack.

  - alias: 'Turn on who is at Couch Input Boolean'
    initial_state: on
    trigger:
      platform: event
      event_type: image_processing.detect_face
      event_data:
        entity_id: 'image_processing.face_cam_couch'
    condition:
      condition: template
      value_template: "{{ trigger.event.data.confidence|float > 50 }}"
    action:
      - service: input_boolean.turn_on
        data_template:
          entity_id: >-
            {%- set name = trigger.event.data.name -%}
            {%- if name == 'Ben' -%}
              input_boolean.ben_at_couch
            {%- elif name == 'Leia' -%}
              input_boolean.leia_at_couch
            {%- elif name == 'Luke' -%}
              input_boolean.luke_at_couch
            {%- elif name == 'Hans' -%}
              input_boolean.hans_at_couch
            {%- elif name == 'Chewy' -%}
              input_boolean.chewy_at_couch
            {%- endif -%}

My ports are all fine, but I get the same issues as @Darbos.