Face detection with Docker/Machinebox

That’s correct Robin, the sensor.facebox_detection will only show a value (name) when the automation is triggered, the snapshot taken and facebox recognises the person. If it doesn’t recognise it, it returns unknown.


Hello all, I had stability issues with my Synology DS218+ (2 GB RAM) running Facebox. Rob pointed out that I had to upgrade the memory, so I added 8 GB for a total of 10 GB (officially not supported, but working perfectly; I can upgrade to 16 GB in time). So my stability issues are over, and face detection is working, love it. I made sensors for the recognized name, the confidence, and 'matched' in configuration.yaml (see below my post). One thing not working yet: I need to use it as a binary sensor. (Since I just came from Domoticz) I cannot get this simple thing working. Is there an easy way to add a binary sensor so I can easily trigger an event like my Danalock? (I will only use this for my kids for a few hours a day, safety first.) FYI, I checked GitHub and Hackster, but adjusting the script at the bottom for opening my door is way above my pay grade :wink:

Thanks,

Parsing name, id, and confidence:

```yaml
sensor:
  - platform: template
    sensors:
      facebox_detection:
        friendly_name: 'person at frontdoor'
        value_template: "{{ states.image_processing.facebox_live_view.attributes.faces[0]['name'] }}"
      name_id:
        friendly_name: 'name id'
        value_template: "{{ states.image_processing.facebox_live_view.attributes.faces[0]['id'] }}"
      matched:
        friendly_name: 'matched'
        value_template: "{{ states.image_processing.facebox_live_view.attributes.faces[0]['matched'] }}"
      predictability:
        friendly_name: 'predictability'
        unit_of_measurement: '%'
        value_template: "{{ states.image_processing.facebox_live_view.attributes.faces[0]['confidence'] | multiply(100) | float | round }}"
```

Voice announcement:

```yaml
- id: facebox_announcement
  alias: 'Facebox Announcement'
  initial_state: 'on'
  trigger:
    - platform: state
      entity_id: binary_sensor.entrance_motion
      to: 'on'
  action:
    - delay: '00:00:02'
    - service: camera.snapshot
      data:
        entity_id: camera.hass_tablet  # your doorbell camera
        filename: '/config/www/facebox/tmp/image.jpg'
    - delay: '00:00:01'
    - service: image_processing.scan
      entity_id: image_processing.facebox_saved_images
    - delay: '00:00:02'
    - service_template: "{% if states.sensor.facebox_detection.state != 'unknown' %}tts.google_say{% endif %}"
      data_template:
        entity_id: media_player.bluenano
        message: "{% if states.sensor.facebox_detection.state != 'unknown' %}{{ states('sensor.facebox_detection') }} is at the door{% endif %}"
    - service: media_player.volume_set
      data:
        entity_id: media_player.bluenano
        volume_level: 0.9
```

The way the Microsoft face identify component works is to fire events when faces are detected, and I think this is how I will develop Facebox.

I’ve also experimented with adding attributes for each face that is configured, so that there is always data for each face and a template sensor can be used to create a binary sensor for each face. However the detection of a face is only true at the time the image is captured, so I’m not sure that using attributes to create a binary sensor makes sense. For instance, if the sensor hadn’t been updated for 24 hours the binary would still display as ON, which is clearly not true.

I’m open to suggestions on how best to do this
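For what it's worth, if the event-firing approach lands, an automation could react to the event directly instead of polling a sensor. A rough sketch, borrowing the `image_processing.detect_face` event type that the Microsoft Face component fires (Facebox may end up using a different event type, and the event data keys here are assumptions):

```yaml
automation:
  - alias: 'React to a detected face'
    trigger:
      platform: event
      event_type: image_processing.detect_face
    condition:
      condition: template
      # assumes the event payload carries the recognized name
      value_template: "{{ trigger.event.data.name == 'obama' }}"
    action:
      service: switch.turn_on
      entity_id: switch.garden_light
```

This sidesteps the stale-binary-sensor problem, since an event only exists at the moment of detection.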

I only have history in Domoticz. In Domoticz you can also switch dummy sensors on for x minutes; then they switch off automatically. But firing events is perhaps the cleaner way to do it. No expert though.
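The Domoticz-style "on for x minutes, then auto off" can be approximated in Home Assistant with an `input_boolean` plus a delay in the automation. A minimal sketch (the entity names and the 5-minute window are assumptions):

```yaml
input_boolean:
  face_match_active:
    name: Face match active

automation:
  - alias: 'Face match with auto off'
    trigger:
      platform: template
      value_template: "{{ states('sensor.facebox_detection') != 'unknown' }}"
    action:
      - service: input_boolean.turn_on
        entity_id: input_boolean.face_match_active
      # Domoticz-style auto off after 5 minutes
      - delay: '00:05:00'
      - service: input_boolean.turn_off
        entity_id: input_boolean.face_match_active
```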

@kingofsnake, the sample configuration Robin posted earlier included a binary sensor.

this is the example:

```yaml
binary_sensor:
  - platform: command_line
    name: Facebox Loaded
    command: cat /config/sensors_components/x.sensors/x.facebox.txt | awk 'FNR==1 {print $2}' | sed 's/"//g'
    device_class: connectivity
    scan_interval: 5
    payload_on: "name"
    payload_off: "dead"
    delay_off: '00:05:00'
    delay_on: '00:05:00'
    value_template: '{%- if value == "name" -%} name {%- else -%} dead {%- endif -%}'
```

Thanks, I will take a look at it! And I will check whether the voice announcement can be made by Alexa.

In your case it would be something like

```yaml
binary_sensor:
  - platform: template
    sensors:
      facebox_match:
        device_class: connectivity
        value_template: "{{ states.sensor.facebox_detection.state != 'unknown' }}"
        delay_on: '00:05:00'
        delay_off: '00:05:00'
```

(Note: `scan_interval`, `payload_on` and `payload_off` only apply to the command_line platform, so they are dropped here; the template platform evaluates its `value_template` whenever the referenced entity changes.)

Tried using face recognition to unlock a door, both with a binary sensor and by triggering an event. Event triggering seems to be the best and most secure way. When face detection is activated, the values Facebox returns are used, and if the person is recognized an event is triggered; the example underneath switches on the garden lights. A few things I want to add to make it even better:

  • show the last ten faces (recognized and not recognized), including the stored pictures, in a window
  • adjust the % of probability/confidence required to trigger the event
  • announce the person on Alexa (or Google Home)

But I'm very happy this all works now! Thanks Juan and Rob for making this possible! What I like most is that no external communication is necessary, which makes it (or at least feels like :wink: ) a more secure way to use face recognition, in my opinion.

Automation entry

```yaml
- id: '1234567890123'
  alias: activate_face_obama
  trigger:
    platform: template
    value_template: "{% if is_state('sensor.facebox_detection', 'obama') %}true{% endif %}"
  action:
    service: switch.turn_on
    entity_id: switch.garden_light
```
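To cover the "adjust % of probability" wish, the same automation could gate on the confidence template sensor defined earlier in the thread. A hedged sketch (the 80% threshold and the `sensor.predictability` entity name are assumptions based on that earlier sensor config):

```yaml
- id: 'activate_face_obama_confident'
  alias: activate_face_obama_confident
  trigger:
    platform: template
    value_template: "{% if is_state('sensor.facebox_detection', 'obama') %}true{% endif %}"
  condition:
    # only fire when Facebox reports at least 80% confidence
    condition: numeric_state
    entity_id: sensor.predictability
    above: 80
  action:
    service: switch.turn_on
    entity_id: switch.garden_light
```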

configuration.yaml entry:

```yaml
binary_sensor:
  - platform: template
    sensors:
      facebox_match:
        device_class: connectivity
        value_template: "{% if is_state('sensor.facebox_detection', 'trump') %}true{% else %}false{% endif %}"
```

Can you try out the Microsoft face identify component to check you are happy with the way that approach works?

Re the last 10 faces, that can be achieved with an automation, I expect.
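For the "last ten faces" idea, one possible approach is to archive a timestamped snapshot whenever the detection sensor changes. A sketch, assuming the camera entity from the earlier announcement automation:

```yaml
- alias: 'Archive face snapshot'
  trigger:
    platform: state
    entity_id: sensor.facebox_detection
  action:
    service: camera.snapshot
    data_template:
      entity_id: camera.hass_tablet
      # timestamped filename so older captures are kept
      filename: "/config/www/facebox/history/{{ now().strftime('%Y%m%d_%H%M%S') }}_{{ states('sensor.facebox_detection') }}.jpg"
```

A gallery of the saved images would then be a front-end concern (e.g. a camera or picture card pointing at the folder).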

Maybe not the correct forum, but I really want this face detection to work :slight_smile:
When starting Facebox in docker-compose, I get the following error after the initial startup:

```
[ERROR]    post shutdown: accept tcp [::]:8080: use of closed network connection
[CRITICAL]    command exited during start up
command exited during start up
facebox exited with code 1
```

My Docker-compose looks like this:

```yaml
facebox:
  image: machinebox/facebox:latest
  container_name: facebox
  restart: unless-stopped
  ports:
    - 8080:8080
  volumes:
    - /home/$USER/docker_files/facebox:/facebox
  environment:
    - MB_KEY=<my registered key>
```

Port 8080 was being used by UniFi, but I moved that to another port, and when I run `sudo netstat -ap | grep 8080` I don't get any output, so the port does not appear to be in use anymore; I'm a little confused about how to verify this. I also tried changing the 8080 port in the mapping to something else, but that made no difference; I still get the exact same error. I would really like some hints on how to resolve this.

Why not just change the host port in the Facebox mapping?

For example

8020:8080

Yes, wouldn't that be nice :slight_smile: But as I wrote, this does not have any effect; Docker fails with the exact same error message.

Update: There's an issue registered on GitHub about the same problem I have, so I'll continue there. But of course, if anyone has any ideas, please share :smiley:

A question out of curiosity:
have you ever tried placing pictures of yourself in front of cameras that do face recognition? That would be a pretty obvious attack and a possible security issue in case someone plans on unlocking their front door with their face.
From what I have read so far, still images are used, hence a picture should work just fine to trigger the associated automations.
A great workaround would be if Facebox itself provided data on how different the previously recognized face is from the latest detection. When 100% identical, it can be assumed to be a picture. If there has been a slight change (a shifted angle of the face), then the face has moved and it's less likely to be a picture. Which, of course, you have no influence on; Facebox would have to do that somehow.
I just wanted to bring this issue to the attention of the readers of this topic. Maybe it should even be mentioned in the documentation of the component that no security-relevant automations should be triggered if such attacks seem possible in the use case the users have in mind.

I don't think the reliability is good enough (at least yet) to use it for opening/closing doors. I would definitely add an additional layer of security.

I got that error too and, as mentioned, figured out that my older CPU didn't have AVX support.

@danielperna84 I would advise that face detection should be complementary to other auth methods. An example use case: on entering a code to unlock my alarm panel, I must also be recognised on a camera image. This is more secure, but again could be spoofed. Since no method is 100% secure, it's always up to the user to decide what level of security/complexity they are happy with.
Cheers
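The layered approach described above could be sketched roughly like this, requiring a fresh face match at the moment the panel is disarmed (all entity names and the 30-second window are assumptions, not a definitive implementation):

```yaml
- alias: 'Unlock only with code plus face'
  trigger:
    platform: state
    entity_id: alarm_control_panel.home
    to: 'disarmed'
  condition:
    condition: template
    # require a known face seen within the last 30 seconds
    value_template: >
      {{ states('sensor.facebox_detection') != 'unknown'
         and (now() - states.sensor.facebox_detection.last_changed).total_seconds() < 30 }}
  action:
    service: lock.unlock
    entity_id: lock.front_door
```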

I'm curious, why the intermediate file camera? Why not just have the automation pull the current camera image? It seems it adds extra time to the recognition by making me add a delay while the file is written. What am I missing?

I'm not sure exactly which automation you are referring to (the Hackster article, I assume), but yes, you can classify on a regular camera feed.
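Pointing the Facebox image_processing platform straight at a camera entity avoids the intermediate file entirely. A minimal sketch (the IP address and camera entity are placeholders for your setup):

```yaml
image_processing:
  - platform: facebox
    ip_address: 192.168.1.10  # host running the Facebox container
    port: 8080
    source:
      - entity_id: camera.front_door
```

Calling `image_processing.scan` on this entity then classifies the camera's current frame directly.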

Thanks for responding, and all your work on this! I guess I was wondering if there was an advantage I haven’t thought of to using a file cam intermediate. Also, +1 to your idea to fire off events for recognitions. That would be awesome.
