Face detection with Docker/Machinebox

So I took a bunch of photos this morning using the Doorbird camera in good, bright light. I ran them through the teaching Python script and it came back with the following:

[/share/homeassistant/config/facebox] # sudo python teach_facebox.py
facebox health-check passed
Teaching of file:image2.jpeg failed with message:no faces detected
Teaching of file:image.jpg failed with message:no faces detected
Teaching of file:image3.jpeg failed with message:no faces detected
Teaching of file:image4.jpeg failed with message:no faces detected
Teaching of file:image5.jpeg failed with message:no faces detected
Teaching of file:image6.jpeg failed with message:no faces detected
Teaching of file:image7.jpeg failed with message:no faces detected
Teaching of file:image8.jpeg failed with message:no faces detected
Teaching of file:image9.jpeg failed with message:no faces detected
Teaching of file:image10.jpeg failed with message:no faces detected
Teaching of file:image11.jpeg failed with message:no faces detected
Teaching of file:image12.jpeg failed with message:no faces detected
[/share/homeassistant/config/facebox] #
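
For reference, the teaching step is essentially a multipart POST to Facebox. Something like the sketch below (this is not the actual teach_facebox.py; the host/port, filenames and form fields are assumptions based on the standard Facebox HTTP API):

# Minimal sketch of teaching Facebox over HTTP -- NOT the original teach_facebox.py.
# Assumes the box is exposed on localhost:8080 and the images are in the working directory.
import requests

FACEBOX_URL = "http://localhost:8080"

def check_health():
    """Return True if the box responds on its health endpoint."""
    resp = requests.get(f"{FACEBOX_URL}/healthz", timeout=5)
    return resp.status_code == 200

def teach_face(image_path, name):
    """POST one image to /facebox/teach and return the parsed JSON response."""
    with open(image_path, "rb") as img:
        resp = requests.post(
            f"{FACEBOX_URL}/facebox/teach",
            files={"file": img},
            data={"name": name, "id": image_path},
        )
    # The response carries a success flag plus an error message
    # such as "no faces detected" when teaching fails.
    return resp.json()

if __name__ == "__main__":
    if check_health():
        print("facebox health-check passed")
        print(teach_face("image10.jpeg", "Locky"))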

This is an example of one of the shots (attachment: image10).

All failing. Any ideas? The shot seems OK to me.

Also, when I try to use the live demo on the local Facebox webpage and link an image (including the holiday snaps that were accepted by the teaching script), I get the following error:

faceClient: box is not ready, try again later

@lockytaylor things to try:

  1. When training, crop to the face (see the sketch below)
  2. Train with images under a number of illumination conditions (there is quite a lot of glare on the face in that photo)
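
As a rough illustration of point 1, here is one way to auto-crop faces before teaching (a sketch only, assuming opencv-python is installed and using its bundled Haar cascade; the filenames are placeholders):

# Sketch: detect the largest face in an image and save a cropped copy for teaching.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def crop_largest_face(src_path, dst_path, margin=0.2):
    img = cv2.imread(src_path)
    if img is None:
        print(f"Could not read {src_path}")
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        print(f"No face found in {src_path}")
        return False
    # Pick the largest detection and pad it by `margin` on each side.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    pad_w, pad_h = int(w * margin), int(h * margin)
    x0, y0 = max(x - pad_w, 0), max(y - pad_h, 0)
    x1 = min(x + w + pad_w, img.shape[1])
    y1 = min(y + h + pad_h, img.shape[0])
    cv2.imwrite(dst_path, img[y0:y1, x0:x1])
    return True

crop_largest_face("image10.jpeg", "image10_cropped.jpeg")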

PS: that error message makes me suspicious
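
If the web demo keeps saying the box is not ready, it might be worth hitting the health endpoint directly and seeing what comes back (a quick sketch; the host/port mapping is an assumption, adjust to your Docker setup):

# Quick check of the Facebox health endpoint from the machine running it.
import requests

resp = requests.get("http://localhost:8080/healthz", timeout=5)
print(resp.status_code)
print(resp.text)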

Cheers


Will do with the cropping. I did try to get photos from different angles and with different light on my face. I will try some more images at different times of the day.

Yes, the error is a bit weird given it says it is ‘ready’ in the console log as well as on the Facebox webpage…

I am using the non-AVX version, as the J1900 doesn’t support AVX.

OK, tried it with cropping:

[/share/homeassistant/config/facebox] # sudo python teach_facebox.py
facebox health-check passed
Teaching of file:image2.jpeg failed with message:no faces detected
Teaching of file:image.jpg failed with message:no faces detected
Teaching of file:image3.jpeg failed with message:no faces detected
Teaching of file:image4.jpeg failed with message:no faces detected
Teaching of file:image5.jpeg failed with message:no faces detected
Teaching of file:image6.jpeg failed with message:no faces detected
Teaching of file:image7.jpeg failed with message:no faces detected
Teaching of file:image8.jpeg failed with message:no faces detected
Teaching of file:image9.jpeg failed with message:no faces detected
Teaching of file:image10.jpeg failed with message:no faces detected
Teaching of file:image11.jpeg failed with message:no faces detected
Teaching of file:image12.jpeg failed with message:no faces detected

It seemed to really zip through them too - failing all of them took only about 1-1.5 seconds?

I am thinking it is just the quality of the images… or something with the file format. If I use good-quality images, it teaches fine:

[/share/homeassistant/config/facebox] # sudo python teach_facebox.py
facebox health-check passed
Teaching of file:image2.jpeg failed with message:no faces detected
Teaching of file:image.jpg failed with message:no faces detected
File:locky 4.jpg taught with name:Locky
File:locky 2.jpg taught with name:Locky
Teaching of file:image3.jpeg failed with message:no faces detected
Teaching of file:image4.jpeg failed with message:no faces detected
Teaching of file:image5.jpeg failed with message:no faces detected
Teaching of file:image6.jpeg failed with message:no faces detected
Teaching of file:image7.jpeg failed with message:no faces detected
Teaching of file:image8.jpeg failed with message:no faces detected
Teaching of file:image9.jpeg failed with message:no faces detected
Teaching of file:image10.jpeg failed with message:no faces detected
Teaching of file:image11.jpeg failed with message:no faces detected
Teaching of file:image12.jpeg failed with message:no faces detected
Teaching of file:locky 3.jpg failed with message:multiple faces detected
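
To rule out the file-format theory, I might print each file's format, mode and resolution with Pillow (a quick untested sketch, assuming Pillow is installed and the snapshots are in the working directory):

# Sanity-check the failing camera snapshots: print format, colour mode and size.
import glob
from PIL import Image

for path in sorted(glob.glob("image*.jp*g")):
    with Image.open(path) as img:
        print(path, img.format, img.mode, img.size)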

Thanks @robmarkcole for your help with all this.

OK, so I tried your Deepstack integration instead, and that seems to work fine with the images.

However, on my J1900, which also runs all my home automation and other services, it takes about 10 seconds. I understand I can use the Intel NCS to speed this up, or is that only for a Pi?

I notice you have quite a bit of knowledge about these services and am after your advice.

What I am seeking to do is just facial recognition at this stage. I have RPi’s and the NAS - what do you suggest is the best software/hardware configuration to do this with a 1-2 second response time? Do you suggest using the NAS with the NCS, or running it on a separate Pi? Or should I use something like TensorFlow Lite, although it looks like that only does detection, not recognition?

I want to use facial recognition to 1) announce known faces at the door when there is a motion or doorbell-pressed event, and 2) use it as an extra authentication method for when I buy a smart door lock.

Any advice would be appreciated.

Thanks.

Having eliminated the images as the problem, my best guess is that:

  1. Either the training script is broken (unlikely but possible), or
  2. Your hardware is underpowered, possibly resulting in very long processing times, memory issues and/or timeouts (see the timing sketch below)
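
One way to test the hardware theory is to time a single detection call against the box directly, outside of Home Assistant. A rough sketch (assuming Facebox on localhost:8080, its /facebox/check endpoint, and a test image in the working directory; adjust to your setup):

# Time one Facebox detection call end-to-end to see how long the box itself takes.
import time
import requests

with open("image10.jpeg", "rb") as img:
    start = time.time()
    resp = requests.post(
        "http://localhost:8080/facebox/check",
        files={"file": img},
    )
    elapsed = time.time() - start

print(f"HTTP {resp.status_code} in {elapsed:.1f}s")
print(resp.json())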

An Intel NUC is popular with people; personally I am using an old MacBook.
Please jump to the Deepstack thread to discuss Deepstack further, as that is a much more active thread.
Cheers

When using

camera:
  - platform: local_file

together with

scan_interval: 10000

What could be the best way to prevent a detection from the last image after 10,000 seconds?
My idea is to update the file_path to a blank image after every detection automation.
(I am using the image_processing.detect_face event trigger, so the last image would always trigger it again otherwise.)

Or does anybody know a good approach with a condition?

For anyone interested, below is the answer to my question and the automation I am using it in.

This automation takes a few camera snapshots (three in this example, but it could be extended) and saves them for face detection.
After a few seconds, the second automation starts processing the images with Facebox.

I thought of reducing the code by using four scripts and looping over them with templating and a counter,
but I can’t get the logic right in my head, especially since the last code block with the blank image is different. In the end, all the additional counters and scripts needed would complicate things more.
I would really love some for loops in actions.

- alias: face detection take snapshots
  trigger:
    platform: state
    entity_id: binary_sensor.flur_eingangstur_75 # Entrance Door
    from: 'off'
    to: 'on'
  action:
# Motion sensor near camera, if needed. My camera is pointing straight at my door,
# so I only need a 2 sec delay to wait for the door to fully open
#    - wait_template: "{{ is_state('binary_sensor.flur_sensor_flur_26', 'on') }}"
#      timeout: '00:01:00'
    - delay: '00:00:02'
    - service: camera.snapshot
      data:
        entity_id: camera.ip_webcam
        filename: /config/www/facebox/tmp/image.jpg
    - delay: '00:00:02'
    - service: camera.snapshot
      data:
        entity_id: camera.ip_webcam
        filename: /config/www/facebox/tmp/image1.jpg
    - delay: '00:00:02'
    - service: camera.snapshot
      data:
        entity_id: camera.ip_webcam
        filename: /config/www/facebox/tmp/image2.jpg


- alias: face detection facebox scan images
  trigger:
    platform: state
    entity_id: binary_sensor.flur_eingangstur_75
    from: 'off'
    to: 'on'
  action:
#    - wait_template: "{{ is_state('binary_sensor.flur_sensor_flur_26', 'on') }}"
#      timeout: '00:01:00'
    - delay: '00:00:03' # wait for the first image
    - service: local_file.update_file_path
      data:
        entity_id: camera.saved_image
        file_path: /config/www/facebox/tmp/image.jpg
    - delay: '00:00:01' # wait for the file_path update
    - service: image_processing.scan
      entity_id: image_processing.facebox_saved_image
    - delay: '00:00:04' # time for facebox to process image
    - service: local_file.update_file_path
      data:
        entity_id: camera.saved_image
        file_path: /config/www/facebox/tmp/image1.jpg
    - delay: '00:00:01'
    - service: image_processing.scan
      entity_id: image_processing.facebox_saved_image
    - delay: '00:00:04'
    - service: local_file.update_file_path
      data:
        entity_id: camera.saved_image
        file_path: /config/www/facebox/tmp/image2.jpg
    - delay: '00:00:01'
    - service: image_processing.scan
      entity_id: image_processing.facebox_saved_image
# Always the last update_file_path block: load a blank image to prevent a detection after 10,000 sec with an old image
    - delay: '00:00:04'
    - service: local_file.update_file_path
      data:
        entity_id: camera.saved_image
        file_path: /config/www/facebox/tmp/blank.jpg
    - delay: '00:00:01'
    - service: image_processing.scan
      entity_id: image_processing.facebox_saved_image

# This example is for notification; the condition is important so it only triggers once when multiple images
# are scanned in a short time
- alias: Unknown recognised
  trigger:
    platform: event
    event_type: image_processing.detect_face
    event_data:
      matched: false
  condition:
    condition: template
    value_template: "{{ ((as_timestamp(now()) - as_timestamp(states.automation.unknown_recognised.attributes.last_triggered)) > 60 )}}"