Face detection with Docker/Machinebox

Hi all
The Facebox component released in 0.70 is for face detection and recognition using Machinebox Facebox. To use it, you just run the Facebox Docker container and configure the component. All processing is done locally on the machine hosting Docker, so there are no hassles associated with cloud services and no real installation to perform. The speed of recognition will depend on your hardware; on my Mac, recognition takes around 4.5 seconds. You can speed this up by disabling recognition and defaulting to detection only.
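A minimal configuration looks something like this (a sketch only; I'm assuming the default Facebox port of 8080 and a local_file camera as the source, so adjust the host, port and camera entity to your setup):

image_processing:
  - platform: facebox
    ip_address: localhost     # host running the Facebox container
    port: 8080                # default Facebox port
    source:
      - entity_id: camera.local_file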
Cheers

A script for teaching faces is here: https://github.com/robmarkcole/facebox_python

Optimising resources

Image-classifier components process the image from a camera at a fixed period given by scan_interval. This leads to excessive computation if the image on the camera hasn't changed (for example, if you are using a local_file camera to display an image captured by a motion-triggered system, and this doesn't change often). The default scan_interval is 10 seconds. You can override this by adding scan_interval: 10000 to your config (setting the interval to 10,000 seconds), and then call the image_processing.scan service when you actually want to process a camera image. In my setup, I use an automation to call scan when a new image is available, as in the sketch below.
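For example, with scan_interval: 10000 added to the facebox platform entry, an automation along these lines triggers the scan (an untested sketch; the motion sensor and the image_processing entity id are placeholders for whatever exists in your setup):

automation:
  - alias: Scan camera image when a new image is available
    trigger:
      - platform: state
        entity_id: binary_sensor.motion                    # hypothetical trigger, adjust to your setup
        to: 'on'
    action:
      - service: image_processing.scan
        entity_id: image_processing.facebox_local_file     # entity id depends on your camera name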

You can also reduce the time for face detection (counting the number of faces only) by setting the environment variable MB_FACEBOX_DISABLE_RECOGNITION=true (passed with -e) when you run the Docker container. As the variable name states, this disables facial recognition; in my experience, detection time is reduced by 50-75%. Note that the teach endpoint is not available when recognition is disabled.
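If you prefer docker-compose, something along these lines should work (a sketch assuming the machinebox/facebox image and the default port of 8080; MB_KEY is the key from your Machinebox account):

version: '3'
services:
  facebox:
    image: machinebox/facebox
    ports:
      - "8080:8080"
    environment:
      - MB_KEY=your_machinebox_key              # key from your Machinebox account
      - MB_FACEBOX_DISABLE_RECOGNITION=true     # detection only, no recognition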


Thanks for this. I will try to learn how Docker works in order to install it, and will try it on my machine.
The Jupyter notebook in your GitHub is gold!
Just a correction: your Facebox link is pointing to Fakebox. You might want to fix that.

Wow, sounds great. Will get back to you after I install this. Can't wait to trash OpenCV.


I'll look into this more this week, but I was wondering: how do you send pictures from a camera feed to Facebox?

The REST API. The component grabs the image from the configured camera entity and posts it to the Facebox REST API for you.

Would you mind sharing a sample automation?
For example:

If it's person1, then do something.
If it's unknown, then do something else?

The use case I have in mind is deactivating an alarm system: only allow disarm if person X is home. I haven't set this up myself yet, but it's just a case of adding a condition to an automation, something along the lines of the sketch below.
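A rough sketch of the kind of automation I have in mind (untested; it assumes the facebox platform fires the standard image_processing.detect_face event like the other face platforms, and the alarm entity is a placeholder):

automation:
  - alias: Disarm alarm when a known face is detected
    trigger:
      - platform: event
        event_type: image_processing.detect_face
        event_data:
          name: person1                                  # the name you taught Facebox
    action:
      - service: alarm_control_panel.alarm_disarm
        entity_id: alarm_control_panel.home_alarm        # hypothetical alarm entity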

OK, never mind, got it. Thank you so much for this, so far it works great.


If you want to teach Facebox from a directory of images, you can use the Python script here: https://github.com/robmarkcole/facebox_python

Rob, thank you for creating this. Looking for some guidance.
I got Docker running and tested it with some pictures. It recognises. Great!
The integration in HA needs a bit more clarity/guidance for wider use.
I added the component "facebox_face_detect", which in turn needs:
camera:

  - platform: local_file
    file_path: /tmp/image.jpg

If the intention is to get it to recognise a snapshot, how do I get the shot into file_path: /tmp/image.jpg, and how is it subsequently updated?
Thank you.

Hi Juan
this got merged and should be in HA 0.70. I've updated the code on GitHub to match, so please update to release v0.2.

Re updating a local_file camera: that will be possible in 0.69, which is out this weekend, I believe.
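In the meantime, one option is to have an automation write a snapshot from your real camera to that path using the camera.snapshot service (an untested sketch; the trigger and camera entity are placeholders, and you may need to whitelist /tmp with whitelist_external_dirs):

automation:
  - alias: Save snapshot for Facebox
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door_motion       # hypothetical motion sensor
        to: 'on'
    action:
      - service: camera.snapshot
        data:
          entity_id: camera.front_door                   # your real camera
          filename: /tmp/image.jpg                       # the path the local_file camera reads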
Cheers

Thank you Rob. I've updated to v0.2 (placed the image_processing folder inside custom_components), but the config check is showing "Platform not found: image_processing.facebox".
Any suggestions?

Hmm, make sure there are no cached files in the custom_components/image_processing dir and double-check your config. I recall there was some issue with custom_components, and I'm not sure if it has been resolved.

The other thing to try is to just delete your custom_components/image_processing dir and place facebox.py in the image_processing dir within Home Assistant.

This is the log:
https://hastebin.com/yidelozoqe.sql

I think this may be the issue: ImportError: cannot import name 'ImageProcessingFaceEntity'

Oh yes, I refactored the image_processing base component in 0.69. For now, just edit the imports in facebox.py as follows:

# OLD
from homeassistant.components.image_processing import (
    PLATFORM_SCHEMA, ImageProcessingFaceEntity, CONF_SOURCE, CONF_ENTITY_ID,
    CONF_NAME)

# NEW
from homeassistant.components.image_processing import (
    PLATFORM_SCHEMA, CONF_SOURCE, CONF_ENTITY_ID,
    CONF_NAME)
from homeassistant.components.image_processing.microsoft_face_identify import (
    ImageProcessingFaceEntity, ATTR_NAME, ATTR_AGE, ATTR_GENDER)

If this also gives errors, just wait 24 hours for 0.69.
Cheers

Thank you Rob. It works.
Now I will try to use it!


The link was pointing to Fakebox, not Facebox, lol.

Looking forward to trying this. I'm using dlib at the moment; how does this compare?

Many thanks!

Performance will depend on your system and on whether you require face recognition or not.

Cool. I'm using face recognition with dlib as a backup to disable an alarm, and also to give me a TTS overview of my day ahead in the morning when I enter my hallway.

It takes between 4 and 8 seconds to detect my face. Any best practices for the camera setup? At the moment I'm using the MJPEG component.