Face detection with Docker/Machinebox

OK, I’ll play with the templates. Out of interest, is the template view in Developer Tools ‘live’? Should I see changes there if a face is detected?

Yes, it’s live.

OK, so `{{ states.image_processing.faceid.attributes }}` shows me:

{'faces': [], 'total_faces': 0, 'matched_faces': {}, 'friendly_name': 'FaceID', 'device_class': 'face'}
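For reference, a template sensor can pull a single attribute out of that dict. A minimal sketch, assuming the same `image_processing.faceid` entity as above:

```yaml
sensor:
  - platform: template
    sensors:
      total_faces:
        friendly_name: "Total faces"
        # state_attr returns None until the entity has populated attributes
        value_template: "{{ state_attr('image_processing.faceid', 'total_faces') }}"
```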

but it’s not updating when a face is detected. I’m using my iPhone to check that a face is actually detected.

Please check your `entity_id`; that’s the only issue I can think of.

Checked and double-checked. Not sure what I’m doing wrong, but it’s not working for me.

If you put your full config on GitHub, I can take a look.

@robmarkcole, first, thanks for sharing this.
Can this be used on multiple cameras? I’m thinking of using this for zone presence detection, e.g.: John is in the kitchen.
I understand that hardware would need to be beefy enough to run several instances or analyzing more than a single source, but I would like to know whether this is even possible.

You can list multiple cameras in the config, and you only need one Facebox instance running. Clearly, the more processing you wish to perform, the more resources you require. @arsaboo is running Facebox on an Intel NUC with 32 GB RAM, so I’m interested to hear how that system performs.
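For anyone wondering what a multi-camera setup against a single Facebox instance looks like, here is a sketch. The IP address, port, and camera entity names are placeholders, not from the original thread:

```yaml
image_processing:
  - platform: facebox
    ip_address: 192.168.1.100  # one Facebox instance serves all sources
    port: 8080
    source:
      - entity_id: camera.kitchen
      - entity_id: camera.living_room
      - entity_id: camera.front_door
```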

By the way, you can also run Facebox in the cloud.


With a 2 GB RAM requirement, 32 GB of RAM should offer plenty of possibilities. How processor-intensive is it?
I’m thinking of running this on a Windows 10 desktop with a 2017 Core i7. There are six cameras in my house, and being able to use five would be great.

I’ve not done benchmarking myself, but it would be useful to collate stats.
I’ve added info on optimising resource usage to the top of this thread.

@robmarkcole, I have Hass.io running on a Raspberry Pi and Docker running on my Mac. I’m trying to follow your instructions for maintaining Facebox, but some of the shell script and cron commands don’t appear to be working for me. Has anyone else been able to replicate those steps?

Hope you can help.

@juan11perez might be able to help with those scripts

@Maaniac, can you be more specific? Which script?
First, does Robin’s teach.py work from the command line? If not, you may be missing the dependencies: `sudo apt install python-pip` and `pip install requests`.

I’m going to pull the integration for now, but I’ll work on it soon. Thanks for your help.

@juan11perez, following the instructions on Hackster, I cannot duplicate steps 2 and 4 because of the different file structures of Hass.io and the Docker container on my Mac.

`sensor.facebox_detection` isn’t working here. It always returns “unknown” :frowning:

(screenshot)

That’s intended behaviour. Check the logs, please.

Only when I restart HA:

(screenshot)

Scan result:

Here’s my config:

(screenshots)

The image processing entity is initialised with no data, and this is causing the problem for your template sensor. @juan11perez, I assumed the template would update when data becomes available?
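One way to work around the empty initial state is to give the template a fallback so it never reads “unknown”. A sketch, assuming an entity named `image_processing.faceid` with a `matched_faces` attribute as shown earlier in the thread:

```yaml
sensor:
  - platform: template
    sensors:
      facebox_detection:
        value_template: >-
          {% set faces = state_attr('image_processing.faceid', 'matched_faces') %}
          {% if faces %}
            {{ faces.keys() | join(', ') }}
          {% else %}
            none
          {% endif %}
```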


@Maaniac, first, apologies for the delay. I’ll try to explain what I did.
I’m running everything on one machine. In order to use the host command line to query the Facebox container, I had to create a bash script that generates a text file, which essentially returns the name of the JPG you used to teach it. For instance, I taught Facebox who Juan is with a juan.jpg. I use a curl command to confirm juan.jpg is still in Facebox; if it is, the script populates a text file with the name “juan”. I then use that output for a binary sensor: if the file contains “juan”, the sensor is on.

So in your case, you’d need your RPi to SSH into your Mac to extract the output. Something like this for the binary sensor section:
`command: ssh -l Mac 192.168.1.XXX "cat /wherever/this/is/x.facebox.txt | awk 'FNR==1 {print \$2}' | sed 's/\"//g'"`
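To illustrate the extraction step on its own, here is the same awk/sed pipeline run locally. The file path and contents are made up for the example:

```shell
# Simulate the text file the bash script writes, then extract the name
# exactly the way the binary sensor command does: take field 2 of the
# first line and strip any double quotes.
printf 'name: "juan"\n' > /tmp/x.facebox.txt
cat /tmp/x.facebox.txt | awk 'FNR==1 {print $2}' | sed 's/"//g'
# prints: juan
```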

Once the sensor is working, the automation in point 4 has the trigger.
