Face detection with Docker/Machinebox

May I ask what Machinebox/Facebox is used for on an HA setup? It seems to be very heavy on resources to run. Are there no lighter alternatives?

Face detection/recognition. It's using a deep learning model; they are all similar in size and resource usage. You could use something like an OpenCV Haar cascade, but it's less accurate.
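
For reference, a minimal Haar cascade sketch with OpenCV looks roughly like this (the snapshot filename is just a placeholder; the cascade XML ships with the opencv-python package):

# Minimal Haar-cascade face detection sketch using OpenCV (cv2).
# "snapshot.jpg" is a placeholder - point it at your own image.
import cv2

# Frontal-face cascade bundled with opencv-python
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("snapshot.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns a list of (x, y, w, h) rectangles
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("Detected {} face(s)".format(len(faces)))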

I use the Xeoma platform, which is very accurate. The face recognition is very precise, and with the new version it can also identify human emotions from the picture.
Added a new “Object Recognizer” module. The module can automatically classify objects in the camera's field of view (person, car, animal, bird, motorcycle) and respond to - or ignore - selected types of objects. “Object Recognizer” is ideal in environments where movement is expected in the field of view and you need the system to respond (or not respond) only to a certain type of object.
The system can be run on an RPi with 3-4 cameras and has loads of modules. The cost is a one-time fee of $44 for a one-camera Pro licence if you want all the modules. www.felenasoft.com/xeoma/en/


Awesome. Will look into this. Thanks.
Also, can it work on a spare RPi 2 or RPi Zero I have sitting idle?

I would recommend giving it a try before paying anything. I doubt performance will be great, and a Pi Zero will probably be terrible, if it runs at all. I'd be pleased to hear otherwise, mind you.

It works on an RPi 3. Before I moved to a NUC I controlled 5 cameras with an RPi: number plates, faces, motion, smoke, sound. But of course the RPi is limited. For me it is the perfect solution with the NUC; I don't want any internet dependency for my HA or security system.

OK, a couple of people have asked how they can download their state file after training Facebox.

Once you have trained facebox you can download the state file using:

curl -o state.facebox http://localhost:8080/facebox/state

If you restart facebox and lose the state, you can upload your saved state file using:

curl -X POST -F 'state=@state.facebox' http://localhost:8080/facebox/state

I think you can do that via the web UI as well, right?

Yes, but you could also use a shell command with curl.
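
For example, a rough shell_command sketch to wrap those curl calls in Home Assistant (the file path under /config and the command names are my own assumptions, adjust to taste):

shell_command:
  # Assumed path and names - back up / restore the Facebox state file
  facebox_backup_state: 'curl -o /config/facebox/state.facebox http://localhost:8080/facebox/state'
  facebox_restore_state: "curl -X POST -F 'state=@/config/facebox/state.facebox' http://localhost:8080/facebox/state"

These can then be called from an automation or script as shell_command.facebox_backup_state and shell_command.facebox_restore_state.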

Considering the Portainer add-on has been released for Hass.io, could installation be as simple as adding a container through Portainer on an RPi 2B Hass.io install?
I don’t mind waiting a few seconds for facial recognition anyway.

Trying to figure out the easiest way to try out facial recognition

Facebox won't run on a Pi; it requires more RAM. You could try a cloud host, e.g. Azure or GCP.

Ok. Thanks a lot!

I've got a 4-core NUC with 8 GB RAM and a 120 GB SSD. I tried installing Facebox using this image: machinebox/facebox #facebox_noavx. It gives me an error saying there is not enough RAM/memory and Facebox cannot start.

Any help please?

thanks.

Have this up and running (thanks!) but a quick query:
Is it possible to display the image with the face outlined, % confidence and the detected name?

I'd also like to send a Telegram notification with the pic outlined as above.

Check out the docs for TensorFlow object detection and others to find scripts for adding bounding boxes to an image. We need to add this capability to the platform.
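
As a rough sketch of what such a script can do, assuming the raw Facebox /facebox/check response shape (each entry in faces carrying a rect with top/left/width/height in pixels, plus name and confidence; the file names and the helper are placeholders):

# Sketch: overlay Facebox-style bounding boxes and labels with Pillow.
# Assumes faces like {"name": "...", "confidence": 0.9,
#                     "rect": {"top": .., "left": .., "width": .., "height": ..}}
from PIL import Image, ImageDraw

def draw_faces(image_path, faces, out_path):
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for face in faces:
        rect = face["rect"]
        box = (rect["left"], rect["top"],
               rect["left"] + rect["width"], rect["top"] + rect["height"])
        label = "{} {:.0f}%".format(face.get("name", "unknown"),
                                    100 * face.get("confidence", 0))
        draw.rectangle(box, outline="red", width=3)
        draw.text((box[0], box[1] - 12), label, fill="red")
    img.save(out_path)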


Thanks @robmarkcole, will do.

Have another question about my automation script.
The notify.telegram action sends the picture with a timestamp caption but does not include the message element.

Wondering if any of the more experienced users can spot an issue with it?
I have checked the docs and other notify examples, and I believe I have it correct.

- id: facebox_announcement
  alias: 'Facebox Announcement'
  initial_state: on
  trigger:
    platform: event
    event_type: doorbird_gate_motion
  action:
  - delay: 00:00:02
  - service: camera.snapshot
    data:
      entity_id: camera.gate_live
      filename: '/config/www/facebox/tmp/image.jpg'
  - delay: 00:00:01
  - service: image_processing.scan
    entity_id: image_processing.facebox_saved_image
  - delay: 00:00:02
  - service: media_player.volume_set
    data:
      entity_id: media_player.kitchen_echo_dot
      volume_level: 0.9
  - service_template: '{% if states.sensor.facebox_detection.state != "unknown" %} media_player.alexa_tts {% endif %}'
    data_template:
      entity_id: media_player.kitchen_echo_dot
      message: '{% if states.sensor.facebox_detection.state != "unknown" %}  {{ states("sensor.facebox_detection") }} is at the door {% else %} {% endif %}'
  - service: notify.telegram
    data:
      message: '{% if states.sensor.facebox_detection.state != "unknown" %}  Facebox triggered. {{ states("sensor.facebox_detection") }} is at the door {% else %} Facebox triggered. {% endif %}'
      data:
        photo:
          file: '/config/www/facebox/tmp/image.jpg'
          caption: '{{now().strftime("%d.%m.%Y-%H:%M:%S")}}'
  - service: media_player.volume_set
    data:
      entity_id: media_player.kitchen_echo_dot
      volume_level: 0.5
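
One thing that might be worth trying, on the assumption that the Telegram notifier only shows the photo caption and drops the message when a photo is attached, is moving the templated text into the caption itself (just a sketch, reusing the entities above):

  - service: notify.telegram
    data_template:
      message: 'Facebox triggered'
      data:
        photo:
          file: '/config/www/facebox/tmp/image.jpg'
          caption: >-
            {% if states("sensor.facebox_detection") != "unknown" %}
              {{ states("sensor.facebox_detection") }} is at the door ({{ now().strftime("%d.%m.%Y-%H:%M:%S") }})
            {% else %}
              Facebox triggered ({{ now().strftime("%d.%m.%Y-%H:%M:%S") }})
            {% endif %}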

Hi there!

So, I’ve been playing with facebox/cameras/HA for days now and not all is going well…
Here is my setup:

Synology with docker running:

  • Home assistant
  • node-red
  • Facebox
  • MQTT (not used for this)

Config.yaml:

camera:
  - platform: synology
    url: https://IP-adress-deleted:5002
    username: !secret syno-name
    password: !secret syno-pwd
    verify_ssl: false
  - platform: local_file
    name: Saved Image
    file_path: /config/tmp/balcony.jpg
image_processing:
  - platform: facebox
    scan_interval: 10000
    ip_address: 127.0.0.1
    port: 8080
    source:
      - entity_id: camera.local_file

The connection to the Synology cameras and the local camera is working (I can get the entities displayed just fine).

sensors.yaml

  - platform: template
    sensors:
      facebox_detection:
        friendly_name: 'Facebox Detection'
        value_template: '{{ states.image_processing.facebox_saved_image.attributes.faces[0]["name"].title}}'

Since I have the entity camera.saved_image, I changed facebox_saved_images to facebox_saved_image here, and this entity DOES indeed show the latest snapshot after motion is detected.
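
(A slightly more defensive variant of the same template sensor, guarding against an empty faces list instead of indexing faces[0] directly, would look roughly like this; sketch only:)

  - platform: template
    sensors:
      facebox_detection:
        friendly_name: 'Facebox Detection'
        value_template: >-
          {% set faces = state_attr('image_processing.facebox_saved_image', 'faces') %}
          {% if faces %}{{ faces[0]['name'] | title }}{% else %}unknown{% endif %}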

And last - automations.yaml:

- id: '1552944284375'
  alias: SS Balcony webhook
  trigger:
  - platform: webhook
    webhook_id: motion_balcony_hook
  condition: []
  action:
  - data:
      entity_id: camera.terasa2
      filename: /config/tmp/balcony.jpg
    service: camera.snapshot
  - data:
      entity_id: image_processing.facebox_saved_image
    service: image_processing.scan

And this automation does work as intended (I also had a notification here, so yes, the snapshot is triggered just fine using Node-RED; tested via an HTTP call too).
Facebox is nicely set up (I think): I can check it via the web interface with a POST command on a snapshot and it works OK, recognising a face if there is one. BUT the automation does nothing.

I frequently get:

2019-03-27 05:18:23 ERROR (MainThread) [homeassistant.components.image_processing] Error on receive image from entity: Camera not found
2019-03-27 08:05:03 ERROR (MainThread) [homeassistant.components.image_processing] Error on receive image from entity: Camera not found

sensor.facebox_detection is always unknown
image_processing.facebox_local_file is always 0

Any ideas? Sorry for a long post!!!

Do you mind sharing your Node-RED flow? I've been struggling to get this to work in Node-RED.

Thanks

Here is a copy of it…
It uses Synology to create a webhook based on which camera has motion.
But since 0.90.x I'm also receiving new errors that the webhooks are not registered…

[{"id":"3a3710e2.b0ec4","type":"tab","label":"Synology webhook","disabled":false,"info":""},{"id":"6031b9f1.decbb8","type":"http in","z":"3a3710e2.b0ec4","name":"","url":"/synology_flows/:webhook","method":"get","upload":false,"swaggerDoc":"","x":176.5,"y":119.99999904632568,"wires":[["11e82352.ac456d"]]},{"id":"89f6a2be.82e43","type":"http request","z":"3a3710e2.b0ec4","name":"","method":"use","ret":"txt","paytoqs":false,"url":"https://PRIVATE:8123/api/webhook/{{req.params.webhook}}","tls":"b6c11d45.7c62c","proxy":"","x":608.500072479248,"y":408.99999713897705,"wires":[["30c1cbb2.dc86a4"]]},{"id":"30c1cbb2.dc86a4","type":"http response","z":"3a3710e2.b0ec4","name":"","statusCode":"","headers":{},"x":830.5000419616699,"y":539.0000066757202,"wires":[]},{"id":"11e82352.ac456d","type":"function","z":"3a3710e2.b0ec4","name":"","func":"msg.payload = {};\nmsg.payload = \"{}\";\nmsg.headers = {};\nmsg.headers['Content-Type'] = 'application/json';\nmsg.method = \"POST\"\n\nreturn msg;","outputs":1,"noerr":0,"x":409.5000190734863,"y":242.99999904632568,"wires":[["89f6a2be.82e43"]]},{"id":"b6c11d45.7c62c","type":"tls-config","z":"","name":"","cert":"","key":"","ca":"","certname":"","keyname":"","caname":"","servername":"","verifyservercert":true}]
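
(Side note: Home Assistant webhook triggers are fired with a plain HTTP POST to the webhook URL, which is what the flow above forwards, so a manual test can be as simple as the following, with the host as a placeholder:)

curl -X POST https://YOUR_HA_HOST:8123/api/webhook/motion_balcony_hook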

Sorry again, I'm having trouble importing your flow. :slight_smile: