Amazon Face Recognition - ecosystem

:movie_camera: Video demo:


Show & Tell – Amazon Face Recognition (AWS Rekognition)

Hi everyone,
I’d like to share a project I’ve been working on called Amazon Face Recognition.

It’s a complete recognition ecosystem for Home Assistant, built around AWS Rekognition and designed to be fully managed from the Home Assistant UI.

This is not meant to be just a face recognition add-on, but a cohesive system where backend logic, configuration and visualization all work together.


What it does

At a high level, the integration provides:

  • Face recognition using AWS Rekognition Face Collections
  • A visual gallery of all detections with annotated snapshots
  • A UI-driven workflow to manage faces and training images
  • A Region of Interest (ROI) editor to analyze only relevant parts of the image
  • Optional vehicle and license plate recognition
  • Sensors and attributes ready for automations
  • Built-in usage and cost tracking

Everything is handled directly inside Home Assistant.
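To give a concrete idea of what the ROI feature means in practice: a region of interest defined as fractions of the frame can be converted to a pixel crop box before the snapshot is sent to AWS, so only the relevant area is analyzed. This is a hedged sketch of the concept, not the integration's actual code; the function name and the fractional (x, y, w, h) convention are my assumptions.

```python
def roi_to_pixels(roi, width, height):
    """Convert a fractional ROI (x, y, w, h, each in 0..1) to a pixel
    crop box (left, top, right, bottom), e.g. for PIL's Image.crop()."""
    x, y, w, h = roi
    left = int(x * width)
    top = int(y * height)
    right = int((x + w) * width)
    bottom = int((y + h) * height)
    return left, top, right, bottom

# Example: analyze only the right half of a 1920x1080 frame
box = roi_to_pixels((0.5, 0.0, 0.5, 1.0), 1920, 1080)
print(box)  # (960, 0, 1920, 1080)
```

Cropping before upload also means less image data leaves your network and fewer irrelevant detections come back.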


How it’s structured

The project is composed of three tightly integrated parts:

  • Custom integration
    Handles image processing, AWS communication, sensors and events.
  • Custom Control Panel
    Used to manage faces, training images, ROIs, gallery, plates and synchronization.
    No YAML configuration, no manual service calls.
  • Built-in Lovelace Card
    Automatically available after installation, providing an interactive (read-only) viewer for detection results in dashboards.

The control panel is the source of truth.
The card and sensors always reflect the same backend data.
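For anyone curious what the backend's AWS communication roughly looks like: a face search against a Rekognition Face Collection is a single `SearchFacesByImage` call. The sketch below only assembles the request parameters (the helper name and default values are my assumptions, not the integration's actual code); the commented lines show the corresponding boto3 call.

```python
def build_face_search_params(collection_id, image_bytes,
                             threshold=90.0, max_faces=5):
    """Assemble kwargs for Rekognition's SearchFacesByImage API."""
    return {
        "CollectionId": collection_id,
        "Image": {"Bytes": image_bytes},
        "FaceMatchThreshold": threshold,
        "MaxFaces": max_faces,
    }

params = build_face_search_params("home-faces", b"<jpeg bytes>")
# With boto3 (requires AWS credentials and a populated collection):
# import boto3
# client = boto3.client("rekognition")
# response = client.search_faces_by_image(**params)
# matches = response["FaceMatches"]
print(sorted(params))
```

The integration wraps this kind of call and surfaces the results as sensors, events and gallery entries, so you never touch the API directly.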


Why AWS Rekognition

I chose AWS Rekognition mainly for:

  • reliability and accuracy
  • predictable pricing
  • no local GPU requirements
  • easy scaling

ROI support and filtering options help keep AWS usage and costs under control.
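To make "predictable pricing" concrete: the spend for an image-analysis API is essentially calls × per-image rate, which makes budgeting straightforward. A minimal sketch (the rate below is a placeholder parameter, not a quoted AWS price; check the current Rekognition pricing page for real numbers):

```python
def monthly_cost(calls_per_day, price_per_image, days=30):
    """Estimate monthly spend for a per-image-priced API."""
    return calls_per_day * days * price_per_image

# Hypothetical: 200 analyzed snapshots/day at $0.001 per image
print(f"${monthly_cost(200, 0.001):.2f}")  # $6.00
```

This is also why ROI and motion-based filtering matter: every snapshot you avoid sending is a call you don't pay for.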


Home Assistant–first approach

Some design choices I cared about:

  • no legacy YAML configuration
  • no external scripts or CLI tools
  • no manual JavaScript resources to add
  • everything configured via UI
  • multilingual UI (follows Home Assistant language)

The goal was to make it feel like a native Home Assistant experience, not an external tool glued on top.


Installation & documentation

The integration is installed via HACS and works immediately after restart:

  • control panel available automatically
  • Lovelace card auto-registered
  • no extra setup steps beyond AWS credentials

Full documentation, screenshots and detailed explanations are available in the GitHub Wiki.


This is still an evolving project and I’d really appreciate:

  • feedback on usability
  • real-world testing
  • suggestions or ideas for improvements

If you’re interested in a UI-driven, cloud-based recognition system for Home Assistant, feel free to check it out and share your thoughts.


You should add links to this.

This is pretty cool. I’ve got it installed and running. I’m a big fan of Rekognition; it works really well.

So many questions and possibilities with this…

  1. Do you add multiple cards? (e.g., 1 per camera)
  2. Is the intended use case to have a camera trigger motion and then your app “looks” at the camera and does recognition on what it sees?
  3. Can you add a service where you can upload an image and have it recognized, versus taking a camera snapshot?

I use Frigate for recording/detection, and I can see a use case for pairing it with face recognition. Frigate’s built-in recognition works OK, but having something like this as a frontend to AWS would be really cool. I can see using Node-RED to automate grabbing the snapshot Frigate takes, sending it to your service, and labeling it in Frigate.

The card is designed to be a centralized reader. You can use multiple cameras with the service, and all the images will end up on the card, sorted from newest to oldest. Furthermore, when you scroll through images and want to view the stream, the card will display the video from the camera the photo came from. Do your cameras have a trigger? Does your trigger come from Frigate? In that case, you could create a “local” camera that reads the file generated by Frigate (I don’t use Frigate, so I might be a bit sloppy here, but it works; I used several “local” cameras to test the component in different scenarios).

Let me know if this is a viable option. Otherwise, next week I’ll try to implement image uploading, even though it would then throw the entire design philosophy of the custom card into disarray.

Thx for the response. Yes, I do have an MQTT trigger for a person event, and Frigate is able to provide an image of the person in the event. My thought was to send that image to your integration. Interesting thought on creating a “local” camera that reads the image. I’ll google how to do that; in the meantime, if you have guidance on how to do it, it would be greatly appreciated.

It’s really simple: go to Devices & Services → + Add Integration and search for “local”; enter the name of your camera and, below it, the path to your image file.

Been using this for a bit, and one downside is opening the camera live view from the card: it opens the local “camera”, which is a static image. I think an excellent enhancement would be a service that lets you send an image and have it associated with a camera. This would make your add-on instantly more valuable. Just a thought. Thx.

I don’t understand. Right now, when you get the notification, you click through to the custom card so you can see the scan results. If you need to, you can click the “live” button and it displays the camera stream so you can see the situation in real time. What improvements do you propose to this usage flow?