Integrating Blue Iris, Pushover, Node-RED, and Amazon Rekognition

If you don’t see bounding boxes with “person” or “car” labels around objects in your notifications, you may want to read this.

The original how-to sends the pre-Rekognition image to Pushover (for iOS notifications, read on). It doesn’t send the image with the object bounding boxes returned by Amazon Rekognition; those boxes are drawn onto a new image by the HASS-amazon-rekognition component.

The HASS-amazon-rekognition component creates .jpg files with this filename format:

rekognition_<entity_name>_latest.jpg

e.g. rekognition_cam_abc_latest.jpg

For the original post in this thread, this maps to:

"rekognition_" + msg.data.base_id + "_latest.jpg"

So the Pushover preparation (Function node) would look something like:

msg.payload = "Sensor: " + msg.data.new_state.attributes.friendly_name + "\n" + "Person Confidence: " + msg.person_confidence + "%";
msg.topic = "Person Detected";
msg.device = "YourPushoverDeviceName";
msg.priority = 1;
msg.image = "/config/www/camera_snapshot/rekognition_" + msg.data.base_id + "_latest.jpg"
return msg;

Assuming you configured amazon_rekognition to write files to the same directory:

save_file_folder: /config/www/camera_snapshot/

For iOS notifications, the call_service node (domain=notify, service=mobile_app_myphone) has this Data field:

{
    "message": "Person {{ payload.person_confidences }}%: {{ payload.friendly_name }}",
    "data": {
        "attachment": {
            "url": "https://yourdomain.duckdns.org:12345/local/camera_snapshot/rekognition_{{ payload.friendly_name }}_latest.jpg"
        }
    }
}

Where a previous node copied msg.data.base_id to msg.payload.friendly_name.
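
For reference, that copy step can be a simple Function node. A minimal sketch, assuming msg.person_confidence was set earlier in the flow (as in the Pushover Function node above):

// Copy values onto msg.payload so the Data template above can reference them
msg.payload = msg.payload || {};
msg.payload.friendly_name = msg.data.base_id;
msg.payload.person_confidence = msg.person_confidence;
return msg;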

Now my notifications show the ROI (green box) and red/yellow boxes of all detected objects.

Thanks again for all of the work you’ve put into updating my instructions. I wrote this a while ago and never changed it as the component was updated. At some point I’ll need to overhaul these.

Instructions updated!

You can update them again now, as the plugin is now event-based and supports multiple targets :grinning:
So an example flow for detecting cars and persons would be welcome :+1:
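
In the meantime, a multi-target Function node might look something like this. A rough sketch only: it assumes an upstream “events: all” node subscribed to the component’s object-detected event, that the event data carries the object’s name, confidence, and entity_id (check all of these against your installed version), and an arbitrary 80% threshold:

// Keep only the targets we care about; drop everything else
var detection = (msg.payload && msg.payload.event) || {};   // assumed event shape
var targets = ["person", "car"];

if (targets.indexOf(detection.name) !== -1 && detection.confidence > 80) {
    msg.topic = detection.name + " detected";
    msg.payload = detection.name + " (" + detection.confidence + "%) from " + detection.entity_id;
    return msg;    // hand off to the notification nodes
}
return null;       // ignore everything else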

So which one is the most recent one? And what are the diffs with the old ones…

For Blue Iris video stream tweaks, you can go here to avoid reducing your image quality.

I am still new to Home Assistant and Node-RED; however, instead of increasing the quality of the video stream from 50% to 100% as outlined in the post above, the URL below will give you an image snapshot at 100% quality. You would still need to adjust the Resize output frame width setting and maybe zero frame latency…
http://<blue_iris_ip>:<port>/image/<camera_short_name>?q=100
e.g.
http://192.168.2.216:81/image/FP1?q=100
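
And if it helps, here’s how the fetch could look in Node-RED without the mjpeg-to-jpg conversion. A sketch, assuming a Function node feeding an http request node (GET, return “a binary buffer”), then a write file node; the IP, port, and camera short name are just the example values above:

// Build the Blue Iris snapshot URL for the downstream http request node
msg.url = "http://192.168.2.216:81/image/FP1?q=100";
return msg;

Point the write file node at something like /config/www/camera_snapshot/snapshot_FP1.jpg so the rest of the flow finds it.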

How would I set that up instead of using your code to convert from mjpg to jpg?

Here’s some good news from the developer of Blue Iris, post 12

How do I get the snapshot part to work? I followed the instructions in the original post, but I’m not seeing the snapshot generate, and I can’t see how to trigger it.

@TaperCrimp - Any ideas?

The upstream node (usually a Template node) from the Snapshot node sets the entity-id of the camera that is to take the snapshot. In my flow, the upstream Template node is:

{
  "data": {
    "entity_id": "{{data.camera}}",
    "filename": "/config/www/camera_snapshot/snapshot_{{data.base_id}}.jpg"
  }
}

One way to troubleshoot this is to put a Debug node after the Template node and check the value of msg.data.camera (the entity-id). Then ensure that entity-id matches your camera’s entity-id.

Then, check that /config/www/camera_snapshot/ is a legit directory and see what files are there after manually triggering a camera motion event with Developer Tools > MQTT.

Hmm, looking at the latest version of post #1, it seems @TaperCrimp removed the Call Service node that calls camera.snapshot.
I kept that Call Service node in my flow, but I don’t see it mentioned in post #1 anymore.

That would be correct. The older versions of the custom plugin required it if I remember correctly. No need for it in the current versions.

Ah ha, I see. I kept camera.snapshot around because sometimes I still want a notification + pic when Person(s) are outside the ROI.

Yes, for 99% of use-cases, you don’t need to call camera.snapshot because the rekognition component saves an image after processing.

@TaperCrimp, I think there’s a bug in post #1.
The notification is using the old snapshot dir:

msg.image = "/config/www/camera_snapshot/

The screenshot seems to use the new dir.

You are correct. Do you have it running by any chance? I switched over to Deepstack and I’m drawing a blank on the default file name that the Rekognition component uses.

EDIT: ignore that, found it. Thanks much.

Do you use Node-RED with Deepstack? If so, how do you send images to it?
Tried to find a Deepstack node, but no luck so far.

I’ve actually switched all of this over to your Deepstack integration. Local, fast, and works extremely well. Great job!

Oh man…I just came across this thread today, was reading it through, got all excited and ready to start setting this up, and then I see your post about Deepstack. Gahh…now I want that instead :smiley: Especially if the Rekognition plugin isn’t being maintained.

How much we gotta bribe you to create an excellent guide like this but for using Deepstack integration instead?

I think there are enough folks using the Rekognition integration that the community will keep it working (at a minimum). I rely on it daily so I’m willing to fix bugs if it breaks.
I’m interested in trying Deepstack; however, my list of HA to-dos for the house is long, and Rekognition “just works”, so Deepstack is lower on my list.

Ha, it’s exactly the same. You just need to enable the Deepstack integration and change image_processing.rekognition_ to image_processing.deepstack_object_ in the Convert node.
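
For clarity, here’s roughly what that looks like inside the Convert node. A sketch, assuming the node’s job is to derive base_id by stripping the image_processing prefix from the entity-id (as the filename mapping earlier in the thread implies):

// Strip the integration prefix off the entity-id to get base_id
msg.data.base_id = msg.data.entity_id.replace("image_processing.deepstack_object_", "");
// The Rekognition equivalent would be:
// msg.data.base_id = msg.data.entity_id.replace("image_processing.rekognition_", "");
return msg;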

At this point I’ve tried Deepstack, Rekognition, and Sighthound. I had a ton of false positives with Sighthound. Deepstack and Rekognition seem to be equally good at detecting people, but Deepstack runs locally. They’re both fast though.

Ha, awesome! Glad to hear it. Thanks for that clarification, and thanks for putting all this work in to help the community! Much appreciated, and it really helps keep the entire ecosystem progressing.