Integrating Blue Iris, Pushover, Node-RED, and Amazon Rekognition

I lost all my Node-RED flows. Is there somewhere I can get the flow for this?

Posted in another comment.

@robmarkcole I’ve actually switched all of this over to your Deepstack integration. Local, fast, and works extremely well. Great job!


@TaperCrimp I’ve been using rob’s Rekognition component for the last 6 months and have zero complaints. Besides Deepstack being local (so presumably faster to respond) and free, though, are there any significant differences between it and Rekognition? Is the model accuracy comparable?


Nothing major. Deepstack seems slightly faster, but that could just be network latency to AWS; I’ve also got decent hardware running Deepstack. Accuracy is pretty close. That, and @robmarkcole said a while back that he was no longer maintaining the Rekognition plugin. I’d also get an error with the Rekognition component on every other restart.


Great to hear you are getting along with Deepstack; since it is local there are potential speed advantages. Some members of the community are helping out with the Rekognition integration, so it still has legs!


I just loaded it but haven’t got it working yet (my mistake). I do have a Coral, though. Should I use the local server since I have the Coral?

Great project! I got it up and running detecting people just fine, but I’m trying to modify it to detect “canine” to set off an alarm when the dog is eating the cat’s food, and I can’t seem to get it working.

I changed all the instances of “person” to “canine” and set the target on the camera to “canine” as well. The flow runs up until the “Person Check” current-state node; I’m not sure how to check whether it’s getting a count > 0 for canine.

Edit 2/21/2020: This is a known problem where only “common” objects get an instance count. It detects canine but never sets the count from 0 to 1, even if it’s over 80% confidence. There is a PR on the integration with a workaround, tested and working for canine! https://github.com/robmarkcole/HASS-amazon-rekognition/pull/20
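
For anyone trying the same thing, the yaml-side change is just the target value. A minimal sketch (the confidence key name is my assumption about the integration’s options, and prior to the linked PR the instance count is only populated for “common” object classes):

  target: Canine
  confidence: 80    # assumed option name; the 80% threshold is the one mentioned above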

@TaperCrimp could you share your updated node-red setup where you switched over to deepstack? I got it up and running with the integration from @robmarkcole but I’m getting stuck with image processing. Really cool stuff!

Yep, included below. I used HACS to install the integration.

[{"id":"8d0e114.5311af","type":"server-state-changed","z":"4dc6f85c.c0d998","name":"Camera Motion","server":"744f1fa7.a5248","version":1,"entityidfilter":"binary_sensor.\\w{3}_motion","entityidfiltertype":"regex","outputinitially":true,"state_type":"str","haltifstate":"on","halt_if_type":"str","halt_if_compare":"is","outputs":2,"output_only_on_state_change":false,"x":140,"y":380,"wires":[["8122d953.f9b428"],[]]},{"id":"8122d953.f9b428","type":"change","z":"4dc6f85c.c0d998","name":"Convert","rules":[{"t":"delete","p":"payload","pt":"msg"},{"t":"set","p":"data.base_id","pt":"msg","to":"$match(topic, /binary_sensor.(\\w{3})_motion/).groups[0].$string()","tot":"jsonata"},{"t":"set","p":"data.camera","pt":"msg","to":"\"camera.\" & $.data.base_id","tot":"jsonata"},{"t":"set","p":"data.image_processing","pt":"msg","to":"\"image_processing.deepstack_object_\" & $.data.base_id","tot":"jsonata"},{"t":"set","p":"data.camera_snapshot","pt":"msg","to":"\"/config/www/camera_snapshot/snapshot_\" & $.data.base_id & \".jpg\"","tot":"jsonata"},{"t":"set","p":"payload.entity_id","pt":"msg","to":"\"image_processing.deepstack_object_\" & $.data.base_id","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":300,"y":380,"wires":[["35a00c81.f0c964"]]},{"id":"9afb62d2.e806","type":"api-current-state","z":"4dc6f85c.c0d998","name":"Person Check","server":"744f1fa7.a5248","version":1,"outputs":2,"halt_if":"0","halt_if_type":"num","halt_if_compare":"gt","override_topic":false,"entity_id":"","state_type":"num","state_location":"","override_payload":"none","entity_location":"","override_data":"none","blockInputOverrides":false,"x":620,"y":380,"wires":[["28e3f391.ac978c"],[]]},{"id":"35a00c81.f0c964","type":"api-call-service","z":"4dc6f85c.c0d998","name":"Deepstack","server":"744f1fa7.a5248","version":1,"debugenabled":false,"service_domain":"image_processing","service":"scan","entityId":"","data":"{\"entity_id\":\"{{data.image_processing}}\"}","dataType":"json","mergecontext":"","output_location":"","output_location_type":"none","mustacheAltTags":false,"x":450,"y":380,"wires":[["9afb62d2.e806"]]},{"id":"e7105bcf.11d7c8","type":"pushover api","z":"4dc6f85c.c0d998","keys":"9f1dc855.c8bb68","title":"","name":"Alert","x":910,"y":380,"wires":[]},{"id":"28e3f391.ac978c","type":"function","z":"4dc6f85c.c0d998","name":"Payload","func":"msg.payload = \"Sensor: \" + msg.data.new_state.attributes.friendly_name;\nmsg.topic = \"Person Detected\";\nmsg.device = \"YourDevice\";\nmsg.priority = 1;\nmsg.image = \"/config/www/camera_snapshot/deepstack_object_\" + msg.data.base_id + \"_latest_person.jpg\"\nreturn msg;","outputs":1,"noerr":0,"x":780,"y":380,"wires":[["e7105bcf.11d7c8"]]},{"id":"744f1fa7.a5248","type":"server","z":"","name":"Home Assistant","legacy":false,"addon":true,"rejectUnauthorizedCerts":true,"ha_boolean":"y|yes|true|on|home|open","connectionDelay":false},{"id":"9f1dc855.c8bb68","type":"pushover-keys","z":"","name":"Default API"}]

I have a basic question on the node-red flow :slight_smile:
On motion, we take a camera snapshot and save it, let’s say at 10:01:00 AM.
Then, after a delay of 1 s (so at 10:01:01 AM), we request the object detection service (AWS Rekognition or Deepstack).

So, is it true that we use two different images taken one second apart: one included in the alert (the snapshot) and one sent to the object detection service? That would mean that if the Flash ran by, he could be in the snapshot but not identified by object detection. Are we also straining the camera by pulling two snapshots for the same event?

Sorry if I’m thinking about this wrong. Just curious, as I’m trying to build a basic flow.

Update: I noticed that with the newer code you can get the saved bounding-box output and use that image in the notify alert as well, so the alert and object recognition use the same frame, and the plain snapshot is just for archival.

I’d also get an error with the Rekognition component on every other restart.

I have created a patch to fix the KeyError: .. endpoint_resolver error that prevents some cameras from starting when HA is restarted. cc: @jeremeyi

The github pull request is:

I am currently running version 1.3 of this (via HACS) and have been using it for about a year. When I try to update to any version higher than this, it breaks for me with the following errors:

2020-05-02 23:35:02 ERROR (MainThread) [homeassistant.components.image_processing] Error while setting up amazon_rekognition platform for image_processing
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 178, in _async_setup_platform
    await asyncio.wait_for(asyncio.shield(task), SLOW_SETUP_MAX_WAIT)
  File "/usr/local/lib/python3.7/asyncio/tasks.py", line 442, in wait_for
    return fut.result()
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/amazon_rekognition/image_processing.py", line 106, in setup_platform
    save_file_folder = config[CONF_SAVE_FILE_FOLDER]
KeyError: 'save_file_folder'

(the same traceback is repeated two more times)

Any ideas on what the issue might be? If I revert back to version 1.3 it is fine.

Yeah, I encountered this error too when I installed amazon_rekognition this week. There may be a bug there.
The workaround is to add save_file_folder to your yaml file, like this example:

  region_name: us-west-1
  save_file_folder: /config/www/camera_snapshot/
  target: Person

I will look into whether there’s an easy bugfix.
Thanks for reporting the issue because I wasn’t sure if it was unique to my setup or not.
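
Putting the workaround together with the article’s example, a full platform entry might look like this (a sketch only; make sure the save_file_folder directory actually exists):

- platform: amazon_rekognition
  aws_access_key_id: !secret aws_access_key
  aws_secret_access_key: !secret aws_secret_key
  region_name: us-west-1
  save_file_folder: /config/www/camera_snapshot/
  target: Person
  source:
    - entity_id: camera.fdc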

I’ve created a pull request to fix the KeyError bug.

I’ve learned something about the amazon_rekognition component that affects the original how-to created by @TaperCrimp, which is an awesome how-to article that got many of us started.

From the article/original post:

In that directory I have one yaml file for each camera that I’m pulling. In this case we’ll be using amazon_rekognition_fdc.yaml as a reference point, but there’s also amazon_rekognition_bdc.yaml, amazon_rekognition_byc.yaml, etc.

Well, the way HASS-amazon-rekognition by @robmarkcole works is that it creates a separate boto client for each file if you use this pattern. Because the underlying boto library is not thread-safe, the more files you have, the more clients get created, and the greater the chance of hitting the dreaded endpoint_resolver error (and having missing image_processing entities/cameras).

But there’s another way: HASS-amazon-rekognition and Amazon support multiple cameras per boto client.
What does this mean in plain English? It means you can (and probably should) configure all your cameras in ONE .yaml file, like so (following the article’s examples):

- platform: amazon_rekognition
  aws_access_key_id: !secret aws_access_key
  aws_secret_access_key: !secret aws_secret_key
  region_name: us-east-1
  target: Person
  scan_interval: 604800
  source:
    - entity_id: camera.fdc
    - entity_id: camera.bdc
    - entity_id: camera.byc

So the filename might be something like:
image_processing/amazon_rekognition_all_cameras.yaml
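
If you use the article’s split-config layout, configuration.yaml then needs to pull that folder in. One way to do it, assuming each file in the folder contains a list of platform entries like the example above, is:

image_processing: !include_dir_merge_list image_processing/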

If you do it this way, I think you’ll find a lot fewer missing cameras.
And the pull request posted earlier should make things even better.


Nice, I just saw via HACS that the patch was merged.

FYI, v2.2 (possibly v2) seems to break this automation.

The ‘Person Check’ node returns an error:

Entity could not be found in cache for entity_id: image_processing.rekognition_person_pac

(pac is the camera name)

Rolled back to v1.8, changed ‘targets’ back to ‘target’, and everything is working again.

Just updated the instructions around the “targets” configuration and the Node-RED “convert” function.

I have omitted the “targets” reference from my yaml as it defaults to Person anyway (also gets around v2’s ‘breaking change’).
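
If I’m reading the breaking change correctly, v2 renamed the single target string to a targets list, so the two versions look roughly like this (not verified against v2.2):

  # v1.x
  target: Person

  # v2+
  targets:
    - person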

It’s the Convert node that was tripping me up. I was looking for references to ‘target/s’. Wouldn’t have thought it was the ‘Person’ reference from the last step that fixes this.

SET payload.entity_id to:
"image_processing.rekognition_" & $.data.base_id

After updating I’m still getting an error, but now from the ‘Person Check’ node:

aws_data: object
entity_id: "image_processing.rekognition_pac"
state: NaN

but a bit further down in the debug window:

attributes: object
targets: array[1]
0: "person"

Should the ‘0: "person"’ entry be where the data is pulled from, rather than ‘aws_data’?