Hi all - a newbie with Node-RED so please bear with me. I am trying to string my MotionEye camera into Deepstack for analysis, and then output a notification (preferably through TelegramBot). I'm running into something basic: I am able to take a snapshot and the file is created, but I don't know how to pass that file into Deepstack.
I'm using node-red-contrib-deepstack, and according to its documentation the input is supposed to be an image buffer in msg.payload:
Node Object Detection
Sends an image to the Deepstack Object Detection API and outputs the predictions.
Inputs
The input message should contain the image to process.
msg.payload: Image buffer to process.
How would I go about getting the file into msg.payload as an image buffer so the Deepstack node can process it? Any tip or hint would be greatly appreciated.
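In case it helps show what I mean, this is roughly what I was imagining in a Function node. It's only a sketch: the snapshot path and msg.filename are guesses for wherever MotionEye saves the file, and I assume the Node.js fs module would have to be made available to Function nodes (via the Setup tab in newer Node-RED versions, or functionGlobalContext). I also wondered whether the core File In node, set to output "a single Buffer object" and wired into the Deepstack node, would do the same thing without any code.

```javascript
// Sketch only - assumes 'fs' has been added as "fs" on the Function node's
// Setup tab (Node-RED 1.3+) or exposed via functionGlobalContext.
// The path below is just a placeholder for wherever MotionEye writes the snapshot.
const snapshotPath = msg.filename || "/var/lib/motioneye/Camera1/lastsnap.jpg";

fs.readFile(snapshotPath, (err, data) => {
    if (err) {
        node.error("Could not read snapshot: " + err, msg);
        return;
    }
    msg.payload = data;   // 'data' is a Buffer, which is what the Deepstack node expects
    node.send(msg);
    node.done();
});

return null; // message is sent asynchronously in the callback above
```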