Last year Reolink updated their camera firmware with 'Motion Mark', a beta feature that identifies where AI-detected objects are in the image (x, y, w, h).
I don't know if this information is currently exposed through their APIs/ONVIF data, but it would be great to work with them to expose it as event data for Person/Vehicle/Pet detection events in Home Assistant.
Being able to define different regions of interest which can be used to conditionally trigger different events would be very powerful.
I can imagine there being a lot of use cases, but as examples:
I want my cameras to record every movement, but I only want to turn on the outside lights when that movement occurs close to the house. Currently this isn't possible: the integration triggers person/vehicle/animal detection events regardless of where in the camera's field of view they appear. The only control offered is disabling alerting/recording entirely for certain regions.
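To illustrate the idea, here is a minimal sketch of the kind of check the integration could perform once bounding-box data is available: test whether a detection's (x, y, w, h) box intersects a user-defined region of interest, and only fire the conditional action when it does. All names, coordinates, and the region definition below are hypothetical; nothing here reflects an actual Reolink or Home Assistant API.

```python
# Hypothetical sketch: deciding whether an AI detection box overlaps a
# user-defined region of interest. Boxes are (x, y, w, h) tuples in
# image coordinates. The Reolink integration does not expose this today.

def boxes_overlap(box, region):
    """Return True if the detection box intersects the region."""
    bx, by, bw, bh = box
    rx, ry, rw, rh = region
    # Two axis-aligned rectangles overlap iff they overlap on both axes.
    return bx < rx + rw and rx < bx + bw and by < ry + rh and ry < by + bh

# Hypothetical region close to the house: the bottom strip of a 1080p frame.
NEAR_HOUSE = (0, 600, 1920, 480)

# A person detected near the bottom of the frame triggers the lights;
# one detected far away does not.
near_detection = (900, 700, 120, 300)
far_detection = (900, 100, 120, 200)

if boxes_overlap(near_detection, NEAR_HOUSE):
    print("turn on outside lights")  # fires for near_detection only
```

A region-based condition like this would let the camera keep recording everything while the automation filters which detections actually act on devices.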