Do I need to start thinking about additional hardware and Frigate (beyond my Home Assistant Blue and Unifi cameras, which are non-AI) in order to trigger events when a heron lands in my pond to steal my fish? Is that best practice?
On a more generic note, referring to my issues with detecting animals other than my cat… it's not easy to be 100% correct, but it all (for me, then) depends on what the action would be if the detection is positive, i.e. whether a sound will be triggered to scare off the bird. Triggering false positives is the main risk in automation and might annoy someone (neighbours etc.).
I of course have no definitive knowledge of how good AI can be at this.
For my cat, the only good solution was a chip detector, which does not help with herons.
Frigate plus Coral works well for me. You select the objects that you wish to have detected. However, a small muntjac deer is recognised as a dog, and I have no idea whether a heron would be recognised as a bird. But the system is so responsive for me that birds are captured and detected mid-flight.
Be aware that this can be an intensive process. A detector like a Coral is a must.
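As a minimal sketch of what that object selection looks like (assuming a USB Coral and Frigate's default COCO-based model, which has a generic `bird` label but no heron-specific class), the relevant parts of a Frigate config could be:

```yaml
# Hypothetical frigate.yml excerpt: a USB Coral as the detector,
# tracking "bird" (and "person") on a pond-facing camera.
detectors:
  coral:
    type: edgetpu
    device: usb

cameras:
  pond:  # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@camera-ip/stream  # placeholder stream URL
          roles:
            - detect
    objects:
      track:
        - bird
        - person  # handy later for suppressing the sprinkler around people
```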
My plan is to trigger a sprinkler that fires across the pond, which should be enough to scare them off. I don't mind ducks, and they'd likely ignore it anyway.
I mostly want to avoid triggering it when humans are at the pond; that's why plain motion detection won't do. All other animals are fair 'game' (sorry).
Sounds like a plan. I use a snapshot analyser via Deepstack, which is pretty good at detecting 'person'. So this might also be a solution: if something is detected and it is not a 'person', fire away.
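A hedged sketch of that "fire unless a person" logic, assuming the HASS-Deepstack-object custom integration (whose image_processing entity reports the count of its configured targets after a scan) with `person` as its only target; all entity IDs are placeholders:

```yaml
# Hypothetical automation: on pond motion, scan a frame with Deepstack,
# then run the sprinkler only if no person was found.
automation:
  - alias: "Pond sprinkler unless a person is present"
    trigger:
      - platform: state
        entity_id: binary_sensor.pond_motion
        to: "on"
    action:
      - service: image_processing.scan
        target:
          entity_id: image_processing.deepstack_pond
      - condition: numeric_state
        entity_id: image_processing.deepstack_pond
        below: 1  # zero persons detected in the last scan
      - service: switch.turn_on
        target:
          entity_id: switch.pond_sprinkler
```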
I feed my driveway camera images into it for some fun, and the description normally comes back with e.g. the color and model of the car that's driving onto my property.
Asking the right question, I can easily imagine that you could get a response back telling you whether the motion that triggered the image analysis actually shows a heron. It requires a (movement) trigger of some sort, though.
These blueprints are awesome and I'm sure I'll make use of them, but for what I'm trying to achieve, I want the response from an LLM to trigger something when it says it has seen a bird, or even just an animal. The blueprints shared can only alert a device, not trigger an action.
I guess parsing the response for the word ‘heron’ - and triggering an action based on that - would be an option.
You might have to include specific instructions in the prompt, like "look especially for a heron in the image and confirm if one is seen, but do not comment on it if there is no heron in the picture".
I do something similar to check whether my trash cans have been moved or rotated: I take photos on trash day to determine if the cans have been picked up and put down again.
I'd then use a template sensor to check for the word 'heron' in the response - or multiple other words in the same sensor, or multiple sensors for different words - and trigger the desired actions off the value of the template sensor:
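Something along these lines, where input_text.llm_pond_response is a placeholder helper that an automation writes the LLM's reply into:

```yaml
# Hypothetical template binary sensor: turns on whenever the stored
# LLM response mentions a heron.
template:
  - binary_sensor:
      - name: "Heron spotted"
        state: >
          {{ 'heron' in states('input_text.llm_pond_response') | lower }}
```

An automation can then trigger on binary_sensor.heron_spotted turning on and start the sprinkler.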
I'm sure there are a dozen other - and probably simpler - ways to do that, though.
But the input_text helper would also give me a history of the responses.
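To sketch how that helper could be filled (assuming the Google Generative AI integration's generate_content action, which can hand its reply back via response_variable; the attachment field name varies between Home Assistant versions, and all entity IDs and paths here are placeholders):

```yaml
# Hypothetical automation: on pond motion, snapshot the camera, ask the
# LLM about the image, and store its reply in an input_text helper so it
# shows up in history.
automation:
  - alias: "Analyse pond snapshot"
    trigger:
      - platform: state
        entity_id: binary_sensor.pond_motion
        to: "on"
    action:
      - service: camera.snapshot
        target:
          entity_id: camera.pond
        data:
          filename: /config/www/pond_snapshot.jpg  # must be in allowlist_external_dirs
      - service: google_generative_ai_conversation.generate_content
        data:
          prompt: >
            Look especially for a heron in the image and confirm if one is
            seen, but do not comment on it if there is no heron.
          filenames:
            - /config/www/pond_snapshot.jpg
        response_variable: llm_reply
      - service: input_text.set_value
        target:
          entity_id: input_text.llm_pond_response
        data:
          # input_text helpers are capped at 255 characters by default
          value: "{{ llm_reply.text | truncate(255) }}"
```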