Detecting birds that land in my pond. Frigate?

Do I need to start thinking about additional hardware and Frigate (beyond my Home Assistant Blue and my non-AI Unifi cameras) in order to trigger events when a heron lands in my pond to steal my fish? Is that best practice?

Or replace my camera with the AI version?

I’m pretty sure the Unifi AI versions (or standard ones with an AI Port) only detect ‘animal’, not which animal. The new AI Key might, though.

On a more generic note, referring to my own issues with detecting animals other than my cat: it’s not easy to be 100% correct, but it all depends (for me, anyway) on what the action would be if the detection is positive, i.e. whether you trigger a sound to scare off the bird. False positives are the main risk in automation and might annoy someone (neighbours etc.).
I of course have no definitive knowledge of how good AI can be in this matter.
For my cat, the only good solution was a chip-detector which does not help with Herons :rofl:

Frigate plus a Coral works well for me. You select the objects that you wish to be detected. However, a small muntjac deer is recognised as a dog, and I have no idea whether a heron would be recognised as a bird. But the system is so responsive for me that birds are captured/detected mid-flight.
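For reference, telling Frigate to track birds is just a matter of listing the object per camera — a minimal sketch, assuming a hypothetical camera named `pond_cam` (“bird” is one of the COCO labels the default model knows):

```yaml
# Hypothetical camera name - adjust to your own setup.
cameras:
  pond_cam:
    objects:
      track:
        - bird          # one of the default COCO labels
      filters:
        bird:
          min_score: 0.6  # raise this to cut down on false positives
```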

Be aware that this can be an intensive process. A detector like a Coral is a must.

Edit: my automations trigger pretty much instantly on specific object detection.

my plan is to trigger a sprinkler that fires across the pond and should be enough to scare them. I don’t mind ducks and they’d likely ignore it anyway.

I’m mostly wanting to avoid triggering it when humans are at the pond, that’s why motion won’t do. All other animals are fair ‘game’ (sorry).

Would Coral plugged into my Home Assistant Blue be the way to go? Or just use a Pi with Coral?

Sounds like a plan. I use a snapshot analyser via Deepstack which is pretty good at detecting ‘person’. So this might also be a solution: if detected and if not ‘person’ > fire away :slight_smile:
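For context, the Deepstack object custom component is configured roughly like this — a sketch with hypothetical server address and camera entity, so check the component’s README for the exact schema:

```yaml
image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.10   # hypothetical Deepstack server
    port: 80
    source:
      - entity_id: camera.pond  # hypothetical camera entity
    targets:
      - target: person
        confidence: 60
```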

If you’re ok with a cloud based AI, you could try this custom component

If you’re okay with Google GenAI to analyze your images you could try this blueprint:
Camera Snapshot, AI & Notification on Motion

Here’s the community post that goes with it:
Generative AI camera snapshot notification - Blueprints Exchange - Home Assistant Community

I feed my driveway camera images into it for some fun and the description normally comes back with e.g. the color and the model of the car that’s driving on my property.

Asking the right question, I can easily imagine you’d get a response back in case the motion that triggered the images to be analyzed actually shows a crane - it requires a (movement) trigger of some sort, though.

Beware of this annoying issue, though:
Error 429: Quota Exceeded for Google Generative AI Integration - Third party integrations - Home Assistant Community

These blueprints are awesome and I’m sure I’ll make use of them, but for what I’m trying to achieve, I want the response from an LLM to trigger something when it says that it has seen a bird, or even just an animal. The blueprints shared can only alert a device, not trigger an action.

I guess parsing the response for the word ‘heron’ - and triggering an action based on that - would be an option.
You might have to include specific instructions in the prompt like “look especially for a heron in the image and confirm if one is seen, but do not comment on it if there’s no heron in the picture”

I do something similar for checking if my trash cans have been moved or turned/rotated, taking photos on trash day to determine if the cans have been picked up and put down again.

Are you using one of the above blueprints to do that? I can’t see how to trigger an action based on the response with the above.

Nope - I used them, or rather their approach, to create my own Google GenAI query.

That’s why I suggested that

This would be my approach, based on:

  1. having a line in the inquiry like response_variable: generated_content
    as well as
  2. using a helper and an action like this:
  - action: input_text.set_value
    data:
      value: "{{ generated_content['text'] }}"
    target:
      entity_id: input_text.gen_ai_fun_fact
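For context, the first point would look something like this in full — a sketch only, with a hypothetical prompt and snapshot path (the parameter for attaching images has changed between Home Assistant releases, so check the integration docs):

```yaml
# Sketch - hypothetical prompt and file path.
- action: google_generative_ai_conversation.generate_content
  data:
    prompt: "Is there a heron in this image? Answer in one short sentence."
    filenames:
      - /config/www/snapshot/pond.jpg
  response_variable: generated_content
```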

I’d then use a template sensor to check for the word ‘heron’ in the response - or multiple other words in the same sensor or multiple sensors for different words - and trigger the desired actions off of the value of the template sensor:

sensor:
  - platform: template
    sensors:
      heron_in_text:
        value_template: >
          {% if 'heron' in states('input_text.gen_ai_fun_fact') | lower %}
            true
          {% else %}
            false
          {% endif %}

I’m sure there are a dozen other - and probably simpler - ways to do that, though.
But the input_text helper would also give me a history of the responses.
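To close the loop, an automation could then fire off that template sensor’s state — a sketch assuming a hypothetical sprinkler switch entity:

```yaml
automation:
  - alias: "Heron spotted - fire sprinkler"
    trigger:
      - platform: state
        entity_id: sensor.heron_in_text
        to: "true"
    action:
      - action: switch.turn_on
        target:
          entity_id: switch.pond_sprinkler  # hypothetical entity
```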