LLM Vision with UniFi Protect

Since most LLM Vision users seem to be using Frigate, I thought I would start a thread about using it with UniFi Protect. I'm using the actions rather than the blueprint, mostly with Qwen3-VL 32B on a Mac. I find that Qwen3-VL has excellent spatial-analysis ability. I'm sure Gemini 3 Pro is better, but of course it isn't local.

I mostly use GPT-OSS 120B as my general-purpose AI.

My biggest issue is that the image selected to display on the cards does not usually show a frame that represents why the automation was triggered. The description I get from the LLM is good.

So I'm looking at using the image Protect captures to represent the trigger instead, which is much better. I'm also thinking about making the full video clip available to HA: I write the clip to the media folder from Protect, which can be set up from the Protect storage setup screen.
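As a rough sketch of that idea (entity IDs and the snapshot path are placeholders, and the `llmvision.image_analyzer` action parameters are from memory and may differ in your LLM Vision version):

```yaml
# Hypothetical automation: analyze the Protect-captured snapshot
# rather than a live camera frame, then notify with the description.
alias: Describe Protect detection
trigger:
  - platform: state
    entity_id: binary_sensor.driveway_person_detected   # placeholder sensor
    to: "on"
action:
  - service: llmvision.image_analyzer
    data:
      provider: YOUR_PROVIDER_ID        # as configured in LLM Vision
      image_file: /media/unifi/driveway/latest_detection.jpg   # placeholder path
      message: "Describe what triggered this detection."
      max_tokens: 100
    response_variable: analysis
  - service: notify.mobile_app_phone    # placeholder notify target
    data:
      message: "{{ analysis.response_text }}"
```

The key point is just that `image_file` points at whatever Protect wrote to the media folder, not at a frame grabbed at trigger time.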

What are you doing?


My flow:

- Protect fires a detection event (motion on camera, doorbell, etc.)
- which triggers the LLM Vision blueprint (a quick search should get you a hit)
- which sends the image to the LLM of choice (and caches the result)
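The "cache the result" step can be done with an `input_text` helper so dashboard cards can show the last description. A sketch (the `response_text` field name is an assumption about the shape of the LLM Vision response):

```yaml
# Assuming the analyzer was called with response_variable: analysis,
# store the description in a helper; input_text caps values at 255 chars.
- service: input_text.set_value
  target:
    entity_id: input_text.last_detection_summary   # helper you create
  data:
    value: "{{ analysis.response_text | truncate(255) }}"
```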

I'm using Llama3.2-vision.