Just to close this out: I had the camera set up fine. My issue was that I selected my provider but didn’t change the default model in the blueprint to match. Oops!
Lazy question: for someone who does NOT have Frigate, but does have integrated (Reolink and Ring) cameras with their own motion detection working well in HA… can I use these blueprints, or is it manual without Frigate?
I have set it up and it’s sending me pictures of every motion event, which is nice, but I wanted a snarky remark from the AI.
Using Google Gemini, I deleted the model. I also deleted the ‘state’ and used motion detection events for both cameras, with individual automations to troubleshoot… still no luck.
Help! I want sarcastic descriptions of myself immediately
Are you asking because the blueprint didn’t appear after installing LLM Vision? Same here; I am not using Frigate.
Here’s the blueprint apparently.
I love this integration - very cool
Has anyone got a solution for the message being truncated? I think this is only an issue when the image is included?
As far as I know, you have to split it into two messages: one with the response, and one with the image.
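Something like this, roughly (the notify target and image path are placeholders, not anything from the blueprint):

- action: notify.mobile_app_my_phone        # placeholder notify target
  data:
    message: "{{ response.response_text }}" # text-only message, so nothing is cut off
- action: notify.mobile_app_my_phone
  data:
    message: "Camera snapshot"
    data:
      image: /media/local/snapshot.jpg      # image-only follow-up; path is a placeholder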
I am just curious, what does your release schedule look like? I was just looking at your GitHub and noticed you have a recently closed PR for Azure support. I could really use that.
No, I have the blueprint, and it should work since I selected ‘camera’, not ‘frigate’. The video camera (Reolink) is properly integrated and even sends me snapshot pictures, so I know that part is working. But no AI description text. I chose Google Gemini, if that’s relevant.
Ohhhh, I thought you just deleted the model and it chose one automatically? Maybe that’s my issue? Google Gemini… maybe I need to manually input the model. I am getting picture snapshots to my phone but no snarky AI summaries, which I desperately need.
EDIT: Got it working; I needed to explicitly include the model gemini-1.5-flash.
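For anyone else stuck here, the fix boils down to setting the model explicitly alongside the provider. A rough sketch of what the underlying action call ends up as (field names follow the stream_analyzer example further down the thread; the camera entity is a placeholder):

- action: llmvision.image_analyzer
  data:
    provider: "XXXXXXX"        # your Google Gemini provider entry, redacted here
    model: gemini-1.5-flash    # set explicitly; the blueprint default may not match your provider
    message: Describe what the camera sees.
    image_entity:
      - camera.front_door      # placeholder camera entity
  response_variable: response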
You can lower max_tokens. This will make the generated text shorter.
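For example, in the analyzer call’s data (the value is illustrative):

data:
  max_tokens: 50   # lower values yield shorter replies that fit in one notification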
Very little time to work on this right now. I think the next release will be in the first week of 2025.
Thanks for the update! Much appreciated, and hope you have a great holiday and start of the new year!!
Hello friends!
I’m totally new to HA!
I already have an automation running that sends me a notification when someone rings!
Now I would like to refine this with an LLM, but somehow I can’t get the generated text into my notification!
Does anyone have a tip for me?
Thank you very much
@valentinfrlch Unfortunately I have the same problem, but in my case I can successfully open the URL and view the clip. However, in the blueprint I get the error “Failed to fetch frigate clip”…
When I used the data_analyzer it worked without issue (once I finally got LLM Vision working), and I got long responses because it updates the input_text helper directly. But with the stream/image analyzers (I haven’t managed to sort out the Frigate event ID yet), where I have to pass the response variable to an input_text.set_value call, I get failures even when the text is well under 255 characters, even under 100. Is there a way to add an option for the other analyzers to update the entity directly, like data_analyzer does with the input_text helper?
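One possible culprit, assuming the helper was created with defaults: input_text helpers cap values at their max setting, which defaults to 100 characters (255 is the hard ceiling). A minimal helper configuration for comparison (the helper name is a placeholder):

input_text:
  llm_response:        # placeholder helper name
    name: LLM Response
    max: 255           # raise from the default of 100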
Thank you @sdaltons, this is now working! Just need to lower tokens so it doesn’t truncate the message. Unless you have a solution for that too?
Just installed and am trying out the blueprint (v1.3.1), but it doesn’t offer me a list of notification devices.
Yes, I do use the companion app on an iPhone and can send notifications using the notify.mobile_app_gary_iphone action.
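For reference, this is the direct call that works from Developer Tools → Actions (a minimal sketch):

action: notify.mobile_app_gary_iphone
data:
  message: Test notification from HA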
Any ideas here?
Thanks
With the following prompt:
“Using minimum verbosity describe any people or vehicles that you see in the driveway. If you do not see people or vehicles please do not respond at all.”
why do I get responses like the following?
{'response_text': 'There is a large tree and a sidewalk in the image, but no vehicles or people.'}
or
{'response_text': '**Driveway Observations**\nHere is a concise list of the people and vehicles present in the driveway:\n\n* 3 blue trash cans\n* A large tree\n* 1 pole\n* 2 horseshoe racks\n* A brown tarp covering the end of the driveway\n\nThere are no people visible in the image.'}
Hello everyone,
I’m experiencing an issue with the llmvision.stream_analyzer service in Home Assistant, where I keep encountering the following error:
"Script requires 'response_variable' for response data for service call llmvision.stream_analyzer"
I’ve tried various configurations, including both scripts and automations, to debug and resolve the problem, but the error persists. Despite explicitly including the response_variable field, the service fails to recognize it. Here’s an example of my current automation:
alias: LLMVision Test Automation
description: Test LLMVision integration with response_variable.
triggers:
  - entity_id:
      - binary_sensor.north_g5_ptz_vehicle_detected
      - binary_sensor.north_g5_ptz_person_detected
      - binary_sensor.north_g5_ptz_animal_detected
    to: "on"
    trigger: state
conditions: []
actions:
  - variables:
      llmvision_response: null
  - metadata: {}
    data:
      duration: 10
      max_frames: 3
      include_filename: true
      target_width: 1280
      max_tokens: 100
      temperature: 0.2
      expose_images: true
      provider: "XXXXXXX"
      message: >-
        Here are three images from a security camera that just detected motion.
        Provide a short and concise description of what caused the motion alert.
      image_entity:
        - camera.north_g5_ptz_high_resolution_channel
    response_variable: llmvision_response
    action: llmvision.stream_analyzer
  - target:
      entity_id: input_text.debug_response
    data:
      value: "{{ llmvision_response.response_text }}"
    action: input_text.set_value
  - data:
      level: info
      message: "LLMVision Raw Response: {{ llmvision_response }}"
    action: system_log.write
  - data:
      message: "LLMVision result: {{ llmvision_response.response_text }}"
      title: Motion Analysis
    action: notify.notify
mode: single
What I’ve Tried
- Variable initialization: declared llmvision_response using the variables block and initialized it to null.
- Renamed response_variable: changed response to llmvision_response in case response was a reserved word.
- Simplified service calls: tested minimal configurations using only the required fields in Developer Tools → Services.
- Logs: enabled debug logging for the llmvision component but found no additional information.
- Camera integration: confirmed the camera entity (camera.north_g5_ptz_high_resolution_channel) works properly in Home Assistant.
- Direct testing: attempted to call the service in a script, with the same error.
What’s truly strange is that if I just build the action in Developer Tools → Actions, it works perfectly. I just cannot for the life of me get it to work in an automation. I’m using Google Gemini, if that matters at all. I’m sure this is just something dumb that I’m overlooking.
Welcome to Home Assistant! Your automation looks fine from what I can see. Could you share a trace? You can find it by clicking the three dots, then “Trace”. It might help to debug your automation.