Hello. I wonder how to use the response as an automation condition. My AI analyzer prompt is: 'If there are no humans, animals, vehicles, or any moving objects, respond with only one word: "ignore".'
And if the feedback from the AI is "ignore", stop further actions.
The "create 'Timeline' config entry first" error refers to the LLM Vision Timeline provider. Learn how to set it up here: Timeline | LLM Vision | Getting Started
@Bobbik_the_Pit_Droid The image, video and stream analyzers return a response variable when called. The available keys in the response object are explained here: Usage | LLM Vision | Getting Started
like this?
{% if states('input_text.input_text_test') == "ignore" %}
{% endif %}
Is it possible to filter what remember adds to the timeline? I would like to filter out the "no motion observed" entries so my calendar and LLM Vision card do not get spammed like they do now.
I am also looking for a way to translate the title. For the description I can just translate the prompt and get the output translated, but there doesn't seem to be anything for the title.
Thank you so much. This is an incredible integration… mad respect… I’m almost there, just waiting for the next hapless passer-by
Not directly. You could create a script in which you call the analyzer, implement some custom logic based on the output, and then use the separate llmvision.remember action to add a custom event.
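For anyone wanting a starting point, here is a rough sketch of that approach. The camera entity and provider ID are placeholders, and the llmvision.remember fields (title, summary) as well as the exact analyzer parameters are assumptions, so check the real action schemas in Developer Tools > Actions first:

script:
  analyze_and_remember:
    sequence:
      - service: llmvision.image_analyzer          # run the analyzer and capture its output
        data:
          provider: YOUR_PROVIDER_ID               # provider config entry ID (see integration settings)
          image_entity:
            - camera.front_door                    # placeholder camera
          message: "Describe any activity in the image."
          max_tokens: 50
          remember: false                          # don't auto-remember; we decide below
        response_variable: response
      - if:
          - condition: template
            value_template: "{{ 'no activity' not in response.response_text | lower }}"
        then:
          - service: llmvision.remember            # only store events that show real activity
            data:
              title: "Front door activity"                # assumed field name
              summary: "{{ response.response_text }}"     # assumed field name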
@Bobbik_the_Pit_Droid The analyzers don't store their response in an entity (unless you're using the data_analyzer). The proper way to check whether a word is in the response is like this:
{{'ignore' in response.response_text}}
This will return a boolean, so you can use it in subsequent if or choose blocks. You'll also need to set response_variable to "response".
Any way of showing 24hr format in the timeline card?
I tried using the image analyzer with remember enabled. Events are getting added to the timeline, but without the image. I have enabled "expose images" to use them in notifications, and that works great, but I have no idea why the images are not available in the timeline items.
The HTML doesn't seem to have any image source.
How can I set up memory? I am on 1.4.1 but can't find it. I tried reconfiguring as well.
And a question for Ollama users: I have the latest Ollama Docker container, but when I set up the Ollama integration in Home Assistant I can't find Gemma3 in the list. What should I do?
EDIT: Never mind, found it!
@tekno-yanqui You can increase max_tokens. This parameter sets how many tokens the model generates.
@chris4824 Good idea! I’ll add an option for that in the next version.
@skycryer Should be fixed in v1.4.1.
The maximum for tokens is 100. If one is running a local AI, would it be possible to define a higher range or disable the token limit?
Hi all,
I have LLM Vision set up and working, sort of, but the responses I am getting aren't that great.
Is there a better model I could use?
Currently I am using llama3.2. Would llama3.2-vision, GPT-4 or even Google's models be better?
I am using Ollama locally as well, with llama3.2. I would like to try something else since it is not doing great either. I thought the latest Gemma3 had vision as well, but I can't choose it in the integration. Any suggestions regarding models?
Hi,
Is there a way, when using the blueprint (currently on 1.4.1), to not "remember" anything that comes back with "No activity detected/observed"? For whatever reason that's what I get back 9 out of 10 times; I've tried both the Frigate cameras and the cameras themselves, with OpenAI and Gemini. I don't have notifications set up, I'm just using it so I can use the timeline card, as I do all my notifications via Frigate. My current blueprint config is:
alias: LLM Vision Events
description: ""
use_blueprint:
  path: valentinfrlch/event_summary.yaml
  input:
    remember: true
    camera_entities:
      - camera.chickens_high
      - camera.field_high
      - camera.gate_high
      - camera.g5_bullet_high_resolution_channel_3
      - camera.g5_bullet_high_resolution_channel
      - camera.g5_bullet_high_resolution_channel_2
      - camera.game_room_high
      - camera.school_room_high
    provider: 01JK7J0CEX9DWS1SX0C2MY755N
    model: gemini-2.0-flash
    important: false
    use_memory: true
    cooldown: 1
    max_frames: 10
    duration: 10
    max_tokens: 40
@Kelvin, @M203
There was a bug with gemma3 in Ollama, but it should be fixed in Ollama 0.6.2. Gemma3 is much better than llama3.2-vision in my opinion.
Not sure, but you might be able to set a higher value for max_tokens when using YAML mode instead of the UI. The maximum of 100 only applies in the blueprint; I'll increase the limit there.
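For reference, a direct action call in YAML (outside the blueprint) might look roughly like this; the camera entity and provider ID are placeholders, and whether the provider itself caps the token count is a separate question:

- service: llmvision.image_analyzer
  data:
    provider: YOUR_PROVIDER_ID               # provider config entry ID
    image_entity:
      - camera.front_door                    # placeholder camera
    message: "Summarize what is happening."
    max_tokens: 250                          # not limited to 100 outside the blueprint
  response_variable: response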
Yes, I am running the latest Ollama, but when setting up the Ollama integration in Home Assistant I can't choose Gemma3. It isn't available in the list; only Gemma, Gemma2, codegemma and shieldgemma are present.
Ollama with Gemma3 14b works great with v1.4.1.
However, since yesterday my LLM Vision has somehow been processing the image from the previous motion alert.
How do I use Gemma3 14b? I have the latest Ollama in Docker, but when I set up the Ollama integration in Home Assistant I can't find gemma3 in the list. Should I do something with the Ollama Docker container?
Seconding this post. I'm scratching my head trying to figure out how to add TTL: 0 and Priority: high to the blueprint for this. Anyone have recommendations? I'm fairly new to HA; my understanding is that you add these to the data section, but the blueprint doesn't expose one.
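In case it helps, this is roughly what those options look like in a plain notify action for the companion app; the notify target is a placeholder, and whether the blueprint exposes a field for extra notification data depends on the blueprint itself:

- service: notify.mobile_app_your_phone      # placeholder notify target
  data:
    title: "Motion detected"
    message: "{{ response.response_text }}"
    data:
      ttl: 0                                 # deliver immediately (Android companion app)
      priority: high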