As a quick follow-up question, I'd also love to know if there is a way to attach the image to the event in the event calendar, so that it isn't just the LLM response without the contextual image.
Haven’t found a way to attach images directly to calendar events unfortunately. It might be possible to store the images separately and include a reference to the image in the calendar event though.
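Something along these lines might work for that workaround; this is only a rough sketch, and the calendar entity, stored image path, and the response variable fields are placeholders for whatever your setup uses:

action: calendar.create_event
target:
  # placeholder calendar entity
  entity_id: calendar.llm_vision_events
data:
  summary: "{{ response.title }}"
  # append a pointer to the separately stored snapshot (placeholder path)
  description: "{{ response.response_text }} Image: /media/llmvision/snapshot.jpg"
  start_date_time: "{{ now().isoformat() }}"
  end_date_time: "{{ (now() + timedelta(minutes=5)).isoformat() }}"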
Thought the same thing. Less than ideal, but it is what it is for now.
It should technically be possible to use Ollama with Extended OpenAI Conversation, as Ollama has OpenAI-compatible endpoints. I can't get it to use tools properly, however. If someone figures that out, please feel free to share!
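For anyone trying this, the wiring is just pointing Extended OpenAI Conversation at Ollama's OpenAI-compatible endpoint during setup. Roughly these values, assuming a default local Ollama install (the field names are descriptive, not literal YAML):

  Base URL: http://localhost:11434/v1
  API key:  ollama    (any non-empty string works; Ollama ignores it)
  Model:    qwen2.5   (or another tools-capable model you have pulled)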
I know that Qwen2.5 has tools support. I'll give it a try and see what happens. What I'm stuck on, however, is the spec portion; that doesn't exist for the Ollama integration. Any ideas on how to specify that?
Any luck with this?
Did anyone ever fix the truncating of the text?
I'm trying to use the Data Analyzer and doing the following in an action, but I keep getting the error "unsupported sensor entity type". What am I doing wrong here? Sorry for the obtuse question; I'm pretty new to HA.
include_filename: true
target_width: 1280
max_tokens: 5
temperature: 0.1
provider: xxxxxx
message: how many packages are at the door?
image_file: /media/Snapshot.jpg
sensor_entity: input_text.last_package_delivery
remember: true
Hi, I created a custom component on HACS that will allow you to convert JPG to PNG.
Hello,
I'm wondering if the blueprint can be used with global conditions. For example, in some cases I don't want the LLM to analyze photos if someone is home, so I'd like a condition so that it only runs as normal when everyone is "not home".
I thought of making a separate automation just to turn this one on/off based on presence, and that could be OK (a rough sketch of that is below).
Thanks
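A minimal sketch of that on/off approach, assuming the automation and presence entity names below (they're placeholders for your own):

alias: Pause AI analysis while someone is home
triggers:
  - trigger: state
    entity_id: zone.home
actions:
  - if:
      # zone.home's state is the number of people currently home
      - condition: numeric_state
        entity_id: zone.home
        above: 0
    then:
      - action: automation.turn_off
        target:
          # placeholder: the automation created from the blueprint
          entity_id: automation.ai_event_summary
    else:
      - action: automation.turn_on
        target:
          entity_id: automation.ai_event_summary
mode: single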
Hi, am I the only one who can't get Ollama to analyze video with the blueprint? I tried the latest version, 1.3.7, and now the automation never triggers. I tried adding a zone and objects, and also tried without them, and nothing triggers.
Is there any way I could get the image analyser to work on a robot vacuum image entity? I'm able to select the entity in Developer Tools, but when I try I just get the error "Failed to perform the action llmvision.image_analyzer. Unknown error". I know it's all set up correctly, as the stream analyser has no problem responding to the camera feeds.
action: llmvision.image_analyzer
data:
  include_filename: false
  target_width: 1280
  max_tokens: 100
  temperature: 0.2
  generate_title: false
  expose_images: false
  expose_images_persist: false
  message: What can you tell me about this image?
  image_entity:
    - image.vac_map
I'm getting notifications very late if my phone is lying on my desk; only when I pick it up, 10-20 minutes later, do they arrive. Is there a way to add ttl and priority to the notification data?
service: notify.mobile_app_*
data:
  message: Test
  data:
    ttl: 0
    priority: high
Source: Mobile App Notifications Are Slow (Android) - #51 by dshokouhi
Your example is correct; make sure the app also has permission to run freely in the background. Battery optimization and other power settings may be at play too.
Pixel 8 Pro, all battery optimizations are off, and all other settings I've found, including Persistent, have been enabled.
Most notifications seem OK, but the one from this Blueprint is consistently late. That’s why I’m hoping there’s a way to add ttl and priority to it so I don’t have to “Take Control” and lose the capability of getting updates.
Mine didn't trigger either, with Frigate on the same version. Not sure if it's a bug or just not supported yet, but removing the template from the enabled key in the MQTT trigger got the automation firing. I wasn't using Camera mode anyway, so I just swapped the template for "true". I had other issues with it not showing the image in the notification, though, so I didn't continue with it.
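For anyone wanting to try the same workaround, the edit was roughly this in a local copy of the blueprint's Frigate trigger (the topic and the original template are shown only as an illustration):

triggers:
  - trigger: mqtt
    # Frigate's default events topic
    topic: frigate/events
    # was a Jinja template tied to the blueprint's mode input; swapped for a literal true
    enabled: true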
I have this same question. Not sure how to set up the automation so it runs on motion detection.
Has anyone else's notifications just stopped? It had been running fine for a while, then all of a sudden nothing; it doesn't trigger. Running Frigate mode, LLM Vision v1.3.7. Did I miss a change?
My blueprint-based YAML is below. Unfortunately, it just does not work:
alias: AI Event Summary Side
description: ""
use_blueprint:
  path: valentinfrlch/event_summary.yaml
  input:
    input_mode: Camera
    remember: true
    notify_device:
      - 1864308c42629f5750e2022c3f6b6ff7
    camera_entities:
      - camera.side_fluent
    motion_sensors:
      - binary_sensor.side_motion
    cooldown: 5
    provider: 01JGYTZWQ1ENRF45ES7XQF1BP6
    max_tokens: 50
    model: gemini-1.5-flash
I get this error when running it manually:
Logger: homeassistant.components.automation.ai_event_summary_side
Source: components/automation/__init__.py:664
Integration: Automation (documentation, issues)
First occurred: 2:17:21 PM (5 occurrences)
Last logged: 2:37:55 PM
Error rendering variables: UndefinedError: 'dict object' has no attribute 'entity_id'
I did not, but I did successfully set up a dual notification system - one with text, one with the image and a link to the camera. I did not use the blueprint so it may look a little different.
alias: AI Image Analysis - Driveway (Rick Sanchez)
description: >-
  Detect person or animal, analyze with LLM Vision, and send notifications as
  Rick Sanchez
triggers:
  - entity_id:
      - binary_sensor.driveway_person
      - binary_sensor.driveway_animal
    to: "on"
    trigger: state
actions:
  - delay:
      seconds: 1
  - data:
      remember: true
      include_filename: true
      target_width: 1280
      max_tokens: 300
      temperature: 0.5
      generate_title: true
      expose_images: true
      expose_images_persist: true
      provider: 01JGYEKA641GKVAY7FVV2V6F09
      model: gemini-1.5-flash
      message: >
        You are Rick Sanchez from the cartoon TV show "Rick and Morty." You have
        a part time job as a security camera monitor for this smart home, you
        will be asked to give brief summaries of what you see on camera. You use
        the paychecks from this job to fund your intergalactic adventures, and
        you are annoyed that this is necessary. Two adults named Matt and
        Tatiana who are 41 years old and are fans of the "Rick and Morty" TV
        show are your audience, so you can be as vulgar or profane as the real
        Rick Sanchez would be without worrying about children overhearing your
        responses or anyone being offended. Be funny. You are encouraged to use
        sarcastic jokes and brag about your intelligence. Since your responses
        will be part of a phone notification, please use words and not emojis
        and keep the length of your report limited to 255 characters. Summarize
        who is on camera or what is happening in the camera feed, but do not
        describe the scene! If nothing is happening, say so in a funny way.
      image_entity:
        - camera.driveway_fluent
    response_variable: ai_analysis
    action: llmvision.image_analyzer
  - data:
      message: Driveway Camera
      data:
        image: "{{ state_attr('camera.driveway_fluent', 'entity_picture') }}"
        actions:
          - action: URI
            title: Open Camera Feed
            uri: /lovelace/cameras2
        priority: high
        ttl: 0
    action: notify.all_phones
  - data:
      title: "{{ ai_analysis.title }}"
      message: "{{ ai_analysis.response_text }}"
      data:
        priority: high
        ttl: 0
    action: notify.all_phones
  - delay:
      seconds: 120
mode: single
max_exceeded: silent
I'm on my lunch break at work at the moment, so I had time to paste you the code but not time to fully explain it. Ping me later if you need me to, and I'll try to help you out with it.