I just want to start off by saying I'm new to Home Assistant and Frigate in general. Over the past couple of days I've gotten my setup to the point where Frigate is recording and Home Assistant sends me basic notifications when a person, car, dog, etc. is detected.
I'm now looking into adding an AI overview to expand the setup, but it's simply not working. I've got all the prerequisites set up, including the LLM Vision integration with a Gemini API key. I used a blueprint to set up my automation, and it looks like the following:
However, when I manually run it, I get a basic notification on my phone stating:
Home Assistant
Motion detected
has detected activity
and that's all I get. People coming up to the doorbell don't trigger an alert or anything.
If it matters, the camera entity I'm using, "Doorbell", should be a Frigate entity, I believe. I can also select the Reolink doorbell camera itself, but it produces the same error.
I assume the Trigger State or perhaps the Motion Sensor entries are incorrect. When I try different motion sensor entries and run the automation manually, it doesn't run at all and nothing happens.
Can someone please advise? I feel like I'm losing my sanity here. I've tried various Gemini models as well.
I apologize; the above has been fixed. It turns out it takes a couple of minutes for the notification to arrive and isn't instant like I was expecting. I'm now trying to get the LLM Vision timeline to work, but it isn't saving events at all.
Was there a change in the language settings?
In the past I got the text in German, but for a few weeks now the text has only been in English, even though the prompt is in German. How can I get the German text back?
It probably changed when the system prompt was introduced. The system prompt applies to all calls and is useful for general information and behavior. You can change it in the “Memory” settings.
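For example, a line like this in the system prompt (the wording is just a suggestion) should pin the output language even when the per-call prompt is written in another language:

Always respond in German, regardless of the language of the prompt.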
@valentinfrlch When I "Take Control" of the blueprint to analyze it, I see that you are including the Apple iOS-only flag "interruption-level":
interruption-level: >-
  {{ 'passive' if notification_delivery == 'Dynamic' else 'active' }}
Could you also add the Android equivalents, the ttl: and priority: tags? Otherwise, if my phone is sitting on my desk, I don't get notifications until I pick it up.
service: notify.mobile_app_*
data:
  message: Test
  data:
    ttl: 0
    priority: high
As far as I know, the way to do it is to defer the 'remembering' until you can evaluate the result, and then use the separate 'remember' action if the result warrants it.
This works for me - but I don’t use the blueprint, and instead rolled my own automation …
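The rough shape of it is below. Treat it as a sketch only: the entity names and provider ID are placeholders, I'm assuming the analyzer's response comes back under response_text, and the fields I pass to llmvision.remember (title, summary) are from memory of the docs, so check the actions in Developer Tools before copying anything.

alias: Doorbell - analyze first, remember later
trigger:
  - platform: state
    entity_id: binary_sensor.doorbell_person   # placeholder sensor
    to: "on"
action:
  # Run the analysis without saving anything to the timeline yet
  - service: llmvision.image_analyzer
    data:
      provider: YOUR_PROVIDER_ID               # placeholder provider ID
      model: gemini-1.5-flash
      message: >-
        Describe who or what is at the door in one sentence.
        If nothing of interest is visible, answer exactly "nothing".
      image_entity: camera.doorbell            # placeholder camera entity
      remember: false
    response_variable: analysis
  # Only remember and notify if the result is worth keeping
  - if:
      - condition: template
        value_template: "{{ 'nothing' not in (analysis.response_text | lower) }}"
    then:
      - service: llmvision.remember            # field names are my best recollection
        data:
          title: Doorbell event
          summary: "{{ analysis.response_text }}"
      - service: notify.mobile_app_myphone     # placeholder notify target
        data:
          message: "{{ analysis.response_text }}"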
I was just going to ask about notification updates; it looks like you made some for Android. Is iOS coming? Specifically, I'd like notifications to persist in the notification tray until I dismiss them.
Thanks, incredibly useful and versatile integration here!
Having some challenges with Gemini not returning the correct response, however. In my case it's not that it's failing to process (although that does occasionally happen), but that it's processing what looks like a very clear image and giving the incorrect answer to the question of how many bins it can see (3 rather than 4!). Interestingly, when using the Google UI directly it always seems to get the answer correct, although it clearly takes longer to do so than the API response takes from LLM Vision. I switched to the Flash 2.5 model and this is a little better, but it's still making more mistakes than I would have expected given the relative simplicity of the question I'm asking.
Does anyone have any thoughts on tuning the parameters for Gemini so that it tries a bit harder to get the correct answer, rather than responding quickly with an incorrect one?
edit: One thing I've just done is increase the resolution of the sampled image from 640x480 to 2560x1920 (the camera's maximum), so we'll see if this makes a difference…
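In case it's useful to anyone else, these are the knobs I'm experimenting with in the analyzer call. The provider ID and camera entity below are placeholders, and I'm assuming detail: high is accepted the same way detail: low is:

service: llmvision.image_analyzer
data:
  provider: YOUR_PROVIDER_ID        # placeholder provider ID
  model: gemini-2.5-flash
  message: >-
    Count the bins visible in this image. Look carefully and count each
    bin individually before answering with a single number.
  image_entity: camera.driveway     # placeholder camera entity
  detail: high                      # request more image detail than low
  target_width: 1920                # keep more resolution before upload
  temperature: 0.1                  # lower temperature for a more deterministic answer
  max_tokens: 50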
Until December, the code below worked fine and I received a ‘reasonable’ description of the photo. I recently upgraded the analyzer and now I only get the description ‘a movement has been detected’. What could be the reason?
remember: false
include_filename: false
target_width: 1280
detail: low
max_tokens: 100
temperature: 0.2
expose_images: false
provider: 01JG3R7V7xxxxxPADFM9CXC78
model: gemini-1.5-pro
message: >-
  Describe what you see in one sentence. If you see a person, describe
  what he/she looks like. What is the current date and time?
image_file: /config/www/images/snapshots/snapshot_voordeur_foscam.jpg
Hi @valentinfrlch, thank you for this integration. I'm trying to get it working with my Google Nest cameras; however, I get a description that says the image is dark. Is there any way to make this integration work with my cameras?
As an additional note, the cameras are working. They appear dark initially, but when I click on one and press play, the image shows up.