Seems like nobody has ever gotten this part working, since nobody has ever answered this question. I'm in the same boat.
I'm in the same boat too: no images are created in that folder, @valentinfrlch
See some errors below.
2024-12-22 01:08:38.407 WARNING (MainThread) [custom_components.llmvision.request_handlers] Couldn’t fetch frame (status code: 404)
2024-12-22 01:08:39.419 WARNING (MainThread) [custom_components.llmvision.request_handlers] Couldn’t fetch frame (status code: 404)
2024-12-22 01:08:40.423 WARNING (MainThread) [custom_components.llmvision.request_handlers] Failed to fetch http://192.168.1.192:8123/api/frigate/notifications/1734847696.293872-hif9nn/clip.mp4 after 2 retries
2024-12-22 01:08:40.423 ERROR (MainThread) [homeassistant.components.automation.camera_ai] Camera AI: Analyze event: choice 1: Error executing script. Error for call_service at pos 1: Failed to fetch frigate clip 1734847696.293872-hif9nn
2024-12-22 01:08:40.423 ERROR (MainThread) [homeassistant.components.automation.camera_ai] Camera AI: Error executing script. Error for choose at pos 4: Failed to fetch frigate clip 1734847696.293872-hif9nn
2024-12-22 01:08:40.425 ERROR (MainThread) [homeassistant.components.automation.camera_ai] Error while executing automation automation.camera_ai: Failed to fetch frigate clip 1734847696.293872-hif9nn
Do you have the Frigate integration installed? The summary cannot be generated because the clip cannot be fetched.
Yes of course.
@valentinfrlch I think you need to include a way to specify where the clips and images are stored, in both camera and Frigate mode. There are dozens of people here with the same issue, and it all comes down to where these images are (or aren't) stored and which URL is used, especially for external access.
I wrote another automation to detect packages at the front door with your LLM Vision and AI, and I specified a storage location for the images on a path that is accessible via the HA URL, as opposed to /media/frigate or wherever it stores them now.
- Failed to fetch http://192.168.1.192:8123/api/frigate/notifications/1734847696.293872-hif9nn/clip.mp4 after 2 retries
- Failed to fetch http://192.168.1.192:8123/api/frigate/notifications/1734964359.820529-o88p5t/clip.mp4 after 2 retries
- Failed to fetch http://192.168.1.192:8123/api/frigate/notifications/1734966869.65704-fmy3oc/clip.mp4 after 2 retries
There are no clips with those names in /media/frigate/clips; in fact that directory has no clips at all, although it does contain /media/frigate/clips/previews/Front_Door.
Clips are not stored on the HA instance; they are fetched directly from Frigate through the API the Frigate integration provides. Usually this error occurs when the clip hasn't been saved to disk yet. There is currently a PR to increase the number of retries, which should help in this case. If you follow the URL from the error logs on a PC connected to the same network, do you see the clip?
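For anyone curious why more retries help here: Frigate writes the clip to disk shortly after the event fires, so a fetch triggered immediately by the automation can race it and get a 404. A minimal sketch of retry-with-delay logic (the function and parameter names here are hypothetical, not the component's actual code):

```python
import time


def fetch_with_retries(fetch, url, retries=2, delay=1.0):
    """Try to fetch a clip, retrying on failure.

    `fetch` is any callable that returns the clip bytes or raises
    when the server responds with an error such as 404.
    """
    for attempt in range(1, retries + 1):
        try:
            return fetch(url)
        except Exception:
            if attempt == retries:
                raise RuntimeError(
                    f"Failed to fetch {url} after {retries} retries"
                )
            # Give Frigate time to finish writing the clip to disk.
            time.sleep(delay)
```

With only two retries and a short delay, a slow disk write can still outlast the retry window, which is why bumping the retry count helps.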
I am running frigate addon locally on the same server. I see clips here.
/media/frigate/clips/previews/Front_Door

```
➜ Front_Door ls
1734984000.013309-1734987600.0912.mp4  1734987600.0912-1734991200.039924.mp4
```
Like this, but none of it translates into anything that produces an image or clip via the API.
Finally got the blueprint working, but how can I use the remember events feature without using Extended OpenAI?
v1.3.5 of the blueprint (and LLM Vision) is out!
The blueprint now uses AI-generated titles for events if remember is enabled.
v1.3.5 also finally fixes a bug that prevented users with special characters in their phone's name from using the blueprint.
Check out the full changelog here: v1.3.5 Release notes.
To update the blueprint you’ll have to re-import it! Also make sure to update LLM Vision first.
Happy New Year, everyone!
Consider working Discord in as a notification target. I took control and added an action myself, but it's kludgy and I'd prefer to keep using your blueprint. Thanks for all your work!
```yaml
action: notify.hass  # (discord)
data:
  message: "{{ response.response_text }}"
  target:
    - "discord channel"
  data:
    images:
      - /config/www/llmvision/front_porch_profile000_mainstream_0.jpg
      - /config/www/llmvision/front_porch_profile000_mainstream_1.jpg
      - /config/www/llmvision/front_porch_profile000_mainstream_2.jpg
      - /config/www/llmvision/front_porch_profile000_mainstream_3.jpg
```
I'm using Ollama locally. I can see everything is running, and I can see Ollama processing. However, I only get either the name of the camera or sometimes "Doorbell Seen" as the text of the description. I also get a second notification that says "Image attachment issue" even though I do not have that option checked.
Thanks for the blueprint.
I'm using v1.3.5 with the Groq LLM and Frigate; all settings are at their default values.
I could only get "Person seen" with an image in the notification.
Here is the HA error logs:
Reolink Chime AI: Analyze event: choice 1: Error executing script. Error for call_service at pos 1: Error: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 4915200 and the array at index 1 has size 3686400
Reolink Chime AI: Error executing script. Error for choose at pos 4: Error: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 4915200 and the array at index 1 has size 3686400
Error while executing automation automation.reolink_chime_ai: Error: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 4915200 and the array at index 1 has size 3686400
I also checked the automation trace, and it looks like the image data was never sent to LLM Vision at all.
I have a separate frigate docker instance running outside HA.
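That "input array dimensions ... must match exactly" message is NumPy's `concatenate` error: two frames with different pixel dimensions are being stitched together. The two sizes in the log factor neatly into plausible frame resolutions (4915200 = 2560 × 1920, 3686400 = 2560 × 1440), which suggests frames from streams with different resolutions, though that factorization is a guess. A minimal reproduction and the usual fix, cropping to a common height before combining (illustrative only, not the component's actual code):

```python
import numpy as np

# Two "frames" with mismatched heights, like frames pulled from
# a main stream and a sub stream at different resolutions.
frame_a = np.zeros((1920, 2560, 3), dtype=np.uint8)
frame_b = np.zeros((1440, 2560, 3), dtype=np.uint8)

try:
    # Stacking side by side requires all non-concatenation axes to match.
    np.concatenate([frame_a, frame_b], axis=1)
except ValueError as err:
    print("concatenate failed:", err)

# Fix: bring both frames to a common height first (crop or resize).
h = min(frame_a.shape[0], frame_b.shape[0])
combined = np.concatenate([frame_a[:h], frame_b[:h]], axis=1)
print(combined.shape)  # (1440, 5120, 3)
```

If that diagnosis is right, making Frigate hand back frames from a single stream (or resizing before concatenation) would avoid the error.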
Love this blueprint, works great. Is there any way of having the notification sent to Pushover?
Thanks
Seems like it’s related to this issue.
I configured Anthropic Claude as the provider and want to specify the model, but I can't get it to work. Leaving it blank uses a GPT model.
Error: not_found_error: model: claude3.5-sonnet
I used exactly the model name specified in the setup instructions.
I fixed it by using: claude-3-haiku-20240307
Any tips on how to improve analysis of the importance level? I receive almost all motion-detected notifications, even though Important is switched ON.
Thank you for the blueprint!
My iOS notifications seem to be working correctly, but the generated text appears to be truncated after 2-3 lines.
Has anyone else had the same issue? Is there a character limit?
I just started playing with this. I am relatively inexperienced with HA.
Is there a way to test this? Or do I have to force my cat or dog out to trigger it?
Thanks.
Very much a newbie, so apologies if this comes out silly.
I managed to set up Gemini and use your blueprint to try to get notifications from my cameras. It seems the notifications are going through okay, but the AI processing is failing? Can I get some pointers on what to look at?
Here is the failing trace, it’s an unknown error so I’m lost!