LLM Vision: Let Home Assistant see!

I keep getting this error 500.
I can press run again and it works, but I still get this error every now and again.

Executed: 21 October 2024 at 09:15:45
Error: Fetch failed with status code 500
Result:
params:
  domain: llmvision
  service: stream_analyzer
  service_data:
    interval: 2
    duration: 1
    include_filename: false
    target_width: 1280
    detail: high
    max_tokens: 100
    temperature: 0.1
    provider: 01JAP03NNG4W9J7Y7J6QP7QSD4
    model: gpt-4o-mini
    message: describe what you see in a few words
    image_entity:
      - camera.camera1
  target: {}
running_script: false

Did you get all this set up? I have no clue what I’m doing and just want to display the response from an automation on a card.

The error occurs when trying to fetch the latest frame recorded by the camera.
Here’s something you can try: in Home Assistant, go to Developer Tools > States. Search for your camera and check its attributes; it should have an entity_picture attribute. Copy this and append it to your Home Assistant URL (e.g. http://homeassistant.local:8123/api/camera_proxy/camera.camera1?token=<accesstoken>).

This should show you the latest camera frame in your browser. Try refreshing a few times. Will this also produce a 500 error?
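If you’d rather not hunt through the attributes list, the same value can be read with a template (a minimal sketch, assuming the camera.camera1 entity from the service call above):

```yaml
# Paste into Developer Tools > Template.
# Returns the relative snapshot URL, e.g. /api/camera_proxy/camera.camera1?token=...
{{ state_attr('camera.camera1', 'entity_picture') }}
```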

Ok yeah, so it’s only the first time it tries to load. I can keep refreshing and it’s fine, but if I leave it a few seconds I get the error 500 again.

This means Home Assistant doesn’t keep the stream running and it takes too long to load it. You can force the camera stream to stay active in the background. This will however take up some resources on the server. Here’s how you can do that:

  1. Find your camera entity under Settings > Devices & services > Entities
  2. In the camera preview, click the settings cog
  3. Enable the setting called ‘Preload camera stream’
  4. Keep an eye on your CPU usage afterwards, as this takes up some resources

For more information, see this: Camera - Home Assistant

Update: This should be fixed in the next version. If a request fails it will try again.

With v1.2.1 the blueprint has gotten some big upgrades:
AI understands what happens in the video, decides whether you should be notified and sends you notifications with a preview and summary of what happened.
Using LLM Vision has never been easier!

[Image: notification preview]
Check out the post in Blueprint Exchange.


I’m trying to configure the llm vision component with Anthropic. I’m getting an invalid key error during setup. The key starts with “sk-ant-api03…”. I’m on the evaluation plan. Do I have to purchase a plan to use this?

Just realized that my remaining balance is $0.00. But the key should still work, shouldn’t it?

If I remember correctly you need to add some funds. I think you can get $5 for free though if you confirm your phone number.

I added some funds and the key was accepted.
Thanks


Thanks for this project. I’ve been having some fun with it.

I have an automation where, at night when the alarm is armed, it wakes me up if a person is detected on the driveway. However, this was always a pain, as rain or even reflections would trigger it even though I have BlueIris set up nicely. I could never get it to work properly.

Now I have it setup so that if blueiris thinks there is a person, it will ping the llm to double check before waking me up to let me know. Works 100% more reliably.
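For anyone wanting to do the same, the double-check step looks roughly like this (a sketch only; the entity names, the notify service and the response_text key are placeholders for my setup, not a drop-in config):

```yaml
# Rough sketch of the BlueIris-then-LLM double-check described above.
# Entity names, the notify service and the response_text key are
# placeholders — adjust them to your own setup.
alias: Verify driveway person with LLM Vision
trigger:
  - platform: state
    entity_id: binary_sensor.blueiris_driveway_person
    to: "on"
condition:
  - condition: state
    entity_id: alarm_control_panel.home
    state: armed_night
action:
  - service: llmvision.image_analyzer
    data:
      image_entity:
        - camera.driveway
      provider: <your provider config entry id>
      model: gpt-4o-mini
      message: "Is there a person in this image? Answer only yes or no."
      max_tokens: 5
      temperature: 0.1
    response_variable: result
  - condition: template
    value_template: "{{ 'yes' in (result.response_text | lower) }}"
  - service: notify.mobile_app_my_phone
    data:
      message: "Person confirmed on the driveway!"
```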


I had the same idea. I just implemented this using Frigate and LLM Vision.

In case others have Ring cameras and use this, some notes on its use (having just worked through it all with @valentinfrlch)

TL;DR: to get Ring + LLM Vision working together, use an action in an automation to save a short video clip to a local folder, then point LLM Vision at it. I always use the same file name, so it’s easy to point the LLM Vision action at the file.
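In action form, that looks something like this (a sketch only; the file path, entity id, duration and video_analyzer fields are assumptions — verify them against your setup and the LLM Vision docs):

```yaml
# Sketch of the record-then-analyze approach. The path, entity id,
# duration and video_analyzer fields are placeholders.
- service: camera.record
  target:
    entity_id: camera.ring_front_door
  data:
    filename: /config/www/ring_latest.mp4   # same name every time
    duration: 10
- delay: "00:00:15"   # give the clip time to finish writing
- service: llmvision.video_analyzer
  data:
    video_file: /config/www/ring_latest.mp4
    provider: <your provider config entry id>
    message: Describe what happens in this clip in a few words.
  response_variable: result
```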

First of all, you need to have Ring cameras, with the native Ring HA integration and the Ring-MQTT HACS add-on here installed. You need to have streaming video configured using Generic Camera(s) as per the instructions here, and that all needs to be working. If it’s not, I’m sure @valentinfrlch will agree that this thread isn’t the right place; ask in the dedicated thread for it here.

You also need rights to save files somewhere in your Home Assistant file store. If you choose anything but the config folder, you may need to permit writing to those folders. How this is done varies depending on how you run Home Assistant, but start here.

Now to the point of the post.

I have got LLM Vision working with Ring cameras, but since they are not typical CCTV cameras and thus don’t go through a system such as Frigate, there are some nuances to how compatible certain aspects of LLM Vision are with them. In short:

:x: Images
:white_check_mark: Video
:x: Streaming

The problems come from Ring, not LLM Vision, because of how Ring itself works. Detail below, but in summary (so far as I can tell) there is no way to get Ring to take a snapshot on demand, which means the snapshot image is useless for real-time analysis like this.

Streaming can’t ever work, because LLM Vision relies on taking several frames via the camera’s entity_picture attribute rather than processing a stream of video. But the Generic Camera entity doesn’t quite work like that when created to host the ring-mqtt RTSP stream. As per the instructions, you set the live stream to point at the RTSP stream exposed by the ring-mqtt add-on, but the snapshot attribute is pointed at the official Ring add-on’s snapshot attribute.

And hence images don’t work either, because the snapshot on Ring is not related to motion; it’s configured in the Ring app as an image taken every x minutes regardless of activity. The camera.snapshot action has no effect, as Ring doesn’t have a “take snapshot” function (and therefore the entity_picture attribute that the LLM Vision streaming function requires has the same limitation).
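For reference, the Generic Camera wiring described above looks roughly like this (host, port, camera id and entity names are placeholders from memory — follow the linked ring-mqtt instructions for the real values):

```yaml
# Rough sketch of the Generic Camera entity described above.
# Host, port, camera id and the snapshot entity are placeholders.
camera:
  - platform: generic
    name: ring_front_door
    stream_source: rtsp://<ring-mqtt-host>:8554/<camera_id>_live
    still_image_url: >-
      http://homeassistant.local:8123{{ state_attr('camera.front_door_snapshot', 'entity_picture') }}
```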

Some other pointers:

  • You could call past recorded events instead of the live stream by changing the path you query from ring-mqtt - see the ring-mqtt docs
  • Be aware that it can take a few seconds for Ring video to start to live stream from the point the stream is ‘viewed’ (including via actions like camera.record). Part of this is delay in connecting to Ring live streaming, some is buffering within HA itself. You can tell streaming to start using an action, which might shave a bit off that time
  • Within Ring, any HA live camera activity is shown as live viewing and is recorded like any other Ring activity. This means it is subject to the same limitations as the Ring app in terms of duration.
  • Beware if you have a battery-powered Ring camera (or you are precious about your bandwidth): once you start a live stream it may not always reliably stop (the forums are full of examples, but it’s unclear why it happens). You can issue a stop live stream action in HA, which helps prevent this happening.
  • If you want to include an image in a notification after LLM Vision runs, note that it is not included in the response variable LLM Vision returns. For anything but Ring you would normally include a snapshot, but since Ring snapshots are periodic, the snapshot won’t have any relevance to what was submitted for analysis. As of writing I don’t know of a workaround; however, @valentinfrlch is potentially going to help Ring users as a by-product of a debug function that can write the first submitted frame to a folder. Ring users can use that frame as the notification image we can’t otherwise get.

I’m using LLM Vision + gpt4o-mini to identify how my 3D printing is going, works great!

For another project, I want to send a history graph of a sensor value (weight of a person in bed during sleep, indicating sleep “quality” and movements). Does anyone know if it is possible to grab a history graph for a sensor and feed it to llmvision.image_analyzer?
edit: I solved this using my existing Grafana setup, where I can easily download a graph as PNG: Generating PNG images from a Grafana chart – Correct URL, Settings and Authentication | j3t.ch | Julien Perrochet's Blog

//Johan


v1.3 Event Memory

Have you ever wanted to ask your smart home whether your package has been delivered or the garbage has been collected yet? LLM Vision can now do just that!

v1.3 can remember events, so you can ask about them later. You need to set up ‘Event Calendar’ in the integration settings.
After you’ve completed the setup, you will see a calendar entity. This not only gives you a visual overview in the calendar view of your dashboard, but also allows for integration with other services.

To learn how to set this up, see this page in the docs:
https://llm-vision.gitbook.io/getting-started/asking-about-events

Awesome addon I must say. I’m using this blueprint for now to get going: AI Event Notifications. (https://llm-vision.gitbook.io/examples/examples/automations)

So basically it works, but first I get a notification with a picture when motion is detected. After a while I get an updated notification with an updated AI message about the scene. But the picture also gets updated, which means the person could already be out of view of the camera, so I basically get an empty picture without a person in it.

Is there a way to keep the original picture from the moment it detects motion?

Glad you find this useful!
v1.3 should solve that (it just got released — you should see an update in HACS soon). In the blueprint you can now choose between a live preview (what you have right now) and a ‘snapshot’.
You will have to re-import the blueprint to update it. (This will keep your automations.)

Check the discussion of the blueprint here: Camera, Frigate: Intelligent AI-powered notifications - #53 by valentinfrlch

Thanks for sharing this!
I am working on a data_analyzer that will take a graph or other ‘visual data’ as input along with a sensor and update its value.
Would you mind sharing your workflow? That would be helpful to improve the action.
Thanks!


I have the update. Great work!

One final issue. Is it possible to send the notification to multiple recipients? Now I can only choose my phone.

And another issue with the new blueprint. When I select snapshot, I get no picture. But the image is processed and I do get an AI response.

Sure! My setup is most probably more complicated than it needs to be. I just got all of it working and have not optimized the flow. Suggestions are welcome :). This is how it works:

  1. I run a separate Docker container with a Python script on my Home Assistant machine. This container provides an HTTPS endpoint (using Flask) that generates a PNG for a specific Grafana dashboard/panel; the image is returned in the response. There is also another endpoint that returns the last generated image (read from disk). Generating the PNG via the Grafana API can take quite some time depending on the requested size — in my case, more than 10 seconds. The Grafana API uses the Authorization Bearer header for the API key.
  2. In Home Assistant, when I want to analyze a graph, I first use the RESTful Command (with a custom timeout, 30s) to trigger the PNG graph generation. This is because the Downloader integration will timeout after 10s and does not support setting a custom timeout.
  3. Once the REST command returns, I use the Downloader to download the cached image provided by the python/docker container.
  4. Now with an image on disk accessible by Home Assistant, I use llmvision.image_analyzer with the image_file: /config/downloader/grafana-fetch/{{ trigger.event.data.filename }} to generate a description of the graph.
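Steps 2–4 as an action sequence look roughly like this (a sketch only; the rest_command name, URLs, paths and service fields are placeholders for my setup):

```yaml
# Rough sketch of steps 2-4. The rest_command (with timeout: 30) is
# defined separately in configuration.yaml; URLs and paths are placeholders.
- service: rest_command.generate_grafana_png
- service: downloader.download_file
  data:
    url: "http://<grafana-helper-host>:5000/latest"
    subdir: grafana-fetch
- service: llmvision.image_analyzer
  data:
    image_file: "/config/downloader/grafana-fetch/{{ trigger.event.data.filename }}"
    provider: <your provider config entry id>
    message: Summarize the trend shown in this graph.
  response_variable: result
```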

//Johan