Generative AI camera snapshot notification

I’ve created a camera snapshot notification blueprint, based around the README in the AdamGit69/code-snippets repository on GitHub.

It allows you to select the following options:

  • Camera
  • Motion entity
  • Device to send a notification to
  • Number of snapshots to take
  • Change the AI prompt

It then uses Google Generative AI to comment on what changes it has observed between the snapshots, with a 500 ms delay between each image.
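The flow described above can be sketched roughly as follows. This is only an illustrative sketch, not the actual blueprint: the entity IDs, file paths, snapshot count and prompt are placeholders, and depending on your Home Assistant version the image parameter of `generate_content` may be `image_filename` or `filenames`.

```yaml
# Illustrative sketch of the blueprint's core action sequence.
sequence:
  - repeat:
      count: "{{ num_snapshots }}"          # blueprint input: number of snapshots
      sequence:
        - service: camera.snapshot
          target:
            entity_id: camera.front_door    # blueprint input: camera
          data:
            filename: "/config/www/snapshots/snap_{{ repeat.index }}.jpg"
        - delay:
            milliseconds: 500               # 500 ms between each image
  - service: google_generative_ai_conversation.generate_content
    data:
      prompt: "Describe what has changed between these snapshots."
      image_filename:                       # `filenames:` on newer versions
        - /config/www/snapshots/snap_1.jpg
        - /config/www/snapshots/snap_2.jpg
    response_variable: ai_response
  - service: notify.mobile_app_my_phone     # blueprint input: notify device
    data:
      message: "{{ ai_response.text }}"
```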

[Import blueprint badge – My Home Assistant]

Hi Andy McInnes,

Very nice design, sir.
Two things, however.
One, people will be wanting the fancy my-link thing from you to load this. Please consider setting this up for them.
Create a link – My Home Assistant.
Two, I would be interested in seeing if this would work with Ollama locally as opposed to going to the cloud. Do you have the ability to test/code that?
I would consider trying to do that but you have the original and expertise. I would have to use yours and get it running and understand it before I could plug my Ollama instance in and see how it does.

I haven’t tried anything with Ollama yet, but it would be interesting to have it completely local, not least because I’d like to do a lot of AI detection and I’m pretty sure I’ll soon hit the free usage cap. Do you have any details on an Ollama setup in HA?

I personally set Ollama up on a Linux machine with a GPU. There are lots of ways to do that; you’ll need to search for what fits your hardware.
Then there is the Ollama integration to make an assistant:
Ollama - Home Assistant.

I am currently using it as a local ChatGPT replacement to stay local, not using the HA connection (yet). I haven’t had the time to figure it out with summer going on.

I’ve found an Ollama add-on and installed it. You wouldn’t happen to know which model would be well suited for generative image analysis and, if such a thing exists, also good for Home Assistant in general?

There is the add-on, but then the HA server itself has to have the big video card (GPU). Personally I just loaded Ollama on a gaming machine that sits idle most of the time; the Ollama integration I mentioned can then talk to that and act as an assistant.

I’ve tried a few models, llama3 and llava-llama3; the first just kept timing out and the latter gave me this:

“Sorry, I had a problem talking to the Ollama server: llava-llama3:latest does not support tools”

My Home Assistant box is only a mini-ITX with an i5 (integrated graphics) and 16 GB of RAM, so it hasn’t really got the grunt I think these models need.

Integrated graphics are going to struggle; it needs a dedicated GPU.

When I run a trace, I get an error at the AI step: “Error: Error generating content: 404 Gemini 1.0 Pro Vision has been deprecated on July 12, 2024. Consider switching to different model, for example gemini-1.5-flash.”

I tried with the same free-tier account in a browser and I do have access with gemini-1.5-flash.

I have set the model as “models/gemini-1.5-flash-latest” in Google Generative AI Conversation and have reloaded the integration several times. Is there anywhere else that I need to update?

I struggle with the documentation regarding the versions and updates; it would be great if there were just one readme file.
Thank you, though.

“Error: Error generating content: 404 Gemini 1.0 Pro Vision has been deprecated on July 12, 2024. Consider switching to different model, for example gemini-1.5-flash.”

I am still blocked on this version mismatch. Everything else looks good: the snapshots are being taken, HA has been reloaded several times, and it works great in the Gemini web page.

I tried adding model: 'models/gemini-1.5-pro-002' in the blueprint under data for the google_generative_ai_conversation.generate_content service, but I don’t think that action accepts that parameter. The only place I can find to change the model is in the integration itself, and I have already changed it there.
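For what it’s worth, a minimal call to that action looks roughly like the sketch below; the prompt and path are placeholders, and depending on your Home Assistant version the image parameter is `image_filename` (older) or `filenames` (newer). There is no per-call `model:` parameter; the model is selected in the integration’s configuration options.

```yaml
# Minimal generate_content call (illustrative sketch).
# Note: no `model:` key here; the model comes from the integration's options.
service: google_generative_ai_conversation.generate_content
data:
  prompt: "What changed between these images?"
  image_filename: /config/www/snapshots/snap_1.jpg  # `filenames:` on newer versions
response_variable: ai_response
```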

What am I missing?