🌦 Weather forecast access for LLM (eg ChatGPT or Gemini)

Introduction

For some time now, Large Language Models like OpenAI's ChatGPT or Google's Gemini can be used for voice commands, and they can even control your house. Home Assistant 2024.12 will even bring an option to prefer local processing of the command first, with the LLM acting as a fallback. This opens up a whole new world of possibilities.

If you expose a script to Assist, an LLM can access and use that script. Using fields, you can guide the LLM on how to use the script. With this in mind, I created a script which gathers weather forecast data from a weather entity (using the weather.get_forecasts action) and feeds that data to the LLM, so it can give a nice summary.

How to use it

  1. Use the import button to open your Home Assistant instance with the blueprint import dialog pre-filled.
  2. Create a script using the blueprint and select the weather entity you want to use. Note that this weather entity has to support both hourly and daily forecasts for the script to work properly. The default weather integration provided by Home Assistant (Met.no) supports both.
  3. Save the script. IMPORTANT: The LLM uses the description of the script to determine how to use it, so it is important to provide a good description. I use: "Fetches the weather forecast for either a part of a day, or one or more full days. In case the weather for the weekend is requested, this means Saturday and Sunday."
  4. IMPORTANT: Expose the script to Assist.

That’s it! You can now ask an LLM voice agent for the weather forecast, and it will tell you the forecast (provided the data is available in your weather entity).
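For reference, the core of such a script boils down to calling the weather.get_forecasts action and returning its response. This is only a minimal sketch of the idea (the entity ID and variable names are assumptions, and the actual blueprint adds fields, hourly/daily selection, and averaging):

```yaml
# Minimal sketch of the idea behind the blueprint — not its actual code.
# weather.forecast_home is an assumed entity ID; use your own.
sequence:
  - action: weather.get_forecasts
    target:
      entity_id: weather.forecast_home
    data:
      type: daily
    response_variable: forecast_data
  # Return the raw forecast data so the LLM can summarize it
  - stop: "Forecast fetched"
    response_variable: forecast_data
```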

Thanks to:

@jenova70 for providing the groundwork for the calendar script on which this script is also based


I removed the LLM instructions I initially added, and changed the setup so the data sent to the LLM is now an average over the entire requested period.

Thanks for this. Works great!

Hi! Cool automation. I seem to have an odd issue when I try to use this.

The script was configured as described, and I used the same description and weather provider. It seems to return the correct output when I run it from the developer menu. However, when I ask Gemini what the weather is for this weekend or an upcoming day, I get the following errors:

This corresponds to the following error in my logs:

This suggests it might not be filling out both inputs:

Hmm, yes indeed, I can build something in for that.
Let me think about it.
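One possible way to guard against a missing input is a Jinja default inside the script. This is only a hypothetical sketch with assumed field names (start, end) — not the blueprint's actual fix:

```yaml
variables:
  # Assumed field names — fall back to "now" / "end of the start day"
  # when the LLM leaves an input empty
  start_time: "{{ start | default(now().isoformat(), true) }}"
  end_time: >-
    {{ end | default((start_time | as_datetime)
       .replace(hour=23, minute=59).isoformat(), true) }}
```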

I’ve updated the blueprint 🙂

@TheFes I just tried it out and so far pretty good!

When I requested “what’s the weather this weekend” (good catch specifying the weekend days by the way) it returned this:

The weather forecast for this weekend (December 21-22, 2024) is as follows:

- **Condition:** Cloudy
- **Temperature:** High of 14°F, Low of -2°F
- **Wind Speed:** 12 mph
- **Wind Bearing:** 138°
- **Humidity:** 84%
- **UV Index:** 2
- **Precipitation:** 0 (no expected precipitation)

So Assist actually read out the asterisks. Do you handle that in your own conversation agent's initial prompt?

Yes, I changed the prompt of the LLM integration itself and told it to avoid special characters

Oh nice. Can you share your modifications? I tried this but it still included special characters and once told me the measurements in Metric (though I’m in the USA and it is Imperial)

This is the end of my modified default OpenAI Conversation Agent initial prompt:

Answer the user’s questions about the world truthfully. You are in the United States. Assume all units of measure are Imperial. When responding do not include asterisk or other punctuation (besides comma and period) as this will be read out loud to the user.

This is what I use

You are a voice assistant for Home Assistant.
Imagine you’re in a good mood. Respond honestly, short and to the point. Queries will be mostly in Dutch.

Do not use emoji in the response, prevent usage of special characters like “*” in responses as the responses are used in voice responses.

For me, the LLM keeps saying “I’ve no access to weather information”.

Did you expose the script to Assist?

Yes I did, and I also put in the description, but for me it doesn’t work.

Is the script triggered when you ask the LLM for the weather, and if so, are there any errors in the trace?

The script is not triggered if I ask for weather information.

If I run the script manually, this error shows up.
I also tried different weather services like:

  • buienradar
  • accuweather
  • met.no

Did you provide the start (and optionally end) of the forecast period when running it manually?

Yes I did.
If I run it from Developer Tools and give it a start and end time, then I get a response.
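For anyone else testing manually: in Developer Tools, a YAML-mode call looks roughly like this (the script name and field names are assumptions — check your own script's fields):

```yaml
action: script.fetch_weather_forecast   # assumed script name
data:
  start: "2024-12-21T00:00:00"
  end: "2024-12-22T23:59:59"
```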

Do you use Gemini as LLM? I also have some issues then.
It does use the script, but it doesn’t use the proper input for the selectors.

Yes, I use Gemini.

Are you using the default recommended settings in the config?
I tried with some different models, and it really differs per model whether it uses the script or not.
With the recommended settings it works for me. I did add some additional instructions to the selector descriptions, which hopefully makes it use the right input for the selectors. So you might want to update the blueprint as well.