🌦 Weather forecast access for LLM (e.g. ChatGPT or Gemini)

That worked from the UI without setting the end date. I still had to set the start date, which I think makes sense. However, it didn’t work from Assist. I tried both “What’s the weather for Wednesday” and “What’s the forecast for Wednesday”. Neither triggered the script. Aside from exposing it, is there something else I need to be doing for it?

EDIT: I may have gotten it working somehow. I randomly asked Assist to “turn on the forecast” (forecast is the alias I gave the script). Now it returns answers when I ask it “what’s the forecast [for X day, tomorrow, etc.]”.

EDIT 2: Nope. As soon as I end the Assist conversation and restart it, it no longer triggers the script.

Works in German too, nice! Amazing work @TheFes

And proof it was real (even though OpenAI was trying to hide it)

I used this description in German:

description: >-
  Liefert die Wettervorhersage entweder für einen Teil eines Tages oder für
  einen oder mehrere ganze Tage. Falls das Wetter für das Wochenende angefordert
  wird, bedeutet dies Samstag und Sonntag.

and these fields translated into German:

fields:
  start_of_period:
    selector:
      datetime: null
    name: Start of period
    description: >-
      Beginn des Zeitraums, für den das Wetter angefordert wird. Benutze eine
      isoformat datetime Zeichenfolge.
    required: true
  end_of_period:
    selector:
      datetime: null
    name: End of period
    description: >-
      Ende des Zeitraums, für den das Wetter angefordert wird. Benutze eine
      isoformat datetime Zeichenfolge.
    required: true

I use it in Dutch myself, with all descriptions still in English. There should be no need to translate it.

Maybe try removing the script and creating it again (do a HA core restart in between, just to make sure). Make sure not to change any name/entity_id before testing; changing those might mess things up.

Hmm, tried that to no avail. Something weird is going on, because I can’t even save the script from the blueprint; it has to be added manually. It must be something with my setup/Docker, because I tried other script blueprints and those don’t save either.

With some minor changes I removed the uv_index, humidity, and wind_bearing:

{% for item in
      weather_data[0].keys()
        | reject('eq', 'datetime')
        | reject('eq', 'wind_bearing')
        | reject('eq', 'humidity')
        | reject('eq', 'uv_index') %}

The result looks a bit more compact this way.
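For context, this is roughly how that fragment sits inside a complete loop. A minimal sketch you can paste into Developer Tools > Template; the weather_data value is made-up sample data standing in for the forecast list the script builds from the weather entity:

{% set weather_data = [
  {"datetime": "2024-01-15T12:00:00+00:00", "condition": "cloudy",
   "temperature": 4.5, "humidity": 82, "wind_speed": 18.4,
   "wind_bearing": 270, "uv_index": 1}
] %}
{# sample entry above; in the real script weather_data comes from the weather entity #}
{% for item in weather_data[0].keys()
    | reject('eq', 'datetime')
    | reject('eq', 'wind_bearing')
    | reject('eq', 'humidity')
    | reject('eq', 'uv_index') %}
{{ item }}: {{ weather_data[0][item] }}
{%- endfor %}

This should render only the condition, temperature, and wind_speed lines for the sample entry.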

How can I now tell the LLM to approximate the wind speed? I just want to know if there are high winds, when relevant. I’ve tried modifying the description, but this doesn’t seem to change anything.

Or should I program this directly into the above code snippet’s response?

You can put it in the configuration of the integration itself.
Something like: “If weather forecasts are requested, only mention the wind speed if there are high winds (above xx km/h).”


This is a bit shorter

{% for item in
      weather_data[0].keys()
        | reject('in', ['datetime', 'wind_bearing', 'humidity', 'uv_index']) %}
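If you want to sanity-check that filter on its own, something like this in Developer Tools > Template shows the effect (the key list is just sample data):

{{ ['datetime', 'condition', 'temperature', 'wind_speed', 'humidity', 'uv_index']
    | reject('in', ['datetime', 'wind_bearing', 'humidity', 'uv_index'])
    | list }}

which should render ['condition', 'temperature', 'wind_speed'].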

BTW, for me it never mentions stuff like this; it always ignores it in the response. But it will reduce the data sent to the LLM, and with that the token usage, so that’s still an advantage 🙂


What is the name of your script entity? Mine is script.llm_forecast. Should that have any bearing on whether it works with Assist?

I don’t think it affects the usage by the LLM, but mine is script.fetch_weather_forecast_data.

Got this working. Seems like it was the model; using llama3.1 made it work. It only gets tomorrow’s forecast, though.

Is this script somehow extendable to ask for the weather in a specific city? I would assume that the weather API is able to do that.

The script uses a weather entity and gets the data from that entity. If you have a weather integration for a specific city, you can use that entity and add to the script description that it should be used only for that specific city.
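For example, if such a script is built around a hypothetical weather.berlin entity, its description could spell out the restriction. A sketch of the wording, nothing more:

description: >-
  Provides the weather forecast for Berlin only. Use this script
  exclusively when the weather or forecast for Berlin is requested.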

Thanks for doing this! I have set up the script from the blueprint several times. I have named the script the same as you did and given it the same description. I have exposed it to Assist. It runs correctly in Developer Tools > Actions. I am running Ollama 3.2, locally. It never calls the script. I can’t figure out what I am doing wrong.

@ckhyatt Did you give Ollama access to your house? In other words, can it turn on lights and such?

I didn’t. My plan is to use the new VPE with Whisper/Piper/Rhasspy for home control and local Ollama for general knowledge queries. I am running the latest 2025.1.2, which supports local fallback. When I ask Assist a general knowledge question that should fall back to Ollama, it doesn’t fall back. It appears to be two different issues. While the weather is failing, I am asking Ollama directly, since fallback doesn’t work. I can’t imagine what I am doing wrong. Thanks!

@ckhyatt The LLM can only access scripts if it has access to your house. The script is “part” of your house

Thanks! That makes sense, although it doesn’t appear it will work with my intended design. Not that anything is wrong with what you have coded, of course. I changed the Ollama integration configuration to allow control. Now this error is logged:

Logger: homeassistant.components.script.fetch_weather_forecast_data
Source: helpers/script.py:2032
Integration: Script (documentation, issues)
First occurred: 3:39:55 PM (1 occurrences)
Last logged: 3:39:55 PM

fetch_weather_forecast_data: Error executing script. Error rendering template for variables at pos 3: UndefinedError: list object has no element 0

@ckhyatt Can you check the trace and see what the LLM used for the start and end of the period?
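That UndefinedError usually means weather_data ended up empty, for example when the requested period falls outside what the weather entity can provide. If you want the template to fail more gracefully, a guard along these lines could be wrapped around it (a sketch, not part of the blueprint as posted):

{% if weather_data | count > 0 %}
  {# safe to index the first forecast entry #}
  {% for item in weather_data[0].keys() | reject('eq', 'datetime') %}
  {{ item }}: {{ weather_data[0][item] }}
  {%- endfor %}
{% else %}
  No forecast data is available for the requested period.
{% endif %}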