🌦 Weather forecast access for LLM (e.g. ChatGPT or Gemini)

this:
entity_id: script.fetch_weather_forecast_data
state: 'off'
attributes:
last_triggered: '2025-01-14T18:51:36.479219+00:00'
mode: parallel
current: 0
max: 10
friendly_name: fetch_weather_forecast_data
last_changed: '2025-01-14T18:51:36.486194+00:00'
last_reported: '2025-01-14T18:51:36.486194+00:00'
last_updated: '2025-01-14T18:51:36.486194+00:00'
context:
id: 01JHK2GB4Y6Z1SDCJWH1WBCHNJ
parent_id: null
user_id: cb4709a0939c45699106c329f3e36892
end_of_period: '2025-01-15T00:00:00Z'
start_of_period: '2024-12-31T00:00:00Z'
context:
id: 01JHK8NKGHYQV5XC9654JGEJ2V
parent_id: null
user_id: cb4709a0939c45699106c329f3e36892
version: 1.3
weather_entity: weather.forecast_home
start: '2024-12-30T19:00:00-05:00'
end: '2025-01-14T19:00:00-05:00'
full_days: false

What did you ask?
It tried to get the weather from the 31st of December last year until tomorrow (the 15th of January).

What’s the forecast for tomorrow?

I just tried the same question.

Template variable error: list object has no element 0 when rendering '{% set weather_data = weather_response[weather_entity].forecast | selectattr('datetime', '>=', start) | selectattr('datetime', '<', end) | list %} {% set ns = namespace(combined={}) %} {% for item in weather_data[0].keys() | reject('eq', 'datetime')%} {% set combine = weather_data | map(attribute=item) | list %} {% set combine = combine | statistical_mode if combine[0] is string else combine | average | round %} {% set ns.combined = dict(ns.combined, **{item: combine}) %} {% endfor %} {{ dict(start_of_period=start, end_of_period=end, **ns.combined) }}'

this:
entity_id: script.fetch_weather_forecast_data
state: 'off'
attributes:
last_triggered: '2025-01-14T20:39:55.594924+00:00'
mode: parallel
current: 0
max: 10
friendly_name: fetch_weather_forecast_data
last_changed: '2025-01-14T20:39:55.598247+00:00'
last_reported: '2025-01-14T20:39:55.598247+00:00'
last_updated: '2025-01-14T20:39:55.598247+00:00'
context:
id: 01JHK8NKGHYQV5XC9654JGEJ2V
parent_id: null
user_id: cb4709a0939c45699106c329f3e36892
end_of_period: '2025-01-16T23:59:59'
start_of_period: '2025-01-15T00:00:00'
context:
id: 01JHKB8Q9PQSAG0GBJ04SXRZNB
parent_id: null
user_id: cb4709a0939c45699106c329f3e36892
version: 1.3
weather_entity: weather.forecast_home
start: '2025-01-15T00:00:00-05:00'
end: '2025-01-16T23:59:59-05:00'
full_days: true

Now it took two days, the 15th and the 16th, but that's an improvement. Let me have a look at where the template fails.
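The "list object has no element 0" part means the filtered forecast list came back empty, so weather_data[0] fails before anything else runs. A guard roughly like this (just a sketch against the same template variables, not the released script) would return an explicit message instead of erroring out:

{% set weather_data = weather_response[weather_entity].forecast
   | selectattr('datetime', '>=', start)
   | selectattr('datetime', '<', end) | list %}
{% if weather_data | count == 0 %}
  {# nothing in the requested window: report that instead of indexing an empty list #}
  {{ dict(start_of_period=start, end_of_period=end,
          error='no forecast entries in the requested period') }}
{% else %}
  {% set ns = namespace(combined={}) %}
  {% for item in weather_data[0].keys() | reject('eq', 'datetime') %}
    {% set combine = weather_data | map(attribute=item) | list %}
    {% set combine = combine | statistical_mode if combine[0] is string
       else combine | average | round %}
    {% set ns.combined = dict(ns.combined, **{item: combine}) %}
  {% endfor %}
  {{ dict(start_of_period=start, end_of_period=end, **ns.combined) }}
{% endif %}

That way the LLM at least gets told the period was empty rather than the whole script failing.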

Can you try this in Developer Tools > Actions and post the result?

action: weather.get_forecasts
data:
  type: daily
target:
  entity_id: weather.forecast_home

Thanks! This really points out how challenging it is interacting programmatically with an LLM, even for something relatively straightforward.

Failed to perform the action action: weather.get_forecasts data: type: daily target: entity_id: weather.forecast_home. Service action: weather.get_forecasts data: type: daily target: entity_id: weather.forecast_home does not match format . for dictionary value @ data['sequence'][0]['action']. Got 'action: weather.get_forecasts data: type: daily target: entity_id: weather.forecast_home'

Apologies. My error.

weather.forecast_home:
forecast:
- condition: sunny
datetime: “2025-01-14T17:00:00+00:00”
wind_bearing: 257
uv_index: 0.4
temperature: 40
templow: 33
wind_speed: 10.75
precipitation: 0
humidity: 34
- condition: sunny
datetime: “2025-01-15T17:00:00+00:00”
wind_bearing: 322.2
uv_index: 2.7
temperature: 34
templow: 19
wind_speed: 15.22
precipitation: 0
humidity: 38
- condition: partlycloudy
datetime: “2025-01-16T17:00:00+00:00”
wind_bearing: 212.5
uv_index: 2.6
temperature: 44
templow: 21
wind_speed: 10.94
precipitation: 0
humidity: 37
- condition: sunny
datetime: “2025-01-17T17:00:00+00:00”
wind_bearing: 266.3
uv_index: 0
temperature: 44
templow: 29
wind_speed: 10.94
precipitation: 0
humidity: 48
- condition: cloudy
datetime: “2025-01-18T17:00:00+00:00”
wind_bearing: 204.7
temperature: 46
templow: 31
wind_speed: 7.15
precipitation: 0.02
humidity: 74
- condition: cloudy
datetime: “2025-01-19T17:00:00+00:00”
wind_bearing: 112.7
temperature: 52
templow: 34
wind_speed: 5.16
precipitation: 0.06
humidity: 68

Time for bed over here; I'll have a further look tomorrow.

Thanks a bunch for all your help!

Oddly, now it works today. The only thing I did was go into Assist and unexpose all the entities except for your script.

If I ask another question, following the question about the weather, it appears to think I am asking about the weather again. Weird behavior.

The temperature it returned is the forecast low.

It looks like I finally have it working as expected. In the settings for the Ollama integration I changed Max History Messages from 20 to 1. I think the LLM was remembering the prior prompts and getting confused. For my use with VPE I am unlikely to need it to maintain a conversation thread. That’s not how we use Alexa currently. All I am trying to do is replicate the basic functionality we have now, but locally as much as possible. Thanks for the work you put into this and the support!

1 Like

I have been trying to trace the UndefinedError: list object has no element 0 error as well, as I am seeing it too.

The traces have been interesting and seem to suggest a mix of failures parsing the input variables and LLM conversation memory issues. For example, I just asked for the weather today at 4 pm; the script was called with that information, but once the variables were parsed it went looking for completely different information:

 start_of_period: '2025-01-17T16:00:00+02:00'
 ...
 version: 1.3
 weather_entity: weather.forecast_home
 start: '2025-01-17T09:00:00-05:00'
 end: '2025-01-17T10:00:00-05:00'
 full_days: false

I'm sure part of it is time zone handling, since Home Assistant runs in UTC and the values go in and out of local time. But I think the model probably also needs to be instructed to only take information from the current request, because when asking about single-day weather I was getting an almost rolling time frame that used dates from previous requests to create a range.

Thinking about it though, for the multi-day forecasts it might be good to force the request to start at 00:00:00… I found in my testing that this almost always resolved the element 0 issue.
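Something like this is what I mean (rough and untested, just assuming the script's start / end / full_days variables):

{% if full_days %}
  {# snap the requested window to local midnight so a multi-day request
     always starts at 00:00:00 and covers whole days #}
  {% set start = (start | as_datetime | as_local).replace(
       hour=0, minute=0, second=0, microsecond=0) %}
  {% set end = (end | as_datetime | as_local).replace(
       hour=0, minute=0, second=0, microsecond=0) + timedelta(days=1) %}
{% endif %}

With the start pinned to midnight there is always at least one daily forecast entry inside the window, which is probably why it cleared the element 0 error for me.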

@winston
Thanks for this investigation.

I have a few ideas to improve the script and also to give more guidance to the LLM. I will tinker with it a bit.

1 Like

I've tested a bit: with the "recommended settings" I don't get any weather information, but with "gemini pro 1.5 latest" the information is always available.

When I run the script manually, no error shows up and I get the expected result, but when I run it via Assist I get the error below:

AttributeError: 'NoneType' object has no attribute 'tzinfo'

Any idea where this is coming from / how to fix it?
(And my TZ is defined in HA general settings…)

Strange, this seems to be linked to whether local Assist is enabled or not…
Indeed, if it's not enabled then I don't get this error?! (Even though my TZ is set in HA, which should make a difference.)
Can we force which Assist a script should use / be exposed to?

This script can only be used by an LLM, local Assist doesn’t use it.

An LLM doesn't make the same decisions every time you ask a question; my guess is that it didn't provide the timezone information for the datetime.
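The gist of the fix is a defensive default: if the datetime comes in missing or unparseable, fall back to something sensible instead of ending up with None and then reading its tzinfo. Roughly like this (a sketch of the idea, not the actual script; start_dt / end_dt are just illustrative local names):

{% set start_dt = start | default(none) | as_datetime(none) %}
{% if start_dt is none %}
  {# no usable start from the LLM: default to the start of today, local time #}
  {% set start_dt = now().replace(hour=0, minute=0, second=0, microsecond=0) %}
{% else %}
  {% set start_dt = start_dt | as_local %}
{% endif %}
{% set end_dt = end | default(none) | as_datetime(none) %}
{% if end_dt is none %}
  {# no usable end: default to one day after the start #}
  {% set end_dt = start_dt + timedelta(days=1) %}
{% else %}
  {% set end_dt = end_dt | as_local %}
{% endif %}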

My new version will prevent such issues; I just need to do some final touches.

3 Likes