That is a known problem with OpenAI GPT-4o mini.
As I haven't found it here in the thread or the repo:
Do you have experience to share which models are better with stuff like this?
Gemini Flash would even be cheaper and a possible alternative I guess.
After my first tests, the large GPT-4o model is most likely too expensive, with 2 kids in the house using voice assistants for all their music so far.
(Also, thanks for your work on the voice commands for Music Assistant btw. Will test them in the next few days as well.)
I also had trouble asking OpenAI for the days of the week. See here
I had a similar problem with using Extended OpenAI Conversation and playing music.
So what I ended up with, after a bit of trial and error, was a script.
I wrote a script that takes my music request and passes it on to an official OpenAI conversation agent to play the music or handle any related command. Then I added a spec to call the script and pass on the request.
Same should work for the weather or any of the other blueprints.
Script
```yaml
alias: Forward Music Request to ChatGPT
mode: single
fields:
  request:
    description: The text to pass to conversation.[YOUR AGENT HERE]
    example: "Play Queen in Men Cave"
sequence:
  - service: conversation.process
    data:
      agent_id: conversation.[YOUR AGENT HERE]
      text: "{{ request }}"
    response_variable: _function_result
```
And the spec:
```yaml
- spec:
    name: forward_music_request
    description: >
      Pass the user's music request to conversation.chatgpt
      and return the local ChatGPT agent's response.
    parameters:
      type: object
      properties:
        request:
          type: string
          description: >
            The entire user request, e.g. "Play Queen in Men Cave."
      required:
        - request
  function:
    type: script
    sequence:
      - service: script.forward_music_request_to_chatgpt
        data:
          request: "{{ request }}"
        response_variable: _function_result
```
Thank you so much for these blueprints. Calendar and weather work great with my local Ollama server. As the LLM I am using mistral-small-24b, if that is relevant somehow.
Thank you @TheFes for all your hard work on this!
I feel like I need to ask some really dumb questions, because I can't seem to get the scripts working properly.
Here's what I've got:
- HA Voice PE
- Ollama integration with local server
- Currently running HA instance.
The Voice PE can handle requests for HA just fine, turn off the lights, etc.
I've downloaded your weather template, added my weather forecast entity, saved the script as `script.weather`, set the script description to the example you've provided, and exposed `script.weather` to Assist.
Created the voice pipeline, utilizing Ollama as the conversation agent with a capable model that has permission to access HA via Assist; STT and TTS are both set. Assigned this pipeline to the Voice PE.
When I ask "what's the weather at noon today" I get the following: Sorry, I had a problem talking to the Ollama server: model requires more system memory (7.3 GiB) than is available (6.2 GiB) (status code: 500)
Which sounds like it's trying to run Ollama locally, on the HA instance, even though the Ollama instance is hosted elsewhere and working properly (in other tools).
More importantly, I'm not understanding how the voice pipeline would know to call `script.weather` (exposed) for all weather-related requests. That's the glue I'm not understanding. Does `script.weather` need a special alias or something?
Thanks, and sorry for the dumb question.
What brings you to that conclusion? Can't it be that the system on which your Ollama instance runs doesn't have sufficient memory?
All exposed scripts are exposed to the LLM, the LLM knows scripts can be used as tools, and that scripts can have fields which can be used as settings.
Based on the script description the LLM determines if the script is useful for the voice command, and based on the descriptions for the script fields it knows how to use these fields.
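As a minimal sketch of how that looks in practice (the entity IDs, script name, and wording here are assumptions for illustration, not taken from the actual blueprint), a script the LLM can discover might be defined like this:

```yaml
# Hypothetical weather script: the LLM reads "description" to decide when
# to call it, and the field descriptions to learn how to fill parameters.
script:
  get_weather:
    alias: Get weather forecast
    description: >
      Use this script to answer any question about the weather or the
      forecast, for example "what's the weather tonight?".
    fields:
      when:
        description: The requested time period, e.g. "tonight" or "tomorrow".
        example: "tonight"
    sequence:
      - service: weather.get_forecasts
        target:
          entity_id: weather.forecast_home  # assumed forecast entity
        data:
          type: hourly
        response_variable: forecast
      # Return the forecast data to the calling conversation agent
      - stop: "Forecast retrieved"
        response_variable: forecast
```

The key point is that the `description` fields are the only "glue": there is no special alias or registration step beyond exposing the script to Assist.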
I can fire up open-webui (also using the same Ollama instance), or Enchanted (Mac/iOS), ask a question to the same configured model as in HA, and get a response at reasonable speed. Also, the server running Ollama has way more than 8 GB of memory, and free. Although HA does have 8 GB of memory, which led me to think it was trying to process locally.
If this is a network-related issue between HA and the Ollama server, that's got nothing to do with your script, and I can troubleshoot that on my own.
Just seems odd that the model in ollama works fine everywhere else except HA. Also, not a problem with your script.
Just to clarify, and sorry for the dumb question: by description, you do mean the description field in the script, correct? There's not another description the pipeline is looking for?
Any advice on how to troubleshoot this? The memory message is new as of today, and yesterday the script wasn't being called either, even with a script description in place.
Thanks for the help, and sorry for the dumb questions, just confused on how exactly the pipeline should work.
@xstrex I just fumbled through this myself. Yes, the "description field of the script". I'd imported the script from the blueprint but did not give it a proper description (even though the instructions were right in my face), and the conversation agent would not call it.
I'm using this for my shopping list. Everything seems to work very well, except I can't mark an item as completed with Assist once I've bought it. Each time I tried, it replied back with "done" or something similar, but it turned out it had added that item to the Shopping List instead, with "-" in front of the item name.
Can you please kindly advise on what I should do or if the script is supposed to work as a way to complete the item with Assist?
Thank you and looking forward to hearing from you.
There is no support for marking items as completed in the script. I could possibly add that, I'll have a look.
Are you using the LLM script, or the full local automation? Because the full local automation needs an exact match with the list item, so I expect that to be hit or miss (and probably more miss than hit).
The LLM can take the list of items, and match your input to the items. So if your shopping list is:
- apples
- cookies
- banana
- eggplant
- eggs
I expect that the LLM can do things like "I purchased all the fruit from the shopping list, remove them from the list"
(of course after I create a script to support that).
Thanks for the super fast reply. I'm using the Full LLM Script.
Really looking forward to the update.
Thank you for these great scripts =)
Got the met.no forecast to read out the current weather as of when asked, but it can't seem to read the day-to-day forecast that the integration lists. So if I ask what the weather will be tonight, it's unable to answer.
It always fails for me on HA 2025.2.5 (Supervisor 2025.03.0) when creating the automation or script
New automation setup failed
> Your new automation has saved, but waiting for it to setup has timed out. This could be due to errors parsing your configuration.yaml, please check the configuration in developer tools. Your automation will not be visible until this is corrected, and automations are reloaded. Changes to area, category, or labels were not saved and must be reapplied.
I already have a local voice pipeline with whisper and piper that uses ChatGPT. What should I use then?
My issue was this: New automation setup failed on 2025.1.2
Thank you, glad I wasn't the only one. And yes, it was staring me in the face as well! My brain couldn't understand "how could the LLM read a script description and know what to do with it", but that's actually, exactly, what it does.
These blueprints are absolutely awesome. Thanks and well done. I asked Nabu for a "rockfish recipe and put the ingredients on my list" and a second later I was told "completed"… !!! Kudos
Do you have a recommendation for how I can push the LLM to use the hyperlocal condition sensors that I have, while still relying on the forecast entity as usual? I have hyperlocal conditions data thanks to a neighbor's weather station (temp, wind speed, etc.), and I live in a microclimate that limits the accuracy of larger forecasts.
It seems combining local conditions with a forecast could probably be done with prompting, or alternatively by combining my hyperlocal sensors with a normal forecast entity (weather.forecast_home). I started to try and bash together the local sensor data with the general forecast entity, but then wondered if this has already been done smoothly elsewhere, or if there is a better way. Ideas welcome. Thanks
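One sketch of the second approach (the source entity IDs here are placeholders for whatever your neighbor's station exposes) is to wrap the hyperlocal readings in template sensors with descriptive names, expose those to Assist, and then tell the LLM in the prompt to prefer them over the forecast entity for current conditions:

```yaml
# Hypothetical template sensors wrapping a neighbor's weather station.
# Descriptive names help the LLM understand what the values represent.
template:
  - sensor:
      - name: "Hyperlocal Outdoor Temperature"
        unit_of_measurement: "°C"
        device_class: temperature
        state: "{{ states('sensor.neighbor_station_temperature') }}"
      - name: "Hyperlocal Wind Speed"
        unit_of_measurement: "km/h"
        device_class: wind_speed
        state: "{{ states('sensor.neighbor_station_wind_speed') }}"
```

With these exposed, a prompt instruction along the lines of "prefer the Hyperlocal sensors for current conditions; use weather.forecast_home only for future forecasts" should steer the model.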
Ran into a problem with the Option 1 weather forecast automation:
The automation returns the following error:
Error: ValueError: Template error: average got invalid input '([],)' when rendering template '{{ day_part_forecast | selectattr('cloud_coverage', 'defined') | map(attribute='cloud_coverage') | list | average | default(0) | round(value_round) }}' but no default was specified
Did some debugging; it looks like my weather provider (NWS) doesn't have a cloud_coverage property, and it's failing to catch that scenario. Here's a snippet of what one of the day_part_forecast entries looks like from that provider:
```yaml
day_part_forecast:
  - datetime: '2025-03-17T18:00:00-04:00'
    precipitation_probability: 0
    condition: clear-night
    wind_bearing: 315
    temperature: 57
    dew_point: 26
    wind_speed: 15
    humidity: 30
```
The selectattr seems like it's just returning an empty list since the attribute isn't present, and then eventually failing. Is there an easy way for me to exclude properties that my forecast provider does not offer?
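One possible workaround, sketched below: Home Assistant's `average` filter accepts a default argument, which is returned instead of raising on an empty list (unlike `| default(0)`, which only catches undefined values, not a ValueError). `value_round` is assumed to be defined elsewhere in the blueprint, as in the original template:

```yaml
# Hypothetical fix: pass the default to average() itself so an empty
# list (provider without cloud_coverage) yields 0 instead of an error.
{{ day_part_forecast
   | selectattr('cloud_coverage', 'defined')
   | map(attribute='cloud_coverage')
   | list
   | average(0)
   | round(value_round) }}
```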
For this and all similar requests… you simply tell it.
You define the sensors. You describe them so the LLM knows what they are… Add as much context data as you can to the aliases for your weather entities. You need to be able to ask the LLM "hey, tell me about my hyperlocal weather" and have it get it right.
Once you can do that, you tell it to PREFER those over your other weather providers. Then tell the LLM its options and what your preferences are in plain English in your prompt, as if you were explaining it to a 6-year-old.
Now the exact words? That depends wholly on what else is in your prompt.
Search for and read my Friday's Party post for details if you're interested.
Thank you, this script works great. The spoken weather forecast is one of the few things I still needed Google Assistant for.
One problem: I use ChatGPT for the blueprint. If I ask for the weekend weather, the result lists Saturday and Sunday with weather conditions, which is great, but it puts ** before and after the dates, like here:
The weather this weekend in Hamburg will be as follows: - **Saturday, 29 March**: It will be rainy, with a temperature of about 12 °C and a low of 5 °C. Cloud coverage is 74%, and about 1 mm of precipitation is expected. The wind blows at 18 km/h, with gusts up to 40 km/h. - **Sunday, 30 March**: It will also be rainy, with similar temperatures of about 12 °C and a low of 5 °C. Cloud coverage is 79%, and about 2 mm of precipitation is expected. The wind blows more strongly at 20 km/h, with gusts up to 42 km/h.
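Those ** markers are Markdown bold syntax leaking into the TTS output. One possible workaround, sketched here with assumed entity IDs and an assumed `agent_response` variable from a prior `conversation.process` step, is to strip the markers before handing the text to TTS (asking the model in the prompt not to use Markdown formatting is another option):

```yaml
# Hypothetical post-processing step: remove Markdown bold markers from
# the LLM response before speaking it. Entity IDs are placeholders.
- service: tts.speak
  target:
    entity_id: tts.piper  # assumed TTS entity
  data:
    media_player_entity_id: media_player.living_room  # assumed player
    message: >-
      {{ agent_response.response.speech.plain.speech | replace('**', '') }}
```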