Possibly a lame question, but how/where can I see which version of the Weather Forecast Blueprint I currently have installed?
I can see on GitHub that the latest is 2025318, but which version do I have?
I've found that if you use Google Generative AI (Gemini) instead of ChatGPT, it doesn't pronounce the *'s. However, it does spell out km/h instead of saying "kilometers per hour". I'd love to get that corrected.
Put an addition near the top of your prompt reminding the LLM that
You are servicing a voice interface and should NOT use markdown under any circumstances; it produces confusing output when read aloud.
It's not perfect, but it calms it down a lot. The problem is the LLM is generating markdown text elements and your TTS isn't filtering them (it's a known issue with the default Piper).
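If you'd rather scrub on the HA side instead of (or in addition to) prompting, a small pre-TTS filter works too. A minimal sketch — the regexes and function name are mine, not part of the blueprint or of Piper:

```python
import re

def strip_markdown(text: str) -> str:
    """Remove common markdown tokens that a TTS engine would read aloud."""
    text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)  # [label](url) -> label
    text = re.sub(r"[*_`#]+", "", text)                   # emphasis, headings, code ticks
    return re.sub(r"[ \t]{2,}", " ", text).strip()        # collapse leftover whitespace

print(strip_markdown("**Today:** *sunny*, high of `25`"))
# -> Today: sunny, high of 25
```

You'd run the LLM's reply through something like this before handing it to the TTS step; it's crude, but markdown tokens are regular enough that it catches most of what gets pronounced.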
Hey, overall I'm loving the large LLM weather blueprint. I'm using OpenWeatherMap as the source data and Google Generative AI for the LLM side.
It works fairly well, but it persists in spelling out KM/H instead of speaking "kilometres per hour". I've tried adding "pronounce km/h as kilometres per hour" to the default four time prompts, but the output is unchanged.
Also, if one uses ChatGPT, there are many colons and asterisks that get pronounced. I've not found a way to stop that when using ChatGPT.
How does one edit the base prompt that the script sends to the LLM?
I'm experienced with HA, but quite new to this LLM stuff, as I just built a few Wyoming Satellites a few days ago.
Also, many thanks for all you do!!
You have multiple things going on. Spelling out km/h is a different problem than markdown. We can clamp markdown with one statement because it's all part of one construct.
The default prompt is the one inside the settings, on the Configure button under the integration. My markdown clamp is literally the third thing listed.
Alternatively, you can use a different TTS engine that filters it.
The km/h issue would require a different tack and won't be universal across LLMs. You could try "don't abbreviate" (as a generalization) in the prompt, but it's probably too broad.
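If prompting turns out to be too unreliable, the substitution can also be done mechanically before the text reaches TTS. A rough sketch — the table and function are illustrative, not part of any blueprint:

```python
# Illustrative pre-TTS substitution table; extend with whatever units
# your weather source emits.
SPOKEN_UNITS = {
    "km/h": "kilometres per hour",
    "mph": "miles per hour",
    "hPa": "hectopascals",
}

def expand_units(text: str) -> str:
    """Replace unit abbreviations with their spoken forms."""
    for abbr, spoken in SPOKEN_UNITS.items():
        text = text.replace(abbr, spoken)
    return text

print(expand_units("Winds at 20 km/h gusting to 35 km/h"))
# -> Winds at 20 kilometres per hour gusting to 35 kilometres per hour
```

Unlike a prompt instruction, this is deterministic, so it behaves the same regardless of which LLM is behind the pipeline.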
That said, both are technically out of the scope of Fez's blueprints. So if you want to chase it down, you'll want to open a separate thread.
Makes sense… I've updated the description to contain "Pronounce km/h as kilometres per hour." and that fixes it nicely. Rather than fight with markdown, I'll use Google Generative AI, as it understands that markdown should not be pronounced.
Greatly appreciated!
Hi all, just thought I would throw this script into the mix as well.
I have found that my Gemini voice pipeline struggles with volume commands. Simple commands such as "turn it down" often have it asking me which media player I want to target and how much I want to turn it down by. It then often seems to have difficulty accessing the existing volume level, resulting in it either maximising or muting the volume. No amount of prompt modification seemed to fix this. It should be simpler!
This script works in a similar way to the Music Assistant script (in fact, it is set up to use Music Assistant media players). Expose it to your voice pipeline and you should find that stating "turn it up" or similar will raise the volume by 0.1 on the media player in the room you're in. It should handle all other volume/muting requests too.
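The "raise by 0.1" behaviour described above boils down to a clamped step on the player's 0.0–1.0 volume level, which is why the script avoids the maximise/mute failure mode. Roughly, in my own sketch (not the script's actual code):

```python
def step_volume(current: float, delta: float = 0.1) -> float:
    """Step a media player's volume, clamped to the 0.0-1.0 range HA uses."""
    return max(0.0, min(1.0, current + delta))

print(step_volume(0.95))        # "turn it up" near max -> clamps to 1.0
print(step_volume(0.5, -0.25))  # "turn it down a lot" -> 0.25
```

Because the script reads the current level itself and only applies a bounded delta, the LLM never has to reason about absolute volume values, which is where the pipeline was going wrong.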
Thanks for these, but I can't get it to work either with a fully local voice pipeline or one that's all using providers. The reply is always just "Done" when I ask what the weather is. Any ideas on where I went wrong? I'm using the built-in weather provider in HA, if that changes anything. Thanks!
@Shawneau Does it trigger the automation when you ask that?
I'd check that in the logbook? Sorry, I'm about a week into this…
Go to Settings > Automations and search for the automation in the list. It will show the date and/or time it last triggered.
If it did trigger, you can access the traces of the automation by clicking on the 3-dot button next to it. That will give an indication of what's going wrong.
It did not trigger.
FWIW, when I disable/delete this automation I can ask Voice Assistant for the weather and it will give me the temp and basic conditions; I think that functionality comes out of the box with HA.
My voice pipeline works for everything else. My local one uses Ollama with Gemma 3 with tools, with Assist turned off, and local STT and TTS via faster-whisper and Kokoro respectively, via API.
@Shawneau I can reproduce it, but the strange thing is that I don't have an issue in Dutch, using the same blueprint.
It has also been logged on GitHub
I hope to have some time to look into this tomorrow
Ok great much appreciated!
Just starting to play around with this for Music Assistant. Had an issue where it told me audibly that it couldn't figure out what album I meant, but it played it anyway. If I look in the debug info, I see an error regarding the shuffle boolean status. Not sure if that's a real issue, or just that the LLM I'm using (qwen2.5-3b) is not powerful enough.
It seems your request to play the album 'Accelerated Evolution' in the
kitchen has encountered a glitch. Could you please provide more
details? Perhaps there's a different album or artist you're interested
in, or maybe another area where you'd like this music to be played?
And for the shuffle setting, could it be that we need something
simpler? Let's make sure the request is clear and our system can
understand it properly.
invalid boolean value <function shuffle at 0x7f882f82cb80> for
dictionary value @ data['shuffle']
Full details:
- type: intent-start
data:
engine: conversation.qwen2_5_3b
language: en
intent_input: " Play the album, Accelerated Evolution, in the kitchen."
conversation_id: 01JW4PWE6HJD4S4ZV724HN89V1
device_id: c6d476133936d444524825dda8a2c54f
prefer_local_intents: false
timestamp: "2025-05-25T21:56:54.952317+00:00"
- type: intent-progress
data:
chat_log_delta:
role: assistant
tool_calls:
- tool_name: llm_script_for_music_assistant_voice_requests
tool_args:
area:
- kitchen
media_id: Accelerated Evolution
media_type: album
id: 01JW4Q693WKXVGAG0B5BMCN8HA
content: ""
timestamp: "2025-05-25T21:57:10.396086+00:00"
- type: intent-progress
data:
chat_log_delta:
content: ""
timestamp: "2025-05-25T21:57:10.553978+00:00"
- type: intent-progress
data:
chat_log_delta:
role: tool_result
agent_id: conversation.qwen2_5_3b
tool_call_id: 01JW4Q693WKXVGAG0B5BMCN8HA
tool_name: llm_script_for_music_assistant_voice_requests
tool_result:
error: MultipleInvalid
error_text: >-
invalid boolean value <function shuffle at 0x7f882f82cb80> for
dictionary value @ data['shuffle']
timestamp: "2025-05-25T21:57:11.127533+00:00"
- type: intent-progress
data:
chat_log_delta:
role: assistant
content: >-
It seems your request to play the album 'Accelerated Evolution' in the
kitchen has encountered a glitch. Could you please provide more
details? Perhaps there's a different album or artist you're interested
in, or maybe another area where you'd like this music to be played?
And for the shuffle setting, could it be that we need something
simpler? Let's make sure the request is clear and our system can
understand it properly.
timestamp: "2025-05-25T21:57:19.494895+00:00"
- type: intent-end
data:
processed_locally: false
intent_output:
response:
speech:
plain:
speech: >-
It seems your request to play the album 'Accelerated Evolution'
in the kitchen has encountered a glitch. Could you please
provide more details? Perhaps there's a different album or
artist you're interested in, or maybe another area where you'd
like this music to be played? And for the shuffle setting, could
it be that we need something simpler? Let's make sure the
request is clear and our system can understand it properly.
extra_data: null
card: {}
language: en
response_type: action_done
data:
targets: []
success: []
failed: []
conversation_id: 01JW4PWE6HJD4S4ZV724HN89V1
continue_conversation: false
timestamp: "2025-05-25T21:57:19.495460+00:00"
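For anyone hitting the same trace: `<function shuffle at 0x...>` means the validator received a Python function object where it expected `true`/`false`, which typically happens when a variable or template named `shuffle` is passed through without being rendered to a boolean. A simplified stand-in for that validation step (not Home Assistant's actual validator code):

```python
def validate_boolean(value):
    """Simplified stand-in for the boolean check that failed in the trace."""
    if isinstance(value, bool):
        return value
    raise ValueError(
        f"invalid boolean value {value} for dictionary value @ data['shuffle']"
    )

def shuffle():  # a function object, like the one that leaked into the call
    pass

try:
    validate_boolean(shuffle)
except ValueError as err:
    print(err)  # invalid boolean value <function shuffle at 0x...> ...
```

So the error is about what the script handed to the service call, not about the LLM's understanding of the request, which matches the fix landing in the blueprint below.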
Should be fixed now, you can redownload the blueprint to update.
Best to post that on the Music Assistant GitHub. The shuffle functionality was added by another maintainer, so he might know better how to fix it.
I had a look anyway, and I think it should be fixed now.
Thank you for the fix! Just a related question: I assume for the full LLM weather script, your LLM Conversation Agent in settings must have its "Assist" checkbox on for this to work, right? The script is firing now, but it's taking a very long time. I have a feeling the LLM is getting too much context to deal with; does anyone have best practices for keeping the number of tokens down? I'm on Ollama with a 24 GB RTX 5000 card, so it's decently fast, but not when the context window is fully used. I'm using a Gemma 3 27B that's tuned for tool use.
