šŸ’™ Blueprints for Voice Commands (Weather, Calendar, Music Assistant)

Possibly a lame question, but how/where can I see which version of the Weather Forecast Blueprint I currently have installed?
I can see on GitHub that the latest is 2025318, but which version do I have?

I’ve found that if you use Google Generative AI (Gemini) instead of ChatGPT, it doesn’t pronounce the asterisks. However, it does spell out km/h instead of saying ā€œkilometres per hourā€. I’d love to get that corrected.

Put an addition near the top of your prompt reminding the LLM that:
You are servicing a voice interface and should NOT use markdown under any circumstances; it produces confusing output when read aloud.

It’s not perfect, but it calms it down a lot. The problem is that the LLM is generating markdown text elements and your TTS isn’t filtering them (it’s a known issue with the default Piper).
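If the prompt alone doesn't fully tame it, another option is to strip the markdown yourself before the text reaches TTS. This is just a sketch of the idea (a standalone filter function, not part of the blueprint or of Piper), handling only the most common constructs an LLM emits:

```python
import re

def strip_markdown(text: str) -> str:
    """Remove common markdown markers so TTS reads the text cleanly.

    Minimal sketch: handles bold/italic, inline code, headings and
    bullets; real LLM output may use more constructs than these.
    """
    text = re.sub(r"\*{1,3}([^*]+)\*{1,3}", r"\1", text)   # *em* / **bold**
    text = re.sub(r"_{1,3}([^_]+)_{1,3}", r"\1", text)     # _em_ / __bold__
    text = re.sub(r"`([^`]+)`", r"\1", text)               # `inline code`
    text = re.sub(r"^#{1,6}\s*", "", text, flags=re.M)     # # headings
    text = re.sub(r"^\s*[-*+]\s+", "", text, flags=re.M)   # - bullets
    return text

print(strip_markdown("**Today:** *sunny*, high of 21"))
# -> Today: sunny, high of 21
```

Where exactly you'd hook this in depends on your setup; it's the kind of filtering a markdown-aware TTS engine does for you.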

Hey, overall I’m loving the large LLM weather blueprint. I’m using OpenWeather Map as the source data and Google Generative AI for the LLM side.

It works fairly well, but it persists in spelling out km/h instead of speaking ā€œkilometres per hourā€. I’ve tried adding ā€œpronounce km/h as kilometres per hourā€ to the default four time prompts, but the output is unchanged.

Also, if one uses ChatGPT, there are many colons and asterisks that get pronounced. I’ve not found a way to stop that when using ChatGPT.

How does one edit the base prompt that the script sends to the LLM?

I’m experienced with HA, but quite new to this LLM stuff as I just built a few Wyoming Satellites a few days ago.

Also, many thanks for all you do!!

You have multiple things going on here. Spelling out km/h is a different issue from markdown. We can clamp markdown with one statement because it’s all part of one construct.

The default prompt is the one in Settings, behind the Configure button under the integration. My markdown clamp is literally the third thing listed.

Alternatively, you can use a different TTS engine that filters it.

The km/h issue would require a different tack and won’t be universal across LLMs. You could try ā€œdon’t abbreviateā€ (as a generalization) in the prompt, but it’s probably too broad.
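A more deterministic alternative to prompting is to expand abbreviations in a post-processing step before TTS, so the fix doesn't depend on which LLM you use. A sketch with a hypothetical substitution table (extend it with whatever your weather provider actually emits):

```python
import re

# Hypothetical pre-TTS substitution table; these three entries are
# illustrative, not an exhaustive list of weather abbreviations.
EXPANSIONS = {
    r"\bkm/h\b": "kilometres per hour",
    r"\bmph\b": "miles per hour",
    r"\bhPa\b": "hectopascals",
}

def expand_abbreviations(text: str) -> str:
    """Replace unit abbreviations with their spoken forms."""
    for pattern, spoken in EXPANSIONS.items():
        text = re.sub(pattern, spoken, text, flags=re.IGNORECASE)
    return text

print(expand_abbreviations("Winds of 20 km/h from the west"))
# -> Winds of 20 kilometres per hour from the west
```

The case-insensitive flag means ā€œKM/Hā€ and ā€œkm/hā€ are both caught, which matters since LLMs are inconsistent about casing.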

That said, both are technically out of the scope of Fez’ blueprints. So if you want to chase it down, you’ll want to open a separate thread.

Makes sense… I’ve updated the description to contain ā€œPronounce km/h as kilometres per hour.ā€ and that fixes it nicely. Rather than fight with markdown, I’ll use Google Generative AI, as it understands that markdown should not be pronounced.

Greatly appreciated!

Hi all, just thought I would throw this script into the mix as well.

I have found that my Gemini voice pipeline struggles with volume commands. Simple commands such as ā€œturn it downā€ often have it asking me which media player I want to target and how much I want to turn it down by. It then often seems to have difficulty reading the existing volume level, resulting in it either maximising or muting the volume. No amount of prompt modification seemed to fix this. It should be simpler!

This script works in a similar way to the Music Assistant script (in fact it is set up to use Music Assistant media players). Expose it to your voice pipeline and you should find that stating ā€œturn it upā€ or similar will raise the volume by 0.1 on the media player in the room you’re in. It should handle all other volume/muting requests too.
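The relative-volume behaviour described above (step by 0.1, never overshoot past mute or maximum) can be sketched as a small clamping function. This is an illustration of the logic only, not the script's actual code; in Home Assistant the result would be passed to the `media_player.volume_set` service for the resolved player:

```python
def step_volume(current: float, direction: str, step: float = 0.1) -> float:
    """Return the new volume level for a relative 'turn it up/down' request.

    Clamps to the valid media_player range of 0.0..1.0 so a request near
    the limits maxes out or mutes cleanly instead of erroring.
    """
    delta = step if direction == "up" else -step
    return max(0.0, min(1.0, round(current + delta, 2)))

print(step_volume(0.95, "up"))   # clamps at 1.0
```

Resolving ā€œthe room you’re inā€ to a specific media player is the part the script handles via the satellite's area, which the LLM otherwise keeps asking about.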

Thanks for these, but I can’t get it to work with either a fully local voice pipeline or one that uses cloud providers throughout. The reply is always just ā€œDoneā€ when I ask what the weather is. Any ideas on where I went wrong? I’m using the built-in weather provider in HA, if that changes anything. Thanks!

@Shawneau Does it trigger the automation when you ask that?

I’d check that in the Logbook? Sorry, I’m about a week into this…

Go to Settings > Automations and search for the automation in the list. It will show the date and/or time it last triggered.
If it did trigger, you can access the traces of the automation by clicking the 3-dot button next to it. That will give an indication of what’s going wrong.

It did not trigger.

FWIW, when I disable/delete this automation I can ask Voice Assistant for the weather and it will give me the temp and basic conditions, I think that functionality comes out of the box with HA.

My voice pipeline works for everything else. My local one uses Ollama with Gemma 3 (with tools) and Assist turned off, plus local STT and TTS via faster-whisper and Kokoro respectively, over the API.

@Shawneau I can reproduce it, but the strange thing is that I don’t have an issue in Dutch, using the same blueprint.

It has also been logged on GitHub

I hope to have some time to look into this tomorrow

Ok great, much appreciated!

Just starting to play around with this for music assistant. Had an issue where it told me audibly that it couldn’t figure out what album I meant, but it played it anyway. If I look in the debug info, I see an error regarding shuffle boolean status. Not sure if that’s a real issue, or just that the LLM I’m using (qwen2.5-3b) is not powerful enough.

    It seems your request to play the album 'Accelerated Evolution' in the
    kitchen has encountered a glitch. Could you please provide more
    details? Perhaps there's a different album or artist you're interested
    in, or maybe another area where you'd like this music to be played?
    And for the shuffle setting, could it be that we need something
    simpler? Let’s make sure the request is clear and our system can
    understand it properly.

    invalid boolean value <function shuffle at 0x7f882f82cb80> for
    dictionary value @ data['shuffle']

Full details:

  - type: intent-start
    data:
      engine: conversation.qwen2_5_3b
      language: en
      intent_input: " Play the album, Accelerated Evolution, in the kitchen."
      conversation_id: 01JW4PWE6HJD4S4ZV724HN89V1
      device_id: c6d476133936d444524825dda8a2c54f
      prefer_local_intents: false
    timestamp: "2025-05-25T21:56:54.952317+00:00"
  - type: intent-progress
    data:
      chat_log_delta:
        role: assistant
        tool_calls:
          - tool_name: llm_script_for_music_assistant_voice_requests
            tool_args:
              area:
                - kitchen
              media_id: Accelerated Evolution
              media_type: album
            id: 01JW4Q693WKXVGAG0B5BMCN8HA
        content: ""
    timestamp: "2025-05-25T21:57:10.396086+00:00"
  - type: intent-progress
    data:
      chat_log_delta:
        content: ""
    timestamp: "2025-05-25T21:57:10.553978+00:00"
  - type: intent-progress
    data:
      chat_log_delta:
        role: tool_result
        agent_id: conversation.qwen2_5_3b
        tool_call_id: 01JW4Q693WKXVGAG0B5BMCN8HA
        tool_name: llm_script_for_music_assistant_voice_requests
        tool_result:
          error: MultipleInvalid
          error_text: >-
            invalid boolean value <function shuffle at 0x7f882f82cb80> for
            dictionary value @ data['shuffle']
    timestamp: "2025-05-25T21:57:11.127533+00:00"
  - type: intent-progress
    data:
      chat_log_delta:
        role: assistant
        content: >-
          It seems your request to play the album 'Accelerated Evolution' in the
          kitchen has encountered a glitch. Could you please provide more
          details? Perhaps there's a different album or artist you're interested
          in, or maybe another area where you'd like this music to be played?
          And for the shuffle setting, could it be that we need something
          simpler? Let’s make sure the request is clear and our system can
          understand it properly.
    timestamp: "2025-05-25T21:57:19.494895+00:00"
  - type: intent-end
    data:
      processed_locally: false
      intent_output:
        response:
          speech:
            plain:
              speech: >-
                It seems your request to play the album 'Accelerated Evolution'
                in the kitchen has encountered a glitch. Could you please
                provide more details? Perhaps there's a different album or
                artist you're interested in, or maybe another area where you'd
                like this music to be played? And for the shuffle setting, could
                it be that we need something simpler? Let’s make sure the
                request is clear and our system can understand it properly.
              extra_data: null
          card: {}
          language: en
          response_type: action_done
          data:
            targets: []
            success: []
            failed: []
        conversation_id: 01JW4PWE6HJD4S4ZV724HN89V1
        continue_conversation: false
    timestamp: "2025-05-25T21:57:19.495460+00:00"
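The `MultipleInvalid` error in the trace suggests the LLM passed a function reference rather than true/false for the `shuffle` argument, and the script's schema validation rejected it. Home Assistant validates service data with voluptuous-style validators; a simplified stand-in for a strict boolean validator (not Home Assistant's actual implementation) shows why such a value fails:

```python
def boolean(value):
    """Simplified stand-in for a strict boolean validator (illustrative,
    not Home Assistant's real code): accept bools, common true/false
    strings, and 0/1; reject anything else, such as a stray function
    object passed for 'shuffle'."""
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        v = value.lower()
        if v in ("true", "yes", "on", "1"):
            return True
        if v in ("false", "no", "off", "0"):
            return False
    elif isinstance(value, int):
        if value in (0, 1):
            return bool(value)
    raise ValueError(f"invalid boolean value {value!r}")

def shuffle():  # a stray callable, like the one named in the trace
    pass

try:
    boolean(shuffle)
except ValueError as err:
    print(err)  # same style of error message as in the trace
```

So the fix is on the schema/tool-definition side (or a more capable model), not in the user's phrasing.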

Should be fixed now, you can redownload the blueprint to update.

Best to post that on the Music Assistant GitHub. The shuffle functionality was added by another maintainer, so he might know better how to fix it.

I had a look anyway, and I think it should be fixed now.

Thank you for the fix! Just a related question: I assume that for the full LLM weather script, your Conversation Agent LLM must have its ā€œAssistā€ checkbox enabled in Settings for this to work, right? The script is firing now, but it’s taking a very long time. I have a feeling the LLM is getting too much context to deal with. Does anyone have any best practices for keeping the number of tokens down? I’m on Ollama with a 24GB RTX5000 card, so it’s decently fast, but not when the context window is fully used. I’m using Gemma 3 27B that’s tuned for tool use.