HA + MA no voice integration working, throws error

Most of the time, you want to request a song and then let radio_mode continue playing songs after it's finished. It often chooses similar songs by similar artists, but let's say I really don't like Leonard Cohen, or I'm just really in a Tom Waits mood; I want radio_mode to only select other Tom Waits songs to play after the initial song is finished. Any thoughts on how we might approach this?

This is actually the only way I got this to work: using the artist command, "Play something by Tom Jones". It will then play only songs by Tom Jones and will not stop after the first song, but continue to play songs.
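For anyone curious, the service call that the artist command ends up making looks roughly like this (a sketch only; the entity ID is a placeholder, adjust to your own player):

```yaml
# Sketch: playing with media_type: artist keeps pulling tracks by that artist.
# media_player.kitchen_speakers is a placeholder entity.
action: music_assistant.play_media
target:
  entity_id: media_player.kitchen_speakers
data:
  media_id: "Tom Jones"
  media_type: artist
```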

How to “shuffle” different songs from different artists that are somewhat similar - I don’t know.
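If it helps, the radio_mode flag on music_assistant.play_media is the knob that is supposed to do that: start from one track and keep queueing similar ones. A hedged sketch, with placeholder entity and song names:

```yaml
# Sketch: radio_mode: true asks MA to keep queueing similar tracks
# after the requested one finishes. Names below are placeholders.
action: music_assistant.play_media
target:
  entity_id: media_player.kitchen_speakers
data:
  media_id: "Downtown Train"
  artist: "Tom Waits"
  media_type: track
  radio_mode: true
```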

Hm. I hadn't tried that but will give it a try and report back. Ideally, the way the flow often works is: I want to request a specific song by a specific artist, and then let radio mode play other songs by the same artist, not necessarily start with a random song by a specific artist.

Will keep tinkering…

Edit:
Add an action to queue up “Tom Waits” as the artist? Basically take @morkator’s approach and add it to the end of the sequence? Will give it a shot when I have a chance.
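Something like this is what I have in mind (untested sketch; the enqueue option, entity ID, and song names are my own assumptions here):

```yaml
# Untested sketch: play the requested track, then enqueue the whole
# artist behind it. Entity ID and media names are placeholders.
- action: music_assistant.play_media
  target:
    entity_id: media_player.kitchen_speakers
  data:
    media_id: "Downtown Train"
    artist: "Tom Waits"
    media_type: track
- action: music_assistant.play_media
  target:
    entity_id: media_player.kitchen_speakers
  data:
    media_id: "Tom Waits"
    media_type: artist
    enqueue: add
```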

They have published a repo with the pending blueprint: GitHub - music-assistant/blueprints: Music Assistant blueprints.

I added it and tested some basic playback on one Voice PE device, and it seems to work. I haven't had a chance to dig into anything advanced yet. I'm sure it's a work in progress, but it is a start. It looks like they are taking issue reports there, so if anyone wants to test and provide feedback, it may help everyone out.

This certainly seems cool; I would love for it to work, but I can't seem to get it to.

“Play rock the casbah by the clash”

“Sorry, I am not aware of any area called play rock”

The blueprint created the automation, and it looks like it's set up correctly… I don't know, it's all so tiresome.

Can you link to the changelog that fixed this? I see one in the core integration on 2-Jan that claims a fix for Voice Assistant PE (not sure why it would be specific to that voice hardware…).

I hadn't looked too closely, but you're right, that's the only obvious change.

My voice commands would be parsed and intent-matched, but the media_player would not play. After this update, with nothing else changed, it suddenly started working. I had just assumed that one commit for voice had something to do with it; I don't know why it would be Voice PE only, though.

Are you using Voice PE or another device for voice? I'm only in the testing stages now, planning for my Alexa transition, and I set it up just using my phone at first. I tried this about 3 weeks ago with the Core Add-on, ran into the issue of it not working, and gave up.

Timers and my kid playing Spotify easily cover 95% of what we use Alexa for, so getting the music player right will be key for me. I plan to use the OpenAI agent until fully local options become available. I have other reasons to get rid of Alexa beyond just the "cloud" aspect. I'm trying to replace 5 Echo Dots plus a big multizone amp for my whole-house audio with Louder-ESP units and MA, but since Alexa can't play to MA, I need a voice solution.

Thanks again.

Thank you for pointing this out! This blueprint was the missing piece for me… I had set up all the custom sentences and intents, but somehow the intents just weren’t matching… quite possible I missed something, but importing this blueprint got it working again!

I’m planning on another go at this over the weekend.

So you implemented the guidance in the MA voice setup document, including the intents file in custom sentences, then added the blueprint, and now it's working?

I guess I don't fully understand intents vs. blueprints, as from my quick look it seems they implement very similar code.
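As I understand it (a sketch, and I may be off): custom sentences plus an intent script wire a phrase to an action through Assist's intent layer, while the blueprint does the same job with a conversation trigger inside an automation. The intent route looks roughly like this, with a hypothetical intent name and a placeholder player:

```yaml
# File: custom_sentences/en/music.yaml
# Matches the phrase and captures the wildcard slot.
language: "en"
intents:
  PlayArtistMusic:
    data:
      - sentences:
          - "play music by {media_name}"
lists:
  media_name:
    wildcard: true

# configuration.yaml: the matching intent_script runs the same
# service call the blueprint's automation would.
intent_script:
  PlayArtistMusic:
    action:
      - action: music_assistant.play_media
        data:
          media_id: "{{ media_name }}"
          media_type: artist
        target:
          entity_id: media_player.kitchen_speakers
```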

I too don't completely understand which puzzle pieces are needed, and how and where they fit together.

I am mostly using Raspberry Pis with ReSpeaker Pi HATs. Waiting on back order for the Voice PE.

I got the blueprint working. I had to remove all the complex sentence commands provided by the blueprint and create new, simpler ones that use the same variables, like {media_name}. Doing this has given me pretty stable voice control over MA.
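For example, swapping a long blueprint phrase for a short trigger like this (a sketch of the kind of simplification I mean, not the blueprint's actual wording):

```yaml
# Simpler trigger reusing the same {media_name}/{artist} wildcards.
triggers:
  - trigger: conversation
    command:
      - play {media_name} by {artist}
    id: track
```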

Would you mind attaching your blueprint YAML to give me a start?

I like that your intents file uses the default location, as we really only use one player for most of our voice-activated music.

Here is the stable version of the automation; I've been playing with a different one to tweak some things. I think the only thing changed here is the sentence commands. Make sure you change the default_player_entity_id:

alias: Music Assistant
description: ""
triggers:
  - trigger: conversation
    command:
      - play the album {media_name} by {artist}
    id: album
  - trigger: conversation
    command:
      - play the song {media_name} by {artist}
    id: track
  - trigger: conversation
    command:
      - play music by {media_name}
    id: artist
  - trigger: conversation
    command:
      - play the radio station {media_name}
    id: radio
  - trigger: conversation
    command:
      - listen to the playlist {media_name}
    id: playlist
conditions: []
actions:
  - variables:
      default_player_entity_id: media_player.kitchen_speakers
      trigger_id: "{{ trigger.id }}"
      media_name: "{{ trigger.slots.media_name }}"
      media_type: |
        {% if 'radio' in media_name | lower %}
          radio
        {% else %}
          {{ trigger_id }}
        {% endif %}
      artist: "{{ trigger.slots.artist }}"
      area_or_player_name: "{{ trigger.slots.area_or_player_name }}"
      assist_device_id: "{{ trigger.device_id }}"
      radio_mode_str: "{{ trigger.slots.radio_mode or '' }}"
      radio_mode: "{{ 'radio' in radio_mode_str.lower() }}"
      player_entity_id_by_player_name: >
        {{ expand(integration_entities('music_assistant')) |
        selectattr("attributes.mass_player_type", 'defined') |
        selectattr("attributes.friendly_name", 'equalto', area_or_player_name) |
        join(', ', attribute="entity_id") }}
      player_entity_id_by_area_name: >
        {{ expand(area_entities(area_or_player_name)) |
        selectattr("attributes.mass_player_type", 'defined') |
        selectattr("attributes.friendly_name", 'equalto', area_or_player_name) |
        join(', ', attribute="entity_id") }}
      player_entity_id_by_assist_area: |
        {% if assist_device_id and area_id(assist_device_id)  %}
          {{ expand(area_entities(area_id(assist_device_id))) | selectattr("attributes.mass_player_type", 'defined')  | join(', ', attribute="entity_id") }}
        {% else %}
          None
        {% endif %}
      mass_player_entity_id: |
        {% if player_entity_id_by_player_name %}
          {{ player_entity_id_by_player_name }}
        {% elif player_entity_id_by_area_name %}
          {{ player_entity_id_by_area_name }}
        {% elif player_entity_id_by_assist_area  %}
          {{ player_entity_id_by_assist_area }}
        {% else %}
          {{ default_player_entity_id }}
        {% endif %}
      mass_player_name: "{{ state_attr(mass_player_entity_id, 'friendly_name') }}"
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ media_type == 'album' }}"
        sequence:
          - action: music_assistant.play_media
            metadata: {}
            data:
              media_id: "{{ media_name }}"
              artist: "{{ artist }}"
              media_type: "{{ media_type }}"
              radio_mode: "{{ radio_mode }}"
            target:
              entity_id: "{{ mass_player_entity_id }}"
      - conditions:
          - condition: template
            value_template: "{{ media_type == 'track' }}"
        sequence:
          - action: music_assistant.play_media
            metadata: {}
            data:
              media_id: "{{ media_name }}"
              artist: "{{ artist }}"
              media_type: "{{ media_type }}"
              radio_mode: "{{ radio_mode }}"
            target:
              entity_id: "{{ mass_player_entity_id }}"
      - conditions:
          - condition: template
            value_template: "{{ media_type == 'artist' }}"
        sequence:
          - action: music_assistant.play_media
            metadata: {}
            data:
              media_id: "{{ media_name }}"
              media_type: "{{ media_type }}"
              radio_mode: "{{ radio_mode }}"
            target:
              entity_id: "{{ mass_player_entity_id }}"
      - conditions:
          - condition: template
            value_template: "{{ media_type == 'radio' }}"
        sequence:
          - action: music_assistant.play_media
            metadata: {}
            data:
              media_id: "{{ media_name }}"
              media_type: "{{ media_type }}"
            target:
              entity_id: "{{ mass_player_entity_id }}"
      - conditions:
          - condition: template
            value_template: "{{ media_type == 'playlist' }}"
        sequence:
          - action: music_assistant.play_media
            metadata: {}
            data:
              media_id: "{{ media_name }}"
              media_type: "{{ media_type }}"
              radio_mode: "{{ radio_mode }}"
            target:
              entity_id: "{{ mass_player_entity_id }}"
  - set_conversation_response: "{{ trigger.slots.media_name }} playing on {{ mass_player_name }}"
mode: single

I was previously using the custom_sentences.yaml method, but using automations is actually giving me more stable results. I have certain custom 'status' intents that were buggy, but for whatever reason, after translating them to automations the other day, they have been working much better.
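To give a flavor of what I mean by a 'status' intent as an automation (a hypothetical example; the entity ID is a placeholder):

```yaml
# Hypothetical status automation: answers a question instead of playing.
alias: Now Playing Status
triggers:
  - trigger: conversation
    command:
      - what song is playing
actions:
  - set_conversation_response: >-
      {{ state_attr('media_player.kitchen_speakers', 'media_title') }}
      by {{ state_attr('media_player.kitchen_speakers', 'media_artist') }}
mode: single
```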


Thank you so much for sharing this automation @sobchek! I am just dipping my toe into this, so I never used the HACS version of Music Assistant. I tried the custom_sentences method outlined by the MA team, but it never really worked. With this automation, I am able to play music via the triggers you have established. Here's hoping we begin to see more complex commands, and in particular "stop", soon!

In case anyone is in my boat, this automation worked for me with no custom_sentences or intent scripts.

Been working on group player automations. Here is my current working version; it seems pretty stable. It uses dynamic groups in MA, so there is no need to make pre-configured groups inside MA.

automation.music_assistant_v05

alias: Music Assistant v05 (Stable with Enqueue)
description: MA commands for individual and dynamic groups with enqueue functionality
triggers:
  - trigger: conversation
    command:
      - play the song {media_name} by {artist}
      - on the {group} play the song {media_name} by {artist}
    id: track
  - trigger: conversation
    command:
      - play the album {media_name} by {artist}
      - on the {group} play the album {media_name} by {artist}
    id: album
  - trigger: conversation
    command:
      - play the radio station {media_name}
      - on the {group} play the radio station {media_name}
    id: radio
  - trigger: conversation
    command:
      - play the playlist {media_name}
      - on the {group} play the playlist {media_name}
    id: playlist
  - trigger: conversation
    command:
      - add the song {media_name} by {artist} to the queue
      - on the {group} add the song {media_name} by {artist} to the queue
    id: enqueue
conditions: []
actions:
  - variables:
      media_name: "{{ trigger.slots.media_name }}"
      artist: "{{ trigger.slots.artist }}"
      issuing_player: >
        {% set device_id = trigger.device_id %}
        {{ expand(area_entities(area_id(device_id))) | selectattr('domain', 'equalto', 'media_player') | map(attribute='entity_id') | first }}
      trigger_id: "{{ trigger.id }}"
      media_type: |
        {% if trigger_id == 'track' %}
          track
        {% elif trigger_id == 'album' %}
          album
        {% elif trigger_id == 'radio' %}
          radio
        {% elif trigger_id == 'playlist' %}
          playlist
        {% elif trigger_id == 'enqueue' %}
          track
        {% else %}
          default
        {% endif %}
      enqueue: |
        {% if trigger_id == 'enqueue' %}
          add
        {% else %}
          replace
        {% endif %}
      group_mapping:
        downstairs:
          - media_player.kitchen_speakers
          - media_player.playroom
        upstairs:
          - media_player.kidA_room
        all:
          - media_player.kitchen_speakers
          - media_player.kidA_room
          - media_player.playroom
      group_aliases:
        upstairs:
          - upstairs speakers
          - second floor
          - second floor speakers
          - bedroom speakers
          - bedrooms
        downstairs:
          - downstairs speakers
          - first floor
          - first floor speakers
          - main floor
          - main floor speakers
        all:
          - house
          - house speakers
          - every
          - every speaker
      normalized_group: >
        {%- set group = (trigger.slots.group | default('') | string).lower() -%}
        {%- for key, aliases in group_aliases.items() -%}
          {%- if group in aliases -%}{{ key }}{%- endif -%}
        {%- endfor -%}
      group_players: >
        {{ group_mapping[normalized_group] if normalized_group in group_mapping else [] }}
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ group_players | length > 0 }}"
        sequence:
          - data:
              primary_player: "{{ issuing_player }}"
              group_players: "{{ group_players }}"
            action: script.sync_group_players
          # After the group is synced, start playback on the primary player
          - data:
              media_id: "{{ media_name }}"
              media_type: "{{ media_type }}"
              artist: "{{ artist }}"
              primary_player: "{{ issuing_player }}"
              enqueue: "{{ enqueue }}"
            action: script.playback_on_primary_player
      - conditions:
          - condition: template
            value_template: "{{ group_players | length == 0 }}"
        sequence:
          - data:
              media_id: "{{ media_name }}"
              media_type: "{{ media_type }}"
              artist: "{{ artist }}"
              primary_player: "{{ issuing_player }}"
              enqueue: "{{ enqueue }}"
            action: script.playback_on_primary_player
mode: parallel

script.playback_on_primary_player:


alias: Playback on Primary Player
sequence:
  - data:
      media_id: "{{ media_id }}"
      artist: "{{ artist }}"
      media_type: "{{ media_type }}"
      enqueue: "{{ enqueue }}"
    target:
      entity_id: "{{ primary_player }}"
    action: music_assistant.play_media
mode: single
description: ""

script.sync_group_players:


alias: Script - Sync Group Players
sequence:
  - data:
      entity_id: "{{ group_players }}"
    action: media_player.turn_on
  - delay:
      seconds: 5
  - data:
      group_members: "{{ group_players }}"
      entity_id: "{{ primary_player }}"
    action: media_player.join
  - delay:
      seconds: 3
  - data:
      group_members: "{{ group_players }}"
      entity_id: "{{ primary_player }}"
    action: media_player.join
mode: single

You can define your groups in this block in the automation:

group_mapping:
  downstairs:
    - media_player.kitchen_speakers
    - media_player.playroom
  upstairs:
    - media_player.kidA_room
  all:
    - media_player.kitchen_speakers
    - media_player.kidA_room
    - media_player.playroom