Using Assist with Spotify integration, search

Hi,

So I’m having a lot of fun playing with setting up custom sentences with Assist, where I’ve got various sentences to work (asking for weather etc) and now I’m trying to see if it’s possible to somehow use Assist with Spotify.

I have the Spotify integration, and Spotcast so I can play to an idle Spotify Connect speaker. I’ve noticed that the Spotcast service has a search field, which allows me to play anything based on that text (Metallica, Muse etc.), and it works great in a standard automation.

Now I want to start simple, and just trigger the spotcast service with Assist. Here’s my simple intents code:

# Example config/custom_sentences/en/spotify_sentences.yaml
language: "en"
intents:
  Spotify:
    data:
      - sentences:
          - "Play Metallica on Bedroom speaker"

And here’s what I’m trying to add to my config file:

# Spotify (in configuration.yaml, nested under intent_script:)
intent_script:
  Spotify:
    speech:
      text: "No problem, playing Metallica on bedroom speaker"
    action:
      service: spotcast.start
      data:
        limit: 20
        force_playback: false
        random_song: false
        repeat: "off"
        shuffle: true
        offset: 0
        ignore_fully_played: false
        entity_id: media_player.bedroom_speaker
        start_volume: 15
        search: Metallica

But so far, I can’t get it to work. In fact, once I add the above code to my custom_sentences and config files, I can’t use any of my custom intents and just receive a red error (if I remove the above, then I can use my custom intents again).

So I was just wondering if someone could help me figure out what’s going wrong? Any help would be greatly appreciated.

OK, apologies, it seems to be working now. I’ve found that if there are empty lines in the custom sentences YAML file (I made one specifically for Spotify, as shown above), then HA throws the red error. Cleaning up the empty lines seems to have helped, and now it works.

So my next question is about using the search field/function in Spotcast with Assist. I’m guessing this will somehow involve wildcards, but I’m not sure how to add one to my sentence, or how to feed it into the search field under action/data.

Would anyone be able to point me in the right direction? How should I modify the above code to utilize the search function in Spotcast?

My end goal is to be able to use Assist, just like I would use Google Assistant, to play any music I want on any of our Spotify Connect speakers :slight_smile:


Thank you for this great idea. I was also looking for a solution like this and didn’t know Spotcast was able to search for media.

This is my custom sentence:

language: "en"
intents:
  Spotify:
    data:
      - sentences:
        - "Play {media_search} on Bedroom speaker"
lists:
  media_search:
    wildcard: true

This is my config file:

intent_script:
  Spotify:
    speech:
      text: "No problem, playing {{ media_search }} on bedroom speaker."
    action:
      service: spotcast.start
      data:
        entity_id: media_player.bedroom_speaker
        search: "{{ media_search }}"

It’s also possible to use a simple automation with a sentence trigger using a wildcard:

Play {media_search} on bedroom speaker

And the following action:

service: spotcast.start
data:
  entity_id: media_player.bedroom_speaker
  search: "{{trigger.slots.media_search}}"
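Putting the sentence trigger and action together, a full automation could look roughly like this (a sketch; the alias and speaker entity ID are just examples):

```yaml
# Sketch of a complete automation using a sentence trigger with a wildcard.
# Adjust entity_id to your own Spotify Connect speaker.
alias: "Play music on bedroom speaker via Assist"
trigger:
  - platform: conversation
    command:
      - "Play {media_search} on bedroom speaker"
action:
  - service: spotcast.start
    data:
      entity_id: media_player.bedroom_speaker
      search: "{{ trigger.slots.media_search }}"
mode: single
```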

Final result (in German):

[screenshot]


You can add a second wildcard for the speaker on which it should be played, or add a list with all the possible options.
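For example, a sketch with a second slot for the speaker, using a list rather than a wildcard (the entity IDs here are assumptions; use your own):

```yaml
# custom_sentences/en/spotify.yaml (sketch)
language: "en"
intents:
  Spotify:
    data:
      - sentences:
          - "Play {media_search} on {media_player}"
lists:
  media_search:
    wildcard: true
  media_player:
    values:
      - in: "bedroom speaker"
        out: "media_player.bedroom_speaker"   # example entity_id
      - in: "office speaker"
        out: "media_player.office_speaker"    # example entity_id
```

The list approach keeps recognition reliable, since Assist only has to match against the known speaker names.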

Awesome guys, thanks so much for the replies, look forward to trying out the code above, and adding the second wildcard for the speaker selection. I’ll let you know how I get on :slight_smile:


Hey @TheFes & @derski just wanted to say a massive thank you for your help with the code, it works great!! I can search Spotify, and choose the media player I want with my voice, which is awesome!!

My goal is to replicate all the things I do with Google Assistant, with Assist (I’m in the middle of testing a small Jabra conference speaker, which works great, and will soon be building some satellites to place around the house).

OK, so the next part I’m trying to do is pausing and resuming music. I can do this no problem; however, I would like a response that says the speaker name (e.g. Kitchen Speaker). I can get a name, but the response contains the entity ID (media_player.kitchen_speaker) rather than the device name. How can I fix this? This is the custom sentence I have tried so far (taken mostly from the documentation, which made it easy):

  PlayMedia:
    data:
      - sentences:
          - "(Play|resume) [the] (music|song|playlist) [on] {media_player}"
lists:
  media_player:
    values:
      - in: "(bedroom speaker)"
        out: "media_player.bedroom_speaker"
      - in: "(office speaker)"
        out: "media_player.office_nest_audio"
      - in: "(living room speaker)"
        out: "media_player.living_room_speaker"
      - in: "(kitchen speaker)"
        out: "media_player.kitchen_speaker"
      - in: "(Noah's speaker)"
        out: "media_player.noah_s_speaker"
      - in: "(garage speaker)"
        out: "media_player.garage_nest_mini"
      - in: "(TV)"
        out: "media_player.tv"
      - in: "(speaker group)"
        out: "media_player.speaker_group"

And this is the response/config, where the response comes back with the entity ID.

  PlayMedia:
    speech:
      text: "OK, resuming the music on {{media_player}}"
    action:
      service: media_player.media_play
      data:
        entity_id: "{{ media_player }}"
        
  lists:
    media_player:
      values:
        - in: "(media_player.bedroom_speaker)"
          out: "bedroom speaker"
        - in: "(media_player.office_nest_audio)"
          out: "office speaker"
        - in: "(media_player.living_room_speaker)"
          out: "living room speaker"
        - in: "(media_player.kitchen_speaker)"
          out: "kitchen speaker"
        - in: "(media_player.noah_s_speaker)"
          out: "Noah's speaker"
        - in: "(media_player.garage_nest_mini)"
          out: "garage speaker"
        - in: "(media_player.tv)"
          out: "TV"
        - in: "(media_player.speaker_group)"
          out: "speaker group"

And here’s the response, where I would like ‘media_player.kitchen_speaker’ replaced with ‘kitchen speaker’.


This works on my setup :slight_smile:

"OK, resuming music on {{ state_attr(media_player, 'friendly_name') }}"

Would you be able to share your code please? This is the one Assist feature I’ve really been waiting for! Thanks

Hey, yes, no problem at all. It was also one of the main things I’ve been waiting for too :slight_smile:

This is what I have so far, will probably add more to it once I find other relevant media controls. This bit goes in your custom_sentences folder (make one, if you don’t have it already).

# Example config/custom_sentences/en/media.yaml
language: "en"
intents:
  Spotify:
    data:
      - sentences:
        - "Play {media_search} on {media_player}"
  SetVolume:
    data:
      - sentences:
          - "(set|change) {media_player} volume to {volume} [%|percent]"
          - "(set|change) [the] volume for {media_player} to {volume} [%|percent]"
          - "(set|change) {media_player} to {volume} [%|percent]"
  StopMedia:
    data:
      - sentences:
          - "(Stop|Pause) the (music|song|playlist) on {media_player}"
  PlayMedia:
    data:
      - sentences:
          - "(Play|resume) [the] (music|song|playlist) [on] {media_player}"
  StreamCamera:
    data:
      - sentences:
          - "Show [me] the {camera} on [the] {media_player}"
  NextTrack:
    data:
      - sentences:
          - "[Play the] (next|skip) track"
lists:
  media_search:
    wildcard: true
  media_player:
    values:
      - in: "(bedroom speaker)"
        out: "media_player.bedroom_speaker"
      - in: "(office speaker)"
        out: "media_player.office_nest_audio"
      - in: "(living room speaker)"
        out: "media_player.living_room_speaker"
      - in: "(kitchen speaker)"
        out: "media_player.kitchen_speaker"
      - in: "(Noah's speaker|Noah speaker)"
        out: "media_player.noah_s_speaker"
      - in: "(garage speaker)"
        out: "media_player.garage_nest_mini"
      - in: "(TV|tv)"
        out: "media_player.tv"
      - in: "(speaker group)"
        out: "media_player.speaker_group"
      - in: "(bedroom tv|bedroom TV)"
        out: "media_player.bedroom_tv"
      - in: "(kitchen display)"
        out: "media_player.kitchen_display"
  volume:
    range:
      from: 0
      to: 100
  camera:
    values:
      - in: "(front door)"
        out: "camera.front_door"
      - in: "(garage|garage camera|garage cam)"
        out: "camera.garage_cam"
      - in: "(battery cam|battery camera)"
        out: "camera.battery_cam"

And this goes in your config. I’ve put mine in a packages folder, just to help keep my config neat and tidy, as I can tell I’m adding a lot of config entries at the moment.

# Spotify (in configuration.yaml, nested under intent_script:)
intent_script:
  Spotify:
    speech: 
      text: "No problem, playing {{media_search}} on {{media_player}}."
    action:
      service: spotcast.start
      data:
        entity_id: "{{media_player}}"
        search: "{{media_search}}"
        shuffle: true

# Media
  SetVolume:
    speech:
      text: "OK, setting {{media_player}} to {{volume}} percent."
    action:
      service: "media_player.volume_set"
      data:
        entity_id: "{{ media_player }}"
        volume_level: "{{ volume / 100.0 }}"

  StopMedia:
    speech:
      text: "OK, stopping the music"
    action:
      service: "media_player.media_pause"
      data:
        entity_id: "{{ media_player }}"
        
  PlayMedia:
    speech:
      text: "OK, resuming music on {{ state_attr(media_player, 'friendly_name') }}"
    action:
      service: "media_player.media_play"
      data:
        entity_id: "{{ media_player }}"
          
  NextTrack:
    action:
      service: media_player.media_next_track

Haven’t tested the next track one yet (last entry). Everything else works great!

And don’t forget to restart HA; reloading YAML isn’t enough when adding new files.

Brilliant, thanks so much!


So the next action I’ve configured is skipping to the next/previous track. Here’s the code, if anyone’s interested.

Intents:

  NextTrack:
    data:
      - sentences:
          - "[Play the] (next|skip) track"
  PreviousTrack:
    data:
      - sentences:
          - "[Play the] (previous) track"
          - "Go back"

Intent scripts:

  NextTrack:
    speech:
      text: "Done"
    action:
      service: media_player.media_next_track
      data:
        entity_id: "all"

  PreviousTrack:
    speech:
      text: "Done"
    action:
      service: media_player.media_previous_track
      data:
        entity_id: "all"

@celodnb I have followed along with everything you have done and am getting the following error when trying to check my configuration prior to restarting:

“Integration error: Spotify - Integration ‘Spotify’ not found.”
“Integration error: NextTrack - Integration ‘NextTrack’ not found.”

This occurs for every intent script in my config. Any suggestions?

Hi, hmm not really sure to be honest. I assume you have both the Spotify and spotcast integrations installed?

Yes, correct, both are installed. I may reinstall both; I’ve had issues with Spotcast recently.

I’ve gotten my config to save by splitting the intent scripts out into “intent_script.yaml”
and adding an include in my config; however, it seems Assist won’t utilize the custom sentences.
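For reference, the split setup looks roughly like this (a sketch; the filename is just an example, and the intent names sit at the top level of the included file):

```yaml
# configuration.yaml
intent_script: !include intent_script.yaml
```

```yaml
# intent_script.yaml (no intent_script: key here; the include supplies it)
Spotify:
  speech:
    text: "No problem, playing {{ media_search }} on {{ media_player }}."
  action:
    service: spotcast.start
    data:
      entity_id: "{{ media_player }}"
      search: "{{ media_search }}"
```

Note that the custom sentences live separately under config/custom_sentences/en/, and a full restart is needed after adding new files there.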

Resolved: human error.

This is beyond cool; I appreciate this community so much. The new room-aware features should mean we can fall back to the media player in the same room as the voice capture device when one isn’t provided. This combination of features is what I was looking for to start using voice. The two things my family really uses voice for are ‘add X to grocery list’ (into AnyList) and playing music on Alexa-enabled Sonos speakers.

Just wanted to shout out to those of you working on this! Looking forward to testing what you all have done.


A question about the Spotify integration here. It looks like you’re using Spotcast, but I was curious whether the regular Spotify integration would work as well. Also, are the things that go into your configuration.yaml top-level entries, or are they nested under intents or some other YAML property? That part I’m confused about. Also, I’m jumping back and forth between LMS and Music Assistant (as MA has been a bit buggy) and was wondering if anyone has integrated either into their voice commands?

Please can you explain the whole process and the steps for this automation?

I managed to install Spotcast, but I can’t get it to work.

I have HA installed on a Raspberry Pi, with speakers connected to the Pi’s jack output.
The Raspberry Pi works as a Spotify Connect device.
I want to use voice with Assist so the Raspberry Pi searches for a song and plays it.
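A minimal starting point, based on the examples above (a sketch: the entity ID media_player.raspberry_pi is hypothetical; check what your Spotify Connect client is called under Settings → Devices & Services → Spotify):

```yaml
# custom_sentences/en/spotify.yaml
language: "en"
intents:
  Spotify:
    data:
      - sentences:
          - "Play {media_search}"
lists:
  media_search:
    wildcard: true
```

```yaml
# configuration.yaml
intent_script:
  Spotify:
    speech:
      text: "Playing {{ media_search }}."
    action:
      service: spotcast.start
      data:
        entity_id: media_player.raspberry_pi  # hypothetical; use your own player's entity_id
        search: "{{ media_search }}"
```

Restart HA after creating the custom_sentences file, then try “Play Metallica” in Assist.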