So I’m having a lot of fun setting up custom sentences with Assist. I’ve got various sentences working (asking for the weather, etc.), and now I’m trying to see if it’s possible to somehow use Assist with Spotify.
I have the Spotify integration, and Spotcast so I can play to an idle Spotify Connect speaker. I’ve noticed in the Spotcast service, that there’s a search entity, which allows me to play anything based on that data (Metallica, Muse etc), and it works great in a standard automation.
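For context, a standard automation action using Spotcast’s search field might look roughly like this (a sketch only; the speaker entity is a placeholder, and field names should be checked against the Spotcast docs):

```yaml
# Sketch: play a search result on an idle Spotify Connect speaker via Spotcast.
# media_player.bedroom_speaker is a placeholder entity.
service: spotcast.start
data:
  entity_id: media_player.bedroom_speaker
  search: "Metallica"
```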
Now I want to start simple, and just trigger the spotcast service with Assist. Here’s my simple intents code:
```yaml
# Example config/custom_sentences/en/spotify_sentances.yaml
language: "en"
intents:
  Spotify:
    data:
      - sentences:
          - "Play Metallica on Bedroom speaker"
```
And here’s what I’m trying to add to my config file:
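(The original snippet wasn’t captured here, but a hypothetical reconstruction of an intent_script entry handling the Spotify intent above could look like this; the speaker entity and service fields are assumptions:)

```yaml
# Hypothetical sketch: configuration.yaml entry wiring the Spotify intent to Spotcast.
intent_script:
  Spotify:
    action:
      - service: spotcast.start
        data:
          entity_id: media_player.bedroom_speaker  # placeholder speaker entity
          search: "Metallica"
    speech:
      text: "Playing Metallica on the bedroom speaker"
```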
But so far, I can’t get it to work. Actually, once I add the above code into my custom_sentences and config files, I can’t use any of my custom intents, and just receive the following (if I remove the above, then I can use my custom intents again):
OK, apologies, it seems to be working now. I’ve found that if there are empty lines in the custom sentences YAML file (I made one specifically for Spotify, as shown above), then HA throws the red error shown above. Cleaning up the empty lines seems to have helped, and now it works.
So my next question is about using the search field/function in Spotcast with Assist. I’m thinking this will somehow involve wildcards, but I’m not sure how to add one to my sentence, nor how to feed it into the search field under action/data.
Would anyone be able to point me in the right direction? How should I modify the above code to utilize the search function in Spotcast?
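For what it’s worth, here is one sketch of how it could work, assuming Home Assistant’s wildcard lists and Spotcast’s search field: the sentence captures free text into `{search}` and `{speaker}` slots, and intent_script passes them through to the service. Intent and slot names here are my own invention.

```yaml
# Sketch: config/custom_sentences/en/spotify_sentances.yaml
language: "en"
intents:
  SpotifySearch:
    data:
      - sentences:
          - "play {search} on {speaker}"
lists:
  search:
    wildcard: true
  speaker:
    wildcard: true
```

```yaml
# Sketch: configuration.yaml — feed the captured wildcard into Spotcast's search field.
intent_script:
  SpotifySearch:
    action:
      - service: spotcast.start
        data:
          # Hypothetical mapping from the spoken speaker name to an entity id;
          # a real setup might hardcode one speaker or use a choose block instead.
          entity_id: "media_player.{{ speaker | lower | replace(' ', '_') }}"
          search: "{{ search }}"
    speech:
      text: "Playing {{ search }} on {{ speaker }}"
```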
My end goal is to be able to use Assist, just like I would use Google Assistant, to play any music I want on any of our Spotify Connect speakers.
Awesome guys, thanks so much for the replies. I look forward to trying out the code above and adding the second wildcard for the speaker selection. I’ll let you know how I get on.
Hey @TheFes & @derski just wanted to say a massive thank you for your help with the code, it works great!! I can search Spotify, and choose the media player I want with my voice, which is awesome!!
My goal is to replicate all the things I do with Google Assistant, with Assist (I’m in the middle of testing a small Jabra conference speaker, which works great, and will soon be building some satellites to place around the house).
OK so the next part I’m trying to do now, is pause and resume music again. I can do this no problem, however, I would like a response that will say the speaker name (Kitchen Speaker). I can get this name, but it responds with the entity name (media_player.kitchen_speaker), rather than the device name. How can I do this? This is the custom sentence I have tried so far (taken mostly from the documentation, which made it easy)…
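One way to get the friendly name rather than the entity id is the `state_attr` template function in the speech response. A minimal sketch, assuming a `media_player` slot holds the matched entity id (slot and intent names here are placeholders):

```yaml
# Sketch: respond with the device's friendly name instead of its entity id.
intent_script:
  MediaPause:
    action:
      - service: media_player.media_pause
        target:
          entity_id: "{{ media_player }}"  # assumed slot, e.g. media_player.kitchen_speaker
    speech:
      # state_attr pulls the friendly_name attribute, e.g. "Kitchen Speaker"
      text: "Pausing {{ state_attr(media_player, 'friendly_name') }}"
```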
Hey, yes no problem at all, it was also one of the main things I’ve been waiting for to
This is what I have so far, will probably add more to it once I find other relevant media controls. This bit goes in your custom_sentences folder (make one, if you don’t have it already).
And this goes in your config. I’ve put mine in a packages folder, just to help keep my config neat and tidy, as I can feel I’m adding a lot more config entries at the moment.
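A packages folder like the one described can be enabled with a single key in configuration.yaml (standard Home Assistant syntax; the folder name `packages` is the usual convention):

```yaml
# configuration.yaml: load every YAML file in the packages/ folder as a package
homeassistant:
  packages: !include_dir_named packages
```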
I’ve gotten my config saved by splitting the intent script out into “intent_script.yaml”
and adding an include to my config, however it seems my Assist won’t utilize the custom sentences.
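For a split like that, the include in configuration.yaml would normally look like the line below. Note that the custom sentences themselves are not included this way: Home Assistant picks them up automatically from config/custom_sentences/&lt;language&gt;/, and a restart is typically needed after changing them.

```yaml
# configuration.yaml
intent_script: !include intent_script.yaml
```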
This is beyond cool, I appreciate this community so much. The new room aware features should mean we can fill in the media player with the media player in the same room as the voice capture device if one isn’t provided as well. This combination of features is what I was looking for to start using voice. My family uses ‘add X to grocery list’ (into AnyList), and then playing music on Alexa enabled Sonos speakers as the two things we really use voice for.
Just wanted to shout out to those of you working on this! Looking forward to testing what you all have done.
Question about the Spotify integration here. It looks like you’re using Spotcast, but I was curious whether the regular Spotify integration would work as well? Also, are the things that go into your configuration.yaml top-level keys, or are they nested under intents or some other YAML property? That part I’m confused about. Also, I’m jumping back and forth between LMS and Music Assistant (as MA has been a bit buggy) and was wondering if anyone has integrated either into their voice commands?
Please can you explain the whole process and steps for this automation?
I managed to install Spotcast, but I can’t make it work.
I have HA installed on a Raspberry Pi, with speakers connected to the Pi’s jack output.
The Raspberry Pi works as a Spotify Connect device.
I want to use Assist to have the Raspberry Pi search for a song and play it.