So I put together a little project to be able to play the music I ask for using Snips. Yes, those of you who have embraced Alexa and Google can do this just fine, but for me this was as much about learning how to get intents and data into Home Assistant and whatnot.
Mopidy is set up on a Raspberry Pi that also hosts Snips; music uses one sound card, Snips uses a Jabra 410 USB speakerphone.
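On the Home Assistant side, Mopidy shows up as media_player.mopidy. Since Mopidy speaks the MPD protocol, a plain mpd media_player entry is enough; the host and name below are just a sketch for this setup, adjust to your own:

media_player:
  - platform: mpd
    host: !secret mopidy_host   # the Pi running Mopidy
    port: 6600                  # Mopidy's MPD frontend port (default)
    name: mopidy                # gives us media_player.mopidy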
So Snips does the listening, and Home Assistant reads the published intents and calls a play-music script like this:
sequence:
  - service: media_player.shuffle_set
    data:
      shuffle: 'true'
  - service: shell_command.get_playlist
    data_template:
      playlist: "{{ playlist }}"
  - service: shell_command.jarvis_says
    data_template:
      speech: '"OK, playing {{ states("sensor.mopidy_playlist") | replace("(by topsify)","") }} playlist"'
  - delay: 00:00:03
  - service: media_player.play_media
    data_template:
      entity_id: media_player.mopidy
      media_content_type: playlist
      media_content_id: >-
        {{ states('sensor.mopidy_playlist') }}
  - service: media_player.media_next_track
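For reference, that sequence lives under a script in configuration.yaml, and the Snips side is hooked up with the snips component plus intent_script. Roughly the wiring looks like this; it's only a sketch, and the intent name (playMusic), slot name (playlist) and script name (play_music) are placeholders that have to match whatever you trained in the Snips console (see GitHub for my actual config):

# configuration.yaml (sketch -- names are placeholders)
snips:          # the snips component listens on MQTT, so mqtt: has to point at the broker Snips uses

intent_script:
  playMusic:
    action:
      service: script.play_music
      data_template:
        playlist: "{{ playlist }}"

script:
  play_music:
    sequence:
      # ... the sequence shown above ...

Calling the script as a service with data_template is what makes {{ playlist }} available inside that sequence.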
The shell command is defined in configuration.yaml like this:

shell_command:
  get_playlist: /home/homeassistant/.homeassistant/shell_command/get_playlist.py -vv -r -p "{{ playlist }}"
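The jarvis_says shell command used in the script above is my own TTS helper: it pushes the text through Amazon Polly and plays it back on the Jabra. It isn't shown here, but it's just another entry in the same shell_command block, something along these lines (the script name and flag are placeholders, the real helper is on GitHub):

  # placeholder entry -- see GitHub for the actual helper script
  jarvis_says: /home/homeassistant/.homeassistant/shell_command/jarvis_says.py -s "{{ speech }}"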
Finally, the Python script (I removed a bunch of logging statements; check GitHub if you want the working version, this is just to show the logic):
import sys
import argparse
import subprocess
import random
from requests import post
import logging

parser = argparse.ArgumentParser()
parser.add_argument("-p", "--playlist", help="playlist to search for", required=True)
parser.add_argument("-r", "--random", help="choose random playlist if more than one", action="store_true")
parser.add_argument("-v", "--verbose", help="verbose output, can be specified multiple times", action="count")
args = parser.parse_args()

HA_BASE = "/home/homeassistant/.homeassistant/"
HA_SENSOR = "sensor.mopidy_playlist"

# Pull credentials and hosts straight out of Home Assistant's secrets.yaml
SPOTIFY_USER = subprocess.check_output(["grep", "spotify_user", HA_BASE + "secrets.yaml"]).rsplit()[1]
SPOTIFY_PASSWORD = subprocess.check_output(["grep", "spotify_password", HA_BASE + "secrets.yaml"]).rsplit()[1]
MPC_BINARY = "/usr/bin/mpc"
MPC_HOST = subprocess.check_output(["grep", "mopidy_host", HA_BASE + "secrets.yaml"]).rsplit()[1]
# You can hard code these as well
REST_URL = subprocess.check_output(["grep", "http_base_url", HA_BASE + "secrets.yaml"]).rsplit()[1]
REST_PASSWORD = subprocess.check_output(["grep", "http_password", HA_BASE + "secrets.yaml"]).rsplit()[1]

# Ask Mopidy (over the MPD protocol) for every playlist it knows about
PLAYLISTS = subprocess.check_output([MPC_BINARY, "-h", MPC_HOST, "lsplaylists"])

# Case-insensitive substring match against the requested name
matching = [s for s in PLAYLISTS.split("\n") if args.playlist.lower() in s.lower()]
if matching:
    if args.random:
        playlist = random.choice(matching)
    else:
        playlist = matching[0]
else:
    playlist = "no match"

# Push the winner into Home Assistant as the state of sensor.mopidy_playlist
url = 'https://' + REST_URL + '/api/states/' + HA_SENSOR
headers = {'x-ha-access': REST_PASSWORD,
           'content-type': 'application/json'}
data = '{"state": "' + playlist + '"}'
response = post(url, headers=headers, data=data)
So you can see it's pretty straightforward. It uses mpc (a command-line MPD client that also talks to Mopidy) to get the list of playlists, greps for the requested playlist name in there and picks a random one from the matches (so if there are two country ones it just grabs one of them). The result is then fed into Home Assistant over the REST API to set a sensor (it doesn't need to be defined, HA will create it), which is then fed into the custom jarvis_says shell command that plays back on the Snips Jabra speaker, and finally the script sets the Mopidy playlist.
Here's a short demo of it in action; it's hard to hear since the speakers are turned down. The delay is mostly Amazon Polly, since I hadn't played those phrases before (they are cached afterwards).
Let me know if you have any questions. Check GitHub for the latest configs and whatnot. Still very much getting started, but working on it.