Thanks so much for responding. Are you using any automations with `media_player.play_media`? I’m really just trying to make a door chime sound on my HomePods when the door opens, via the iTunes-api integration.
I do use all speakers, including the airplay ones, in my alarm siren automation like this:
```yaml
alarm_all_alarms:
  alias: All alarms
  sequence:
    - service: media_player.volume_set
      data:
        entity_id: media_player.itunes
        volume_level: 0.3
    - delay: '00:00:01'
    - service: media_player.play_media
      data_template:
        entity_id: media_player.itunes
        media_content_id: "alarms"
        media_content_type: playlist
    - service: switch.turn_on
      data_template:
        entity_id: switch.itunes_repeat_switch
```
This is very helpful. Thank you so much. At what point are the AirPlay endpoints being called in? That’s specifically where I’m struggling: getting the sound from iTunes to the speakers themselves.
Do you have an airplay speaker or airplay AV receiver? Or a dumb speaker connected to an Airport Express or Apple TV?
If so, go to your iTunes API page. Mine is at http://192.168.0.4:8181/
When you’re there click on ‘airplay devices’ and it will show you the ID for each airplay speaker you have. The id looks like aa-bb-cc-6e-73-fc. You use those IDs to turn on whichever Airplay speaker you want to turn on (using those curl commands from earlier), and then tell iTunes to start playing something.
iTunes can only play one song at a time, and it plays it on whatever speakers are turned on in the iTunes app.
So you turn on whichever speaker you want to use, or several of them if you like, and turn off the ‘computer’ speaker (otherwise the sound will also come out of your Mac).
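For reference, here’s a sketch of what those curl commands might look like. The host and speaker ID are placeholders matching the examples above, and the endpoint layout (`airplay_devices/<id>/on` and friends) is an assumption — check your own iTunes API page for the exact paths:

```shell
#!/bin/bash
# Placeholders: take your host from your iTunes API page and the
# speaker ID from its 'airplay devices' listing.
ITUNES_API="http://192.168.0.4:8181"
SPEAKER_ID="aa-bb-cc-6e-73-fc"

# Build the on/off URLs for that speaker (endpoint naming is an
# assumption; verify against your own itunes-api instance).
ON_URL="$ITUNES_API/airplay_devices/$SPEAKER_ID/on"
OFF_URL="$ITUNES_API/airplay_devices/$SPEAKER_ID/off"
echo "$ON_URL"

# Uncomment to actually toggle the speaker, then start playback:
# curl -X PUT "$ON_URL"
# curl -X PUT "$ITUNES_API/playlists/alarms/play"
```

Running each curl by hand in a terminal gives you instant feedback: with iTunes open you can watch the speaker checkboxes toggle.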
It may take a few tries to figure it out, but I was able to get the IDs. I can’t thank you enough. I’ve spent days trying to make this work.
Glad to hear it! Once you’ve got the IDs you can just test the curl commands in a terminal window. It’s instantaneous feedback: if you have iTunes open you’ll see the speaker checkboxes being ticked, sliders moving around, etc.
As with most things HA, this has been a great learning experience for me, and I greatly appreciate your help. I was able to get it working, in a couple of different ways actually. The problem is that it’s really slow. Do you find it to be slow as well?
It takes maybe 3 seconds for sound to start coming out of my speakers when I first click an HA button to start a playlist, but then clicking pause has maybe a half second delay, and clicking play again less than a second delay.
That’s consistent with my findings as well. The curl to play the file takes almost 2 seconds to complete. Thanks for confirming it’s not something on my end.
I don’t think that’s down to curl in particular, it’s iTunes connecting to Airplay speakers. If you sit in front of your Mac and click play (over Airplay speakers) it also takes 2 or 3 seconds for sound to be audible.
Thanks for the post. This is a lot easier than the other methods I came across.
I’m running HA as a Docker image on macOS. The say command doesn’t work because it’s being executed in the Docker image’s shell and not the macOS shell. I’m spelling that out for those who didn’t know.
My solution was to use the curl command. I had written a web service for use with my SmartThings hub that invoked the say command or played some short sound files. It runs on the same Mac as my Docker HA image and listens on port 8080, so `curl 10.0.0.119:8080?speak=this+is+a+test` invokes the say command with the passed text. If you are a programmer, writing a web service is pretty trivial. I wrote mine in Ruby. I know it’s also easy in Node.js.
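The only mildly fiddly part of such a service is turning the URL-encoded `speak` parameter back into plain text for `say`. A minimal sketch of that decoding step in shell, using the query string from the example URL above (the parameter name and `+`/`%XX` handling mirror standard URL encoding):

```shell
#!/bin/bash
# Query string as it arrives from a request like the example curl above.
query="speak=this+is+a+test"

# Strip the parameter name, turn '+' back into spaces,
# then expand any %XX escapes (%20 -> \x20 -> space).
text="${query#speak=}"
text="${text//+/ }"
text="$(printf '%b' "${text//\%/\\x}")"

echo "$text"   # this is a test

# The web service would then hand this off to the say command:
# say "$text"
```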
A useful tool for researching is the CLI option in macOS Docker Desktop, which brings up a shell inside the Docker image. That’s where I tested the curl command, and later found that ssh worked as well.
Update
Rather than use shell_command for the web service call above, I switched to a rest_command. Here’s an example:
```yaml
rest_command:
  say_test:
    url: 'http://10.0.0.119:8080?speak=Someone+pressed+the+test+button'
```
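For the door-chime use case from the top of the thread, such a rest_command can then be wired into an automation. A sketch, assuming a hypothetical `binary_sensor.front_door` door sensor:

```yaml
automation:
  - alias: Front door chime
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door  # hypothetical door sensor
        to: 'on'
    action:
      - service: rest_command.say_test
```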
Coming from another Mac-based home automation platform, I’ve been using Airfoil for ages; however, I don’t run HA on a Mac, I run it on a Pi. This is how I got easy TTS throughout the house using one of my Macs running Airfoil (it can be done with Windows as well, but some tweaks would be needed):
I first set up SSH from my Pi to my Mac with a secure key by running this at my HA prompt (replace myname with your username on that Mac, and the IP with your Mac’s IP):

```shell
ssh-copy-id [email protected]
```
With SSH set up, I created a shell script on my Mac to run dynamic TTS. This is called from the Pi whenever I need to say something through my connected speakers (in my case HomePods). I created a file called `tts.sh` and then ran `chmod 0755 tts.sh`:

```shell
#!/bin/bash
# Speak the first argument through the Mac's current audio output
say "$1"
```
My shell_command config looks like this (the rest is as already outlined above, for use in automations or whatever):

```yaml
shell_command:
  speak: ssh -i /config/ssh/id_rsa -o 'StrictHostKeyChecking=no' [email protected] '~/Documents/Scripts/tts.sh "{{text}}"'
```
In all, it took maybe 5 minutes to get it set up. I use similar AppleScripts as described above to start Airfoil if it’s not running and then auto-connect to my various HomePods.
As a little update in here, the latest iteration of the Apple TV integration will pair directly with HomePods and other AirPlay 2 devices. I haven’t tried it out yet, but it might simplify some of the above.
Sounds good. Have you tried playing a sound from a speaker and switching back? How long is the delay?
As above, there is a few-second delay for Airplay 2 speakers to connect, but it’s tolerable. From another post it sounds like something called Shairport-Sync might work but I haven’t mustered the energy to give it a shot and in any case the connection delay is likely the same.
For those of us with multiple AirPlay 2 speakers, there doesn’t appear to be a way to select multiple speakers for simultaneous playback in HA using the Apple TV integration, so I’m still going back and forth between AppleScripting Airfoil and Better Touch Tool. That’s only because Airfoil’s support for stereo pairs is spotty (as Apple keeps changing things) and I can’t figure out how to AppleScript Airfoil to connect to a stereo pair.
First world problems.
You can also use the Apple TV integration for this.
It’s somewhat broken at the moment, and it has a similar delay, but at least you don’t need any additional software.
Expect it to be buggy, though. I’m not sure whether it’s on Apple’s side or in the integration, but it isn’t reliable.
For now I play audio alerts directly from the Mac’s internal speakers. Not as nice, but it works and plays instantly.
This is what I can’t figure out… my model is that all of the output from the server needs to go to multiple Airplay speakers that may or may not be connected. I’ve given up on iTunes and mainly want to use Spotify and the iOS tts services playing to all speakers in sync. Did I miss a way to do that with the Apple TV integration?
In sync is probably not possible. But maybe you can use Apple Shortcuts to do that, and trigger it via an HA entity in HomeKit.
As a little update here, Better Touch Tool wasn’t working when the screen locked (even with the server set to never sleep) so I went back to the drawing board and figured out how to send keystrokes to Airfoil to select speaker groups, change volume and mute. Very easy once one learns a little bit of AppleScript.
Stereo pairs now working too
```shell
#!/bin/bash
osascript <<EOD
tell application "Airfoil" to activate
tell application "Airfoil"
    disconnect from (every speaker)
end tell
delay 1
tell application "System Events"
    -- Cmd-2 is the keystroke that selects the desired speaker group in Airfoil
    keystroke "2" using command down
end tell
delay 2
tell application "Airfoil"
    set (volume of every speaker) to 0.5
end tell
EOD
```
```shell
#!/bin/bash
osascript <<EOD
tell application "Airfoil" to activate
tell application "System Events"
    -- key code 125 is the down arrow, sent here with Shift-Cmd held
    key code 125 using {shift down, command down}
end tell
EOD
```
Yet another update- the above was NOT working with the screen locked, so I’ve come full circle and am back to something similar to my original example which for whatever reason DOES work with the screen locked. It also skips the step of disconnecting from everything before only connecting to the desired outputs.
```yaml
shell_command:
  play_to_br_hps: "osascript /users/username/.homeassistant/scripts/play_to_br_hps.scpt"
```
```applescript
#!/usr/bin/osascript
tell application "Airfoil"
    launch
    activate
    get every speaker
    disconnect from (every speaker whose id contains "Left")
    disconnect from (every speaker whose id contains "Right")
    disconnect from (every speaker whose id contains "Ecob")
    connect to (every speaker whose id contains "Bath")
end tell
-- make sure volume is not muted
set vol to output volume of (get volume settings)
if vol < 20 then
    set volume output volume 50
end if
delay 1
```