Make AirPlay speakers the voice of HA & other fun macOS tricks

Sorry, I was not clear.
I want to control Airfoil running on a Mac
from my Home Assistant Pi running Hass.io (both are on the same wired network),

not try to run Airfoil on the Pi.

In that case, YES. Create the script on your Mac and call it from the Pi using a shell_command that runs the script via SSH to your Mac. This would look a lot like the script above where I SSH into my ASUS.

Thank you, I will take a look at that tomorrow then.

@fmon very delayed reply to this.
I got the AppleScripts working on the Mac,
but have not got very far with the shell command format to send to the Mac to trigger them.
Would you be able to send an example?

Sure, the first step is to set up a public/private key pair between the server (Mac in this case) and the client (your pi). If you haven’t already, you can practice this by seeing if you can set up passwordless SSH between whatever computer you’re using to program the Pi and the Pi itself (assuming you’re running the Pi headless). Use ssh-keygen or something similar to create the pair and read up on where to install them in your particular system (or in your terminal emulator). Then when you go to SSH to the Pi you won’t have to enter a password. Once you understand that, do the same thing between the Pi and the Mac.
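A minimal sketch of that key setup, run from the Pi side (the key name, user, and the Mac's address below are placeholders, not from the original post):

```shell
# Generate a dedicated key pair on the Pi (no passphrase, so scripts
# can use it non-interactively). Key name and paths are placeholders.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/pi-to-mac" -q

# Install the public key on the Mac (you enter the Mac password once).
# Remote Login must already be enabled in the Mac's Sharing settings.
#ssh-copy-id -i "$HOME/.ssh/pi-to-mac.pub" youruser@192.168.1.20

# Verify: this should log in without prompting for a password.
#ssh -i "$HOME/.ssh/pi-to-mac" youruser@192.168.1.20 'echo connected'
```

The network steps are commented out since they only make sense against your actual Mac; run them interactively once, then scripts can reuse the key.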

Then you should be able to send commands to the Mac like this, where the IP and port refer to the Mac:
ssh -i /pathtoprivatekeyonpi/your_key <user>@<mac-ip> -p <portyouareusing> 'put command here'

The port will be 22 if you’re connecting inside your network, or whatever you’ve set up in your port forwarding if outside. Again, I’m no expert, and setting that up required some trial & error, but it’s doable. You’ll also have to turn on Remote Login in the Mac’s Sharing settings.
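As a sketch, a Pi-side wrapper script that a shell_command could call might look like this — the host, key path, and AppleScript path are all hypothetical stand-ins:

```shell
#!/bin/sh
# trigger_airfoil.sh -- Pi-side wrapper that runs an AppleScript on the
# Mac over SSH. Host, port, key path, and script path are placeholders.
MAC="youruser@192.168.1.20"
KEY="$HOME/.ssh/pi-to-mac"
PORT=22
REMOTE_CMD="osascript /Users/youruser/Scripts/airfoil_speakers.scpt"

# Built as a string and echoed as a dry run so you can inspect it;
# replace `echo` with a direct ssh call once the placeholders are real.
CMD_LINE="ssh -i $KEY -p $PORT $MAC '$REMOTE_CMD'"
echo "$CMD_LINE"
```

Home Assistant’s shell_command integration can then point at the script, e.g. `airfoil_speakers: /bin/sh /config/trigger_airfoil.sh` (name and path hypothetical).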

Also check this out, which I just discovered. It might simplify a lot of that… and maybe even obviate the need for the AppleScript scripts to begin with. Very cool idea.

Thank you for your help.
I have got Airfoil up and running in Home Assistant,
using shell commands to BetterTouchTool, then AppleScript to Airfoil,
controlling both speaker selection and input control.
The only thing left to do is work out volume control, but that is for another day!


Hey, based on this thread, it seems that it is still possible to use Maddox’s iTunes API on Catalina with Music for macOS… Would love to hear if it works for you.

Hey that’s great to know, thanks! I’m still thinking about whether to try it right now. I’ve gone to the trouble of moving my iTunes library to an old Mac now. It turns out to be pretty nice not to be running iTunes on my main Mac all the time! But the old Mac is now awake much more than it used to be and using much more electricity so I’m monitoring the situation. Plus the Catalina Music interface is all new and weird :rofl:

Yeah, I fantasize about moving everything to forked-daapd and being done with iTunes, but I still buy things on the iTunes Store and don’t have time for the extra hassle.

I went Spotify and never went back

After spending 35 years building up a more or less perfect music library, there is zero advantage in doing that for me. All I would be doing is giving some company the ability to destroy my music collection.

That’s exactly what happened to me when I turned on iCloud music library. I could have gone through the trouble to reconstruct my iTunes library but I just gave up.

Yep those bastards did that to me too with iCloud music library. I think it was intentional.

I’m working on doing this now. Can you show me what your automation looks like? My iTunes server works, and my endpoints are discovered, but I can’t seem to run automations with my endpoints.

I’m playing a fake TTS message to AirPlay speakers like this:

- platform: command_line
  command_on: "curl -X PUT; curl -X PUT -d level=40; curl -X PUT"
  command_off: "curl -X PUT"

The first curl turns on an AirPlay speaker, the second sets the volume, and the third plays an iTunes playlist.
‘spoken-text’ is the name of an iTunes playlist, which contains only one mp3, the TTS message.
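The URLs are missing from the curl commands above; as a sketch, assuming the usual maddox/itunes-api endpoint shapes, the three calls would look something like this (host, port, and speaker ID are placeholders — check them against your own API’s ‘airplay devices’ page):

```shell
# Assumed itunes-api endpoints; host, port, and speaker ID are
# placeholders. Echoed as a dry run -- drop `echo` to actually call them.
BASE="http://192.168.1.20:8181"
SPEAKER="aa-bb-cc-6e-73-fc"

ON_URL="$BASE/airplay_devices/$SPEAKER/on"        # 1. turn the speaker on
VOL_URL="$BASE/airplay_devices/$SPEAKER/volume"   # 2. set its volume
PLAY_URL="$BASE/playlists/spoken-text/play"       # 3. play the TTS playlist

echo curl -X PUT "$ON_URL"
echo curl -X PUT -d level=40 "$VOL_URL"
echo curl -X PUT "$PLAY_URL"
```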

I prefer a robotic-sounding voice so I generate the mp3 by typing the message in TextEdit and then converting to mp3 using OS X speech services. I mainly only use this when the internet goes down and normal TTS won’t work.
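If you’d rather script that step than use TextEdit, macOS’s built-in `say` command can render speech straight to a file — a sketch, with the voice name and paths as example choices (`say` writes AIFF, which you’d convert to mp3 in Music or with another tool):

```shell
# macOS-only: render a robotic-sounding TTS message to an audio file.
# Guarded so the script is a harmless no-op on other systems.
MSG="The internet connection is down"
OUT="/tmp/spoken-text.aiff"
if command -v say >/dev/null 2>&1; then
  say -v Zarvox -o "$OUT" "$MSG"   # Zarvox is one of the robotic voices
  echo "wrote $OUT"
else
  echo "say not found; skipping $OUT (macOS-only step)"
fi
```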

Thanks so much for responding. Are you using any automations with media_player.play_media? I’m really just trying to make a door chime sound on my HomePods when the door opens, via the itunes-api integration.

I do use all speakers, including the airplay ones, in my alarm siren automation like this:

  alias: All alarms
  sequence:
    - service: media_player.volume_set
      entity_id: media_player.itunes
      data:
        volume_level: 0.3
    - delay: 00:00:01
    - service: media_player.play_media
      entity_id: media_player.itunes
      data:
        media_content_id: "alarms"
        media_content_type: playlist
    - service: switch.turn_on
      entity_id: switch.itunes_repeat_switch

This is very helpful, thank you so much. At what point are the AirPlay endpoints being called in this? That’s specifically where I’m struggling: getting the sound from iTunes to the speakers themselves.

Do you have an airplay speaker or airplay AV receiver? Or a dumb speaker connected to an Airport Express or Apple TV?

If so, go to your iTunes API page. Mine is at

When you’re there, click on ‘airplay devices’ and it will show you the ID for each AirPlay speaker you have. The ID looks like aa-bb-cc-6e-73-fc. You use those IDs to turn on whichever AirPlay speaker you want (using the curl commands from earlier), and then tell iTunes to start playing something.

iTunes can only play one song at a time, and it plays it on whatever speakers are turned on in the iTunes app.

So you turn on whichever speaker you want to use, or several of them if you like, and turn off the ‘computer’ speaker (otherwise the sound will also come out of your Mac).
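As a dry-run sketch of that sequence — endpoint shapes assumed from the iTunes API, and both IDs placeholders you’d read off your own ‘airplay devices’ page:

```shell
# Select AirPlay output before playing (assumed itunes-api endpoints).
BASE="http://192.168.1.20:8181"
KITCHEN="aa-bb-cc-6e-73-fc"   # speaker ID from 'airplay devices' page
COMPUTER="11-22-33-44-55-66"  # the Mac's own 'computer' output

echo curl -X PUT "$BASE/airplay_devices/$KITCHEN/on"    # speaker we want
echo curl -X PUT "$BASE/airplay_devices/$COMPUTER/off"  # silence the Mac
echo curl -X PUT "$BASE/playlists/alarms/play"          # start playback
```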