Make AirPlay speakers the voice of HA & other fun macOS tricks

This is a follow-up to my post about installing HA on macOS here.

Since then I’ve figured out a few fun tricks that leverage macOS and its ecosystem, and having benefited from a lot of kind individuals helping me get un-stuck while building my HA, I feel like putting this out there is a way to give back. Some of this may be possible on the venerable Pi, but what I’ll describe here is tailored for Catalina.

  1. Make HomePods (or any other AirPlay speakers) the voice of HA
    I was playing around with tts/media player components and finally realized that shell commands in HA calling the built-in afplay and say utilities could do what I wanted. You can even specify the accent and gender of the voice using the -v flag with say. Steps:
  • Go to System Preferences -> Accessibility -> Speech and download the voices that you want to use.
  • Put some mp3s in www/sounds/
  • Install Airfoil, which I leave running all the time. This lets you use an applescript to connect the desired speakers as needed (see below). One item of note: Airfoil doesn’t currently support HomePod stereo pairs, so it seems to randomly pick one of the two in the pair… please go to Rogue Amoeba’s site and request this feature!
    EDIT: stereo pairs are now working!
  • Use Bonjour browser (now called “Discovery”) or whatever else to find the MAC addresses of your HomePods/speakers; Airfoil uses these as the speaker ids.
  • Here we’re going to use a bash script to call some AppleScript to control Airfoil (in my case called connect_homepods.sh); put it in www/scripts. This will reconnect your speakers if someone, say, connected to them from a phone or something.
#!/bin/bash
osascript <<EOD
  tell application "Airfoil"
    connect to (every speaker whose id starts with "XXXXXXXXXX1")
    connect to (every speaker whose id starts with "XXXXXXXXXX2")
    set (volume of every speaker whose id begins with "XX") to 0.4
  end tell
EOD
  • Put your commands in configuration.yaml like so:
shell_command:
  connect_homepods: "/users/username/.homeassistant/www/scripts/connect_homepods.sh"
  say_no_moleste: 'say -v paulina "No moleste, por favor."'
  say_panic: 'say -v daniel "Attention. Someone pressed the panic button."'
  play_woof: 'afplay "/users/username/.homeassistant/www/sounds/woof.mp3"'
  play_announce: 'afplay "/users/username/.homeassistant/www/sounds/tibetanbowl.mp3"'
  • Now put it together in scripts or automations with a slight delay to allow the HomePods to connect; you may have to tailor the timing to your network:
say_morning:
  sequence:
  - service: shell_command.connect_homepods
  - delay:
      seconds: 0.5
  - service: shell_command.play_announce
  - service: shell_command.say_morning
  • Bonus round: put an input_text field in Lovelace where you can type something and your HA will speak it!
shell_command:
  say_this: 'say -v kyoko "{{ states(''input_text.say_this'') }}"'

This automation will run the script any time you hit return after changing the contents of the input_text:

- id: id_19
  alias: say this
  trigger:
  - entity_id: input_text.say_this
    platform: state
  condition: []
  action: 
  - service: script.say_this
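
The automation above calls script.say_this, which isn’t shown; presumably it mirrors the say_morning pattern. A minimal sketch, assuming the shell_command names from the config above:

```yaml
script:
  say_this:
    sequence:
    # reconnect the speakers first, in case something stole them
    - service: shell_command.connect_homepods
    - delay:
        seconds: 0.5
    # speak the current contents of input_text.say_this
    - service: shell_command.say_this
```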

  2. Start playing a Spotify playlist to AirPlay speakers with one click. Here I use a different external script so I can give Airfoil a different volume (because Spotify also has its own volume setting).
holiday_music:
  sequence:
  - service: shell_command.connect_homepods_music
  - delay:
      seconds: 0.5
  - service: media_player.select_source
    data:
      entity_id: media_player.spotify_user
      source: "mini"
  - service: media_player.volume_set
    data:
      entity_id: media_player.spotify_user
      volume_level: 0.75
  - service: media_player.play_media
    data:
      entity_id: media_player.spotify_user
      media_content_type: playlist
      media_content_id: "spotify:playlist:37i9dQZF1DX2zhLcnFr1qI"
  - service: media_player.shuffle_set
    data:
      entity_id: media_player.spotify_user
      shuffle: true
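
The shell_command.connect_homepods_music used above isn’t defined earlier; it’s just a second copy of the connect script that sets a different Airfoil volume, so Spotify’s own volume can be layered on top. A sketch, assuming a script name of connect_homepods_music.sh in the same folder:

```yaml
shell_command:
  # same osascript pattern as connect_homepods.sh, with a lower Airfoil volume
  connect_homepods_music: "/users/username/.homeassistant/www/scripts/connect_homepods_music.sh"
```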

  3. Leverage external scripts to restart things; call these with shell commands as above.
  • Restart your Mac (replace with your Mac username and password on the “send” lines). You won’t need to interact with this script.
#!/usr/bin/expect -f

set timeout -1
spawn fdesetup authrestart
expect "Enter the user name:"
send -- "username\r"
expect "Enter the password for user \'username\':"
send -- "password\r"
expect eof
  • Restart an ASUS router using a key:
#!/bin/bash

ssh -i /Users/username/.homeassistant/ssh/asuswrt_key [email protected] -p <portyouareusing> 'busybox reboot'
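
To call these from HA, the matching shell_command entries might look like this (the script names and paths are assumptions; point them wherever you saved the scripts):

```yaml
shell_command:
  # expect script that answers the fdesetup authrestart prompts
  restart_mac: "/users/username/.homeassistant/www/scripts/restart_mac.sh"
  # ssh-based reboot of the ASUS router
  restart_router: "/users/username/.homeassistant/www/scripts/restart_router.sh"
```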

You now have the power to scare intruders with barking dog sounds, get your playlist going with one click, and annoy the bejesus out of your significant other with silly sounds. Just be aware that a lot of this came about through trial & error, so your mileage may vary. Enjoy!


Thanks for this! I run hassio and currently simulate TTS to AirPlay speakers by playing mp3s via iTunes and maddox’s iTunes API, but it doesn’t work on Catalina. I had to move my iTunes library to an old Mac when I upgraded to Catalina on my main one. This will be useful when the API stops working.


Hi, would these shell commands work if Home Assistant is running on a Pi rather than the Mac with Airfoil installed?

It probably depends on the OS you’re running on your Pi, but I don’t think it would be easy.

Back when I was on a Pi I experimented with a different option called forked-daapd; I could never get it to work, but give it a go if you’re feeling adventurous.

Sorry, I was not clear.
I want to control Airfoil running on a Mac
from my Home Assistant Pi running hassio (both on the same wired network).

Not trying to run Airfoil on the Pi.

In that case, YES. Create the script on your Mac and call it from the Pi using a shell_command that runs the script via SSH to your Mac. This would look a lot like the script above where I ssh into my ASUS.

Thank you, I will take a look at that tomorrow then.

@fmon, very delayed reply on this.
I got the AppleScripts working on the Mac,
but I have not got very far with the shell command format to send to the Mac to trigger them.
Would you be able to send an example?

Sure, the first step is to set up a public/private key pair between the server (Mac in this case) and the client (your pi). If you haven’t already, you can practice this by seeing if you can set up passwordless SSH between whatever computer you’re using to program the Pi and the Pi itself (assuming you’re running the Pi headless). Use ssh-keygen or something similar to create the pair and read up on where to install them in your particular system (or in your terminal emulator). Then when you go to SSH to the Pi you won’t have to enter a password. Once you understand that, do the same thing between the Pi and the Mac.

Then you should be able to send commands to the Mac like this where the IP and port refer to the Mac:
ssh -i /pathtoprivatekeyonpi/your_key [email protected] -p <portyouareusing> 'put command here'

The port will be 22 if you’re connecting inside your network, or whatever you’ve set up in your port forwarding if outside. Again, I’m no expert, and setting that up required some trial & error, but it’s doable. You’ll also have to turn on Remote Login in the Mac sharing settings.
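
Putting that together, a hedged example of what the shell_command on the Pi might look like (the key path, user, IP, port, and script path are all placeholders for your own values):

```yaml
shell_command:
  # ssh from the Pi/hassio box to the Mac and run the Airfoil script there
  connect_homepods: "ssh -i /config/ssh/mac_key -p 22 [email protected] '/Users/username/.homeassistant/www/scripts/connect_homepods.sh'"
```

You may need to accept the Mac’s host key once from the machine running HA before this works non-interactively.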

Also check this out, which I just discovered. It might simplify a lot of that… and maybe even obviate having to use the AppleScript scripts to begin with. Very cool idea

Thank you for your help.
I have got Airfoil up and running in Home Assistant,
using shell commands to BetterTouchTool, then AppleScript to Airfoil,
controlling both speaker selection and input control.
The only thing left to do is work out volume control, but that is for another day!


Hey, based on this thread, it seems that it is still possible to use Maddox’s iTunes API on Catalina and Music for macOS… I’d love to hear if it works for you.

Hey that’s great to know, thanks! I’m still thinking about whether to try it right now. I’ve gone to the trouble of moving my iTunes library to an old Mac now. It turns out to be pretty nice not to be running iTunes on my main Mac all the time! But the old Mac is now awake much more than it used to be and using much more electricity so I’m monitoring the situation. Plus the Catalina Music interface is all new and weird :rofl:

Yeah, I fantasize about moving everything to forked-daapd and being done with iTunes, but I still buy things on the iTunes Store and don’t have time for the extra hassle.

I went Spotify and never went back

After spending 35 years building up the more or less perfect music library for me, there is zero advantage of doing that for me. All I would be doing would be giving some company the ability to destroy my music collection.

That’s exactly what happened to me when I turned on iCloud music library. I could have gone through the trouble to reconstruct my iTunes library but I just gave up.

Yep those bastards did that to me too with iCloud music library. I think it was intentional.

I’m working on doing this now. Can you show me what your automation looks like? My iTunes server works, and my endpoints are discovered, but I can’t seem to run automations with my endpoints.

I’m playing a fake TTS to airplay speakers like this:

switch:
  - platform: command_line
    switches:
      internet_outage_detected:
        command_on: "curl -X PUT http://192.168.0.4:8181/airplay_devices/40-3c-fc-07-68-36/on; curl -X PUT http://192.168.0.4:8181/airplay_devices/40-3c-fc-07-68-36/volume -d level=40; curl -X PUT http://192.168.0.4:8181/playlists/spoken-text/play"
        command_off: "curl -X PUT http://192.168.0.4:8181/pause"

The first curl turns on an AirPlay speaker, the second sets the volume, and the third plays an iTunes playlist.
‘spoken-text’ is the name of an iTunes playlist, which contains only one mp3, the TTS message.

I prefer a robotic-sounding voice, so I generate the mp3 by typing the message in TextEdit and then converting it to mp3 using OS X speech services. I mainly use this when the internet goes down and normal TTS won’t work.
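
For anyone wiring this up end to end: one way to flip a switch like that automatically is a ping binary_sensor plus an automation. A sketch with assumed entity names:

```yaml
binary_sensor:
  # goes 'off' when the internet is unreachable
  - platform: ping
    host: 8.8.8.8
    name: internet

automation:
  - alias: announce internet outage
    trigger:
      - platform: state
        entity_id: binary_sensor.internet
        to: 'off'
        for:
          minutes: 2
    action:
      # fires the command_line switch defined above
      - service: switch.turn_on
        entity_id: switch.internet_outage_detected
```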