Virtual soundcard

Not sure if this is the right place to ask. I want to move my HA installation from a Raspberry Pi into ESXi. I checked that all the USB devices (Z-Wave, BT) pass through nicely.
The only problem I am facing: I have speakers directly connected to the Raspberry Pi. To make the move into ESXi possible I bought a BT amplifier.
I was struggling to connect it, until I realised that my VM doesn't have any sound card (I guess that is the cause of the failed pairing with the BT amp below).
So I wonder which way to go. Is there a virtual sound device that can be added to Linux to make this setup work (a dummy sound card)? Or hacking the VM to get HD Audio? Or maybe other options?

pair 00:13:EF:EE:00:96
Attempting to pair with 00:13:EF:EE:00:96
Failed to pair: org.bluez.Error.AuthenticationFailed
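
On the dummy sound card idea: Linux ships an ALSA module called `snd-dummy` that presents a virtual card to userspace (audio written to it is discarded, but PulseAudio can bind to it and later route streams to the BT sink). A rough configuration sketch, assuming a systemd-based distro:

```shell
# Load the ALSA dummy sound card module for the current session
sudo modprobe snd-dummy

# Verify the virtual card shows up
cat /proc/asound/cards

# Make it persistent across reboots (path may vary by distro)
echo "snd-dummy" | sudo tee /etc/modules-load.d/snd-dummy.conf
```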

You could make the old Pi into a media player: Plex, Kodi, VLC or omxplayer.

Well, VLC would be a way to go, but TTS is not currently implemented on remote instances.

I think if you put Plex/Kodi on the Pi from an SD image it can take TTS, but I am not sure. If it doesn't need to be an image I would like to try it out, but I couldn't find a standalone version.

I use this workaround, but I don't recommend it for beginners.

Do you push the MP3s to the remote via SSH, or use an NFS share? Mind sharing the modified script? It looks like an interesting solution. I managed to connect the BT speaker via PulseAudio without a sound card, but the whole setup looks like a house of cards.
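
For reference, the PulseAudio route can indeed work without any ALSA card: a null sink gives PulseAudio a default output to run against until the BT sink appears. A hypothetical sketch (the sink name is an arbitrary choice):

```shell
# Create a virtual (null) sink so PulseAudio has an output without real hardware
pactl load-module module-null-sink sink_name=dummy_out

# After pairing, the BT speaker should show up as its own sink
pactl list short sinks
```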

I have two HA instances running: the main one on a Mac and another on a Pi. The Pi just has a script which copies the TTS file from an NFS share and plays it.

  ttsplaypi:
    sequence:
      - service: switch.toggle
        data:
          entity_id: switch.copy
      - service: switch.toggle
        data:
          entity_id: switch.play
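
The `switch.copy` and `switch.play` entities above aren't shown in the post; a hypothetical sketch of how they might be defined as `command_line` switches (the NFS mount point, file paths, and choice of omxplayer are assumptions):

```yaml
switch:
  - platform: command_line
    switches:
      copy:
        # assumed NFS mount point and TTS cache filename
        command_on: "cp /mnt/nfs/tts/play.mp3 /tmp/play.mp3"
      play:
        # omxplayer is the Pi's hardware-accelerated player; -o local = analog jack
        command_on: "omxplayer -o local /tmp/play.mp3"
```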

Then on the Mac HA, when I use TTS, I run this script:

  say_event:
    sequence:
      - service: tts.amazon_polly_say
        data_template:
          message: "       In 15 minutes {{states('sensor.eventmessage')}} begins today."
          cache: true
      - service: shell_command.ttsplaymac

Here are the shell commands (doorbell is a canned response):

shell_command:
  doorbell: python3 /Users/user/.homeassistant/mp3/doorbell.py
  ttsplaymac: python3 /Users/user/.homeassistant/mp3/play.py

The play.py program simply turns on the script on the Pi that copies and plays the TTS file.

import json  # the stdlib json module is sufficient here
import requests

# Fire the copy-and-play script on the Pi instance via the HA REST API
url = "http://192.168.2.186:8124/api/services/script/turn_on"
data = {"entity_id": "script.ttsplaypi"}
headers = {'Content-type': 'application/json'}
r = requests.post(url, data=json.dumps(data), headers=headers)

You are correct that I set up an NFS share to transfer the play.mp3 file, but I just read that the www directory works better.
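
On the www directory idea: files placed in the master's `<config>/www` folder are served by HA under `/local/…`, so the Pi could fetch the MP3 over HTTP instead of mounting NFS. A hedged sketch for the Pi side (the master's address and the filename are assumptions):

```yaml
shell_command:
  # pull the cached TTS file from the master's www folder
  # (MASTER_IP and play.mp3 are placeholders for your setup)
  fetch_tts: "curl -o /tmp/play.mp3 http://MASTER_IP:8123/local/play.mp3"
```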

Thanks for sharing, looks great. Will try it out.

But just as an idea… if you have a second HA instance running, why not let it render the TTS on its own?
I'm thinking like this: we have an HA master and an HA slave (which plays only TTS and sounds).
The idea is that an action on the master somehow fires an event which triggers an automation on the slave HA.
In that automation we put the TTS and whatever else we want. The only question left: how do we fire that event?
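
The HA REST API has an endpoint for exactly this: `POST /api/events/<event_type>`. A minimal sketch in the same spirit as the play.py above, using only the standard library (the slave address, event name, and payload are assumptions):

```python
import json
from urllib import request


def build_event_url(host, event_type):
    """Build the HA REST API URL for firing a custom event."""
    return f"{host}/api/events/{event_type}"


def fire_event(host, event_type, event_data=None):
    """POST the event to the remote instance; the body becomes the event data."""
    req = request.Request(
        build_event_url(host, event_type),
        data=json.dumps(event_data or {}).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)


if __name__ == "__main__":
    # hypothetical slave address and event name
    fire_event("http://192.168.2.186:8124", "play_tts",
               {"message": "Dinner is ready"})
```

On the slave, an automation with an `event` trigger (`event_type: play_tts`) would pick this up and can read the message from the event data template.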