Create custom HA equivalent of Alexa Show using Ada and AWS Polly for TTS

I’m trying to build multiple Alexa Show replacements with Pis. I’ll try to explain the intended process below.

Satellite HA touch screen > Rhasspy wake word (e.g. “Jarvis”) > STT result sent to Ada > Ada performs the request > Ada’s response text is sent to AWS Polly for TTS > Polly’s audio is played back on the originating HA terminal
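For the Polly leg of that flow, Home Assistant has an Amazon Polly TTS platform. A minimal sketch of the `configuration.yaml` entry, assuming placeholder AWS credentials, a `us-east-1` region, and the `Joanna` voice (swap in your own values):

```yaml
# configuration.yaml — hypothetical values; substitute your own AWS keys
tts:
  - platform: amazon_polly
    aws_access_key_id: YOUR_ACCESS_KEY_ID
    aws_secret_access_key: YOUR_SECRET_ACCESS_KEY
    region_name: us-east-1
    voice: Joanna
```

With this in place, responses can be spoken on a specific satellite by targeting that device’s `media_player` entity with the `tts.amazon_polly_say` service.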

Not sure if that is the most efficient process flow for what I’m looking for. I’m a first-time user and I’ve already spent 100 hours trying to learn how this all works. I built a lab HA setup with a NUC running the HASS image plus Rhasspy, and integrated the two. I was able to successfully build the slots and sentences for voice commands, wake the assistant, and have it perform the commands I gave it. What I’m having trouble figuring out is how to spread this across multiple devices, on top of using Polly.
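On the multi-device part: Rhasspy 2.5 supports a base/satellite layout over MQTT, where each Pi runs a lightweight satellite (wake word, mic, speaker) and the NUC base does the heavy STT/intent work. A minimal sketch of a satellite’s `profile.json`, assuming a hypothetical base/broker at `192.168.1.10` and a satellite named `kitchen`:

```json
{
  "mqtt": {
    "enabled": "true",
    "host": "192.168.1.10",
    "site_id": "kitchen"
  },
  "microphone": { "system": "pyaudio" },
  "sounds": { "system": "aplay" }
}
```

The base station’s profile then lists `kitchen` (and any other satellites) in the satellite site IDs for its speech-to-text and intent-recognition services, so each satellite only streams audio and plays responses while the base does the processing. The `site_id` is also what lets you route Polly audio back to the terminal that asked.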

I understand this is a massive venture but if I can at least get an idea of what to expect or just help to make a portion of this work, I can piece it together better. Anything would be much appreciated!