I’ve got an issue configuring Assist pipelines when dealing with the TTS section.
I’ve got Piper correctly set up as an App, with Wyoming protocol pointing at it. In Piper’s config I have set up a different speaker (speaker 1) and a lower voice cadence, and when I play TTS in automations, it properly plays audio respecting my settings.
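For context, this is roughly what the working automation path looks like. A minimal sketch of a `tts.speak` action, assuming a Piper TTS entity named `tts.piper` and a media player named `media_player.living_room` (both names are placeholders for my setup):

```yaml
# Example automation action (names are illustrative)
service: tts.speak
target:
  entity_id: tts.piper
data:
  media_player_entity_id: media_player.living_room
  message: "The laundry is done."
```

When triggered this way, the audio comes out with speaker 1 and the slower cadence I configured in Piper itself.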
The problem is when I use Piper from Assist. I set up the TTS section with Piper, and I’m then forced to select a language and a voice, which I do… but when Assist plays its responses, it uses the default speaker (0) with an ugly voice, ignoring my settings.
Is there a way to make Assist play through Piper using Piper’s own default settings? Or can I inject additional parameters such as Speaker and Cadence? And more generally… how do I even edit Assist’s pipeline in YAML?
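For reference, these are the kinds of settings I mean. A sketch of the Piper configuration I’m using (option names like `speaker` and `length_scale` are how Piper exposes speaker selection and cadence; the exact option set may differ depending on how Piper is installed):

```yaml
# Piper configuration (illustrative; option names may vary by install)
voice: en_US-lessac-medium
speaker: 1            # non-default speaker I want Assist to use
length_scale: 1.2     # >1.0 slows the cadence down
```

Assist seems to bypass these and always falls back to speaker 0.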
I think the right fix would be to add a list with a choice of speakers (the dictionary is already present in the system), but that’s a complex change: it would require modifying not only the file in question but also the frontend.