Hello,
I was wondering if anyone could help me. Voice preview works completely fine with Piper.
I was previously running Kokoro-FastAPI (it exposes an OpenAI-compatible API),
then integrating it into HA with sfortis/openai_tts (https://github.com/sfortis/openai_tts), the OpenAI TTS custom component for HA.
With this I was able to output TTS using the Kokoro model to my Google Home Mini.
When I try to use this to output to the voice preview, nothing comes out at all.
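For context, this is roughly the kind of call I'm making (the entity IDs below are just placeholders for my setup, not my exact config):

```yaml
# Working: Kokoro TTS via openai_tts, played on the Google Home Mini
# (entity IDs are placeholders)
action: tts.speak
target:
  entity_id: tts.openai_tts          # the openai_tts engine entity
data:
  media_player_entity_id: media_player.google_home_mini
  message: "Test message from Kokoro"
  cache: false

# Not working: the same call with the Voice Preview's media_player entity
# instead of the Google Home Mini produces no audio at all, e.g.
#   media_player_entity_id: media_player.home_assistant_voice_preview
```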
Does anyone have any experience with this, or any suggestions on where I should start? I would have thought that since HA was already able to use the TTS pipeline, it would have just worked fine on the voice preview.