Thanks Michael, I appreciate your work on the project. Not sure exactly what happened, but I rebuilt a new Docker container and it works fine, just as it did prior to the upgrade. No changes to the existing automations that include text in the TTS service call.
Can you please clarify:
Is it possible to install Piper and Whisper outside the HA installation?
For example, in Docker containers on a separate NAS?
Or must these two containers run on the same instance HA is running on?
Yes, it’s possible. The easiest way is to use the official Docker images, and you can run them on another host if you want; just specify the right IP address and ports when you configure the integrations.
You can use this working solution based on Docker Compose, or run the containers manually:
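For example, here is a minimal docker-compose.yml sketch along those lines, assuming the official rhasspy/wyoming-* images with their default Wyoming ports and example models/voices (adjust paths, models and voices to your own setup):

```yaml
version: "3.8"
services:
  whisper:
    image: rhasspy/wyoming-whisper
    # Speech-to-text: choose a model and language that fit your hardware
    command: --model tiny-int8 --language en
    volumes:
      - ./whisper-data:/data
    ports:
      - "10300:10300"
    restart: unless-stopped

  piper:
    image: rhasspy/wyoming-piper
    # Text-to-speech: choose a voice for your language
    command: --voice en_US-lessac-medium
    volumes:
      - ./piper-data:/data
    ports:
      - "10200:10200"
    restart: unless-stopped

  openwakeword:
    image: rhasspy/wyoming-openwakeword
    # Wake word detection (optional)
    command: --preload-model 'ok_nabu'
    ports:
      - "10400:10400"
    restart: unless-stopped
```

In Home Assistant you then add three Wyoming Protocol integrations, each pointing at the host’s IP address with port 10300 (whisper), 10200 (piper) and 10400 (openWakeWord).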
@brenard thank you for the prompt reply.
I added all three entries under the Wyoming Protocol integration (whisper, piper, openWakeWord).
Although this is the wrong topic, please allow me to share my experience.
After I installed the above, I went to /config/voice assistants and created a new assistant named Travis. The language is Greek.
Please note that my docker compose file includes the Greek language, and it has been installed properly.
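(In case it helps anyone, the Greek-specific part of such a compose file is roughly like this; the Piper voice name below is only an example of a Greek voice, check the list of available voices for the exact one:)

```yaml
  whisper:
    image: rhasspy/wyoming-whisper
    # Greek speech-to-text
    command: --model small-int8 --language el

  piper:
    image: rhasspy/wyoming-piper
    # Example Greek Piper voice name
    command: --voice el_GR-rapunzelina-low
```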
I was pretty excited because I got quite far…
I also installed Assist Microphone so I could test locally through my PC’s USB microphone.
What a disappointment!
Although the Wyoming Protocol procedure ran smoothly and I even managed to install the Docker images as well, Assist really let me down. It doesn’t work.
(When I use Home Assistant Cloud, it does work!)
But when I switch to the Travis voice assistant I just created, it doesn’t understand a single word when I type.
(Please note that I have added aliases for the specific entity.)
When I try to use the microphone, it’s a nightmare.
I really can’t figure it out.
Everybody talks about a voice assistant that runs locally, but you have to install DuckDNS…
To get good results, you have to use a bigger model for Whisper:
In French, I started to get good results with the medium-int8 model, but it takes more resources and more time to handle requests. Maybe new models will improve performance in the future…
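For example, with a Docker Compose setup like the one above, switching to a bigger model is just a change to the whisper command line (a sketch, adjust the model and language to your needs):

```yaml
  whisper:
    image: rhasspy/wyoming-whisper
    # medium-int8 is noticeably more accurate than tiny-int8,
    # but uses more RAM/CPU and takes longer per request
    command: --model medium-int8 --language fr
    volumes:
      - ./whisper-data:/data
    ports:
      - "10300:10300"
```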