How to manually install Piper

Those specifics I don’t know, but you can check the add-ons’ Dockerfiles to reproduce the setup outside of Docker.

Source code management is a bit messy at the moment, I must say. pip is used, but I can’t tell which repository the source actually comes from.
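For a manual (non-Docker) install, a minimal sketch of what the add-on’s Dockerfile roughly does might look like this. The PyPI package name and flags are assumptions based on the Docker image’s run command; verify against the rhasspy/wyoming-piper repository, and note that the standalone piper binary has to be downloaded separately from the rhasspy/piper releases:

```shell
# Create an isolated environment (assumes Python 3 is installed)
python3 -m venv piper-venv
source piper-venv/bin/activate

# Package name is an assumption; verify against rhasspy/wyoming-piper
pip install wyoming-piper

# Mirrors the command the Docker image runs; --piper points at the
# separately downloaded standalone piper executable
python3 -m wyoming_piper \
  --piper ./piper/piper \
  --voice en-gb-southern_english_female-low \
  --uri tcp://0.0.0.0:10200 \
  --data-dir ./piper-data
```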


You spin up more containers with your desired languages.

This is a working setup for both Danish and English.

version: "3.8"

services:

  wyoming-piper-en:
    image: rhasspy/wyoming-piper
    container_name: piper-en
    ports:
      - 10200:10200
    volumes:
      - ${DOCKERCONFDIR}/piper/english:/data
    command: --voice en-gb-southern_english_female-low

  wyoming-whisper-en:
    image: rhasspy/wyoming-whisper
    container_name: whisper-en
    ports:
      - 10300:10300
    volumes:
      - ${DOCKERCONFDIR}/whisper/english:/data
    command: --model tiny-int8 --language en

  wyoming-piper-da:
    image: rhasspy/wyoming-piper
    container_name: piper-da
    ports:
      - 10201:10200
    volumes:
      - ${DOCKERCONFDIR}/piper/danish:/data
    command: --voice da-nst_talesyntese-medium

  wyoming-whisper-da:
    image: rhasspy/wyoming-whisper
    container_name: whisper-da
    ports:
      - 10301:10300
    volumes:
      - ${DOCKERCONFDIR}/whisper/danish:/data
    command: --model tiny-int8 --language da
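Assuming the file above is saved as docker-compose.yml, the containers can be brought up and checked like this; each service is then added in Home Assistant via the Wyoming integration, pointing at the host’s IP and the mapped port (10200/10201 for Piper, 10300/10301 for Whisper):

```shell
docker compose up -d                     # start all four containers in the background
docker compose ps                        # confirm they are running
docker compose logs -f wyoming-piper-en  # follow one service's logs to watch the model download
```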

I’m currently running HA on a RPI4, therefore I would like to keep the non-essential and CPU intensive tasks (frigate, tts, stt) on a separate host. It sounds like the Wyoming protocol will allow me to run piper and whisper on my Proxmox box so I don’t overload the Pi. Is there more documentation on this use case? Have others setup and tested this type of a configuration?

I tested the configuration from @koying and it works without any issue. I didn’t test the configuration from @bjorn.sivertsen because I currently don’t have time to do it, but I will.
I don’t use a Pi because I think you need a desktop computer for running HA, and there is no need for separate hosts if you are using Docker. You can easily limit CPU and memory usage in Docker like this:

    shm_size: "64mb"
    mem_limit: "256m"
    mem_reservation: "256m"
    memswap_limit: "1024m"
    cpus: "1"
    cpuset: "2"

This is from my Frigate config. My average CPU usage is about 23% running 22 containers on an Intel(R) Core™ i3-4130 CPU @ 3.40GHz. For documentation you can search around the net. Setting these limits doesn’t have much impact on my system, maybe around 0.5% to 1% of CPU usage.
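Applied to one of the whisper services from the compose file earlier in the thread, the same kind of limits would look something like this (the values are illustrative, not tuned):

```yaml
  wyoming-whisper-en:
    image: rhasspy/wyoming-whisper
    container_name: whisper-en
    ports:
      - 10300:10300
    volumes:
      - ${DOCKERCONFDIR}/whisper/english:/data
    command: --model tiny-int8 --language en
    # Classic compose-style resource limits, as in my Frigate config
    mem_limit: "512m"
    memswap_limit: "1024m"
    cpus: "2"
```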

This seems like a good place to post this…

I installed Piper/Whisper on a separate Docker host and wanted to use the highest-quality English model. It took 4 full CPU cores for about 12 seconds before coming up with an answer to a short request.

I happened to have a beefy server available, so I did some digging and found that passing the environment variable OMP_NUM_THREADS=8 allows the whisper process to use 8 cores instead of just 4.
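In compose form, passing that variable to the whisper container would look like this (8 is just the thread count I used; adjust to your host):

```yaml
  wyoming-whisper-en:
    image: rhasspy/wyoming-whisper
    container_name: whisper-en
    environment:
      - OMP_NUM_THREADS=8   # let the whisper process use 8 threads instead of the default 4
    ports:
      - 10300:10300
    volumes:
      - ${DOCKERCONFDIR}/whisper/english:/data
    command: --model tiny-int8 --language en
```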

I can’t say it is twice as fast, but it is faster.


You mean “medium-int8”, right?
I also ran similar tests, for French, and also hit a wall at 13-15 s with various CPUs.

Yes, I’m using medium-int8. I’m at 12 cores now with beam-size 5, and I think I’m getting around 10 seconds. I might just throw an insane number of cores at it just to see what it does…


Good to know that CPU’s are basically irrelevant :wink:

Haha, you can say that again. I just threw 100 CPU cores at it… it used 3000%-4000% CPU (30-40 cores) for a second or two, then settled around 800%… and it still timed out. I then saw in the container logs that I got an answer in 34 seconds (4 seconds after the timeout).

Yeah. Maybe that beast wants quality rather than quantity. The CPUs I have are low-to-middle spec at best.

I’m reasonably sure that the container isn’t set up for GPU acceleration, but faster-whisper can use a GPU if it’s available. See the ctranslate2 docs: Installation — CTranslate2 3.13.0 documentation

I’m running my install on an old Mac Mini at the moment; there is a CUDA replacement that works on Intel integrated GPUs, but it only supports Skylake and later chips and mine’s too old. But if you’re running on a newer chip or have a real graphics card available, in theory you should be able to offload whisper processing to the GPU.
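Outside the container, faster-whisper exposes the device choice directly; a minimal sketch, assuming the faster-whisper package is installed and a CUDA-capable GPU is present (the model is downloaded on first run, and the file name is a placeholder):

```python
from faster_whisper import WhisperModel

# device="cuda" runs inference on the GPU through ctranslate2;
# on a CPU-only machine use device="cpu" instead.
# compute_type="float16" is a common GPU choice; "int8" suits CPUs.
model = WhisperModel("medium", device="cuda", compute_type="float16")

# "request.wav" is a hypothetical recording of a short voice command
segments, info = model.transcribe("request.wav", beam_size=5)
for segment in segments:
    print(segment.text)
```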

Anyone have any luck installing these on Synology? I think I have Piper up and running, but Whisper keeps getting a core dump error.

/run.sh: line 5: 7 Illegal instruction (core dumped) python3 -m wyoming_faster_whisper --uri 'tcp://0.0.0.0:10300' --data-dir /data --download-dir /data "$@"

I’ve installed from the Docker Hub Rhasspy repo, mapped my port and volume data folder, and added the command: --model tiny-int8 --language en

The network is set in bridge mode. What am I doing wrong here?

Might be something else @guttermonk? I had no issues with basically the above docker-compose on a Synology DS920+.

I can get each of Whisper and Piper into HA via the Wyoming integration, but I haven’t had any luck from there yet. I’m getting a No text recognized (stt-no-text-recognized) error in the Assist Pipeline debug window. Not really sure how to debug from there as it could be a) the microphone (using an Atom Lite running Voice Assistant), b) the sound getting piped to whisper, c) whisper understanding it.

I get the same thing with both on a beefy PC, using a known working microphone.

Hi everyone, thanks for the examples and docker compose files :+1:

Do you know why the volumes are needed? Which data is stored, and what is the downside if I don’t add a volume for it? Will it be deleted if I pull a new image and recreate the container?

I don’t want to store the data on the Docker host. I could create a named volume for it, but I’m wondering if it is even necessary.

I think you are right; the volume is there just so you don’t need to re-download the model data every time the container is recreated.


Thank you for the compose file. I know this isn’t HA related, but when I bring the containers up, they die after a few seconds because DNS resolution fails. If I use docker run, they successfully download the files. After much googling, I tried adding a DNS entry to my compose file, and even mapping my /etc/resolv.conf into the container, but no joy. I found a lot of questions referencing this, but not many answers.
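For reference, the two workarounds I tried would look roughly like this in compose; neither helped in my case, but posting them so others can compare:

```yaml
  wyoming-piper-en:
    image: rhasspy/wyoming-piper
    dns:
      - 1.1.1.1   # explicit upstream resolver for this container
    volumes:
      # or: reuse the host's resolver configuration read-only
      - /etc/resolv.conf:/etc/resolv.conf:ro
```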

That doesn’t make any sense to me.
Docker compose is the same as docker run, just based on a configuration file rather than setting the parameters on the command line.

Are you sure it’s a dns issue, and not a “file not found”?
Show the actual errors, please.

I had something similar happening. Are you using Adguard?

2023-06-04T13:40:20.177860612Z WARNING:wyoming_faster_whisper.download:Model hashes do not match
2023-06-04T13:40:20.177982277Z WARNING:wyoming_faster_whisper.download:Expected: {'config.json': 'e5a2f85afc17f73960204cad2b002633', 'model.bin': 'ecd0fd5e2eb9390a2b31b7dd8d871bd1', 'vocabulary.txt': 'c1120a13c94a8cbb132489655cdd1854'}
2023-06-04T13:40:20.177991675Z WARNING:wyoming_faster_whisper.download:Got: {'model.bin': '', 'config.json': '', 'vocabulary.txt': ''}
2023-06-04T13:40:20.177996951Z INFO:__main__:Downloading FasterWhisperModel.BASE_INT8 to /data
2023-06-04T13:40:30.192051696Z Traceback (most recent call last):
2023-06-04T13:40:30.192111478Z   File "/usr/lib/python3.9/urllib/request.py", line 1346, in do_open
2023-06-04T13:40:30.192807388Z     h.request(req.get_method(), req.selector, req.data, headers,
2023-06-04T13:40:30.192830854Z   File "/usr/lib/python3.9/http/client.py", line 1255, in request
2023-06-04T13:40:30.193292984Z     self._send_request(method, url, body, headers, encode_chunked)
2023-06-04T13:40:30.193308915Z   File "/usr/lib/python3.9/http/client.py", line 1301, in _send_request
2023-06-04T13:40:30.194060658Z     self.endheaders(body, encode_chunked=encode_chunked)
2023-06-04T13:40:30.194084034Z   File "/usr/lib/python3.9/http/client.py", line 1250, in endheaders
2023-06-04T13:40:30.194477126Z     self._send_output(message_body, encode_chunked=encode_chunked)
2023-06-04T13:40:30.194499898Z   File "/usr/lib/python3.9/http/client.py", line 1010, in _send_output
2023-06-04T13:40:30.194937016Z     self.send(msg)
2023-06-04T13:40:30.194959510Z   File "/usr/lib/python3.9/http/client.py", line 950, in send
2023-06-04T13:40:30.195377574Z     self.connect()
2023-06-04T13:40:30.195400020Z   File "/usr/lib/python3.9/http/client.py", line 1417, in connect
2023-06-04T13:40:30.195958570Z     super().connect()
2023-06-04T13:40:30.195981186Z   File "/usr/lib/python3.9/http/client.py", line 921, in connect

But like I said, it runs and downloads the model just fine using docker run