Rhasspy offline voice assistant toolkit

Rhasspy is an offline voice assistant toolkit for Home Assistant. It was inspired by Jasper, but uses more modern libraries and tools (e.g., spaCy, snowboy).

You install rhasspy by adding custom components to Home Assistant (wake word detector, speech to text, etc.) and then train it on your own tagged sentences. This customizes the speech recognizer (pocketsphinx) and intent recognizer (rasaNLU), and then lets you handle intents with intent_script right inside Home Assistant to do whatever you want.
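As a concrete illustration, a minimal `intent_script` entry in `configuration.yaml` might look like the sketch below. The intent name and entity here are made up for the example, not taken from rhasspy itself:

```yaml
# configuration.yaml (illustrative sketch; the intent name
# GetGarageState and cover.garage_door are example names only)
intent_script:
  GetGarageState:
    speech:
      text: "The garage door is {{ states('cover.garage_door') }}"
```

When the intent recognizer returns `GetGarageState`, Home Assistant speaks the templated response.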

I’m in the process of documenting everything, but I hope there’s enough for people to get started and see if it’s useful for them. It should be possible to use it on a Raspberry Pi 3, on a desktop/laptop, or in a client/server model where the heavy lifting (speech/intent recognition) is done on a server while wake+recording happens on a Pi.

There are a few things I plan to add in the near future:

  • More documentation and quick starts
  • A way to automatically generate training sentences for all the switches in your configuration.yaml file (these work with the built-in HassTurnOn/Off intents)
  • An intent handler that lets you ask about the state of any named sensor (e.g., “Is the garage door open?”)
  • A chat plugin for driving rhasspy via text instead of speech from your phone (probably using matrix chat).
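The switch-to-training-sentences idea above could be sketched roughly like this. This is a hypothetical helper, not code from rhasspy; it emits markdown-tagged sentences in the style the old rasa_nlu training format uses:

```python
# Sketch: generate tagged training sentences for switch names found in
# configuration.yaml (hypothetical helper, not part of rhasspy itself).

def training_sentences(switch_names):
    """Yield rasa_nlu-style markdown-tagged sentences for each switch."""
    templates = [
        "turn on the [{name}](entity_name)",
        "turn off the [{name}](entity_name)",
    ]
    for name in switch_names:
        for template in templates:
            yield template.format(name=name)

sentences = list(training_sentences(["living room lamp"]))
# first entry: "turn on the [living room lamp](entity_name)"
```

Sentences like these would feed straight into training for the built-in HassTurnOn/HassTurnOff intents.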

Let me know if you have any questions or comments!


Looks very interesting. Would it be able to support other languages as well such as Dutch?

Yes! It should work with any language that both pocketsphinx and spaCy support (which appears to be English, Spanish, German, Dutch, French, and Italian).

You’ll need to grab an acoustic/language model and dictionary from the CMU Sphinx download section, and also download the Dutch spaCy language model from the available models page (running python3 -m spacy download nl may also work).

When you configure the rhasspy_train Home Assistant component, make sure to point it at your acoustic/language models and dictionary. Then, edit the config_spacy.yml file in rhasspy_assistant/etc/rasa and change the language to "nl".
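For reference, after that edit the relevant part of `config_spacy.yml` might look like this. This assumes the old-style rasa_nlu configuration format; the pipeline name is an example and may differ in your version:

```yaml
# rhasspy_assistant/etc/rasa/config_spacy.yml (sketch, not tested;
# "spacy_sklearn" is an assumed pipeline name)
language: "nl"
pipeline: "spacy_sklearn"
```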

I’d be very interested to know if this works for you.

I’d very gladly test this out with you, since I have been waiting for Snips to release an update supporting other languages, but I don’t think that’s on the roadmap for the coming months.

The things they discussed in their latest blog posts, such as their Snips AIR hardware, seem awesome, but it’s not here yet, so hopefully Rhasspy can help us out :blush:

We have some travel coming up in the next week and a short holiday break, but when I have some time, I will start setting things up remotely. The PS3 microphone has been mentioned on several occasions now, so that one is ordered and on its way.

The AIR hardware looks pretty neat! I wonder if it can run Home Assistant since it’s Debian based. The PS3 eye is surprisingly good for its price; I’ve also looked into some of the products from Respeaker.

When you download cmusphinx-nl-5.2.tar.gz, etc/voxforge_nl_sphinx.dic is your dictionary file, etc/voxforge_nl_sphinx.lm is your language model, and model_parameters/voxforge_nl_sphinx.cd_cont_2000/ is your acoustic model directory. So your configuration should look something like this (not tested):

  • rhasspy_train
    • dictionary_files = [etc/voxforge_nl_sphinx.dic]
    • language_model_base = etc/voxforge_nl_sphinx.lm
  • stt_pocketsphinx
    • acoustic_model = model_parameters/voxforge_nl_sphinx.cd_cont_2000/
    • language_model = etc/voxforge_nl_sphinx.lm
    • dictionary = etc/voxforge_nl_sphinx.dic
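Written out as YAML in `configuration.yaml`, that sketch would be roughly the following. It is untested, and the paths are relative to wherever you extracted the archive:

```yaml
# configuration.yaml (untested sketch; adjust paths to where
# you extracted cmusphinx-nl-5.2.tar.gz)
rhasspy_train:
  dictionary_files:
    - etc/voxforge_nl_sphinx.dic
  language_model_base: etc/voxforge_nl_sphinx.lm

stt_pocketsphinx:
  acoustic_model: model_parameters/voxforge_nl_sphinx.cd_cont_2000/
  language_model: etc/voxforge_nl_sphinx.lm
  dictionary: etc/voxforge_nl_sphinx.dic
```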

Please help. You have a great project.
I have installed and configured everything, but the error is as follows:
Component not found: rasa_nlu

Thanks for trying it out! Please check your Home Assistant log for installation errors regarding the rasa_nlu component. The rasa_nlu Python library is one of the tougher ones to get installed, because it depends on spaCy (which can take quite a while to build). Also, did you make sure to install a language model after everything else was done (python3 -m spacy download en)?

If all else fails, you can manually install rasa_nlu into your virtual environment with python3 -m pip install rasa_nlu[spacy] and watch the build process directly. Then start up Home Assistant and see if there are any other problems.

I’m installing on HassOS 1.7.
Can you help me?
Do you use TeamViewer?

Sorry, I haven’t used TeamViewer and I’m not very familiar with running Home Assistant on HassOS, but I’ll try to help.

Were you able to copy the Python files from custom_components to your Home Assistant’s config/custom_components directory? The error suggests it can’t find any of the code files.

I have followed your instructions.
You can access my computer. Can you help me with the configuration?

Hmmm…could you try copying the .py files directly into custom_components instead of inside another rhasspy-assistant directory?


I have followed your instructions and reconfigured.
Then I started to see that error.

Ok, thanks for trying. Does HassOS produce a log that we could see? I’m especially surprised that wav_aplay doesn’t load, since it has no dependencies (besides aplay being in the PATH).

I checked and rebooted, and I deleted rasa_nlu from my configuration.
Then the configuration check passed, and after restarting I got the message as follows:


Here is my log file

Are you able to upload the entire text log?

After some more searching, it looks to me like Hass.IO only supports “official” components and dependencies. Rhasspy absolutely needs some packages to be installed via apt-get before it will function, so you may need Hassbian to make this work for now.

OK, thank you.
I’m used to Hass.IO and do not want to move to Hassbian.

Thank you! That sounds very interesting…

I am running it in an Ubuntu VM on a NUC where Home Assistant is also installed. As a microphone, I attached a Jabra Speak 510 USB device.
I followed your single-machine guide, and outside of HA I can record with arecord and play files with aplay.
In the UI I have the hotword service listening (I haven’t used your customization):

[screenshot: hotword service shown as listening in the UI]

No error messages in the log, but nothing happens when I speak. I would expect at least a confirmation sound after hotword detection (the entries in automations.yaml are valid).
I guess my hotword_precise service is running according to the UI.
So maybe there’s something wrong with my sound setup.
Is there a possibility to check if speech input is being sent to the hotword_precise service?
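One low-tech way to rule out the sound setup: record a short clip outside Home Assistant (e.g. with `arecord -d 5 -f S16_LE -r 16000 test.wav`, as you already did) and check that the file actually contains signal. Here is a small Python sketch for that check; it is my own helper, not part of rhasspy:

```python
# Sketch: verify a recorded WAV contains actual audio by computing its
# peak sample value (hypothetical helper, not part of rhasspy).
import struct
import wave

def peak_amplitude(path):
    """Return the largest absolute 16-bit sample in a mono WAV file.

    A near-zero peak means the microphone is not delivering audio,
    so the hotword service would have nothing to detect.
    """
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return max((abs(s) for s in samples), default=0)

# e.g. peak_amplitude("test.wav") for a clip of normal speech should be
# well into the thousands; single digits suggest a dead capture device
```

If the recording looks healthy, the problem is more likely in how the hotword service opens the device than in ALSA itself.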