Rhasspy Announcements


I do not know, but I have some spare time as well so I will check if I can fix it.


I have made a clean install of Hass.io and installed Rhasspy from scratch.
Same issue; I could also reproduce it with a small test add-on making the same POST to the Hass.io API.
I don’t know what is causing it; the code follows the documentation on this…


I wonder if they had a regression, either in code or in the docs? Either way, thanks for the update! It’s pushed to the add-on now, so let’s hope all is well :slight_smile:

The new version works on my systems!


Version 2.4.16

Some important changes in this version of Rhasspy on the way to 2.5:

  • Numbers and number ranges are now supported:
    • Putting 75 in your sentences.ini file will produce the integer (not string) 75 in your JSON event
    • Putting 1..100 will generate “one” to “one hundred” for your language, and put integers in the JSON
  • Built-in and custom converters can be used to convert named entity values to something besides strings in your JSON events.
  • Slot programs were added, allowing you to generate slot values during training with a custom program. The $rhasspy/number program does this for number ranges (0..100).
  • $rhasspy/days and $rhasspy/months slots are available now for every language (days of the week, months of the year)
  • Lots of bug fixes and great community contributions!
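As a sketch of the new syntax (the intent and slot names here are made up for illustration, not from the release notes), number ranges and the built-in slots look like this in sentences.ini:

```ini
[SetBrightness]
set [the] light to (0..100){brightness} percent

[GetSchedule]
what is happening on ($rhasspy/days){day}
```

Speaking “set the light to seventy five percent” should then arrive with brightness as the integer 75 (not the string “75”) in the JSON event.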

This is the first step to having built-in entities available in Rhasspy. In the future, Kaldi-based profiles will be able to deal with them natively. Other speech systems, like Pocketsphinx or DeepSpeech (someday) will fall back to these new methods.

Also, a big thank you to all of the community members who have stepped up to provide technical support, bug fixes/code clean-up, and documentation/tutorials! :clap:


Version 2.4.17

This version largely addresses issues discussed here and on GitHub. The road to version 2.5 is long, but we’re slowly making it.


Added:

  • Button in web UI to play last recorded voice command
    • Only plays the last command recorded through the web UI
    • Should help debug volume issues
  • RHASSPY_LOG_LEVEL environment variable
    • Overrides --log-level command-line option
    • Pass “debug”, “info”, “warning”, “error”, etc.
  • Web UI feedback during download
    • Small text box with status of files downloading/extracting
  • Add “asoundrc” config option to Hass.IO add-on
    • When set, contents of the config setting are copied to /root/.asoundrc when the add-on starts
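As an illustration (the device numbers are assumptions for a second sound card; check yours with arecord -l), the option could contain a minimal ALSA config like:

```yaml
asoundrc: |
  pcm.!default {
    type plug
    slave.pcm "hw:1,0"
  }
```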


Fixed:

  • Moved $profile/kaldi/custom_words.txt to $profile/kaldi_custom_words.txt
    • Avoids overwriting your Kaldi custom words when re-downloading profile
    • Your current file will be moved automatically. Make sure to include kaldi_custom_words.txt in your backups!
  • Slot substitution casing is kept during training/recognition
    • A slot value like abc:AbC will show up as AbC in the JSON event now
  • Fixed fuzzywuzzy and other intent recognizer training after addition of converters
  • Fix thread max count issue
  • Hide web UI alerts after 10 seconds
  • Delete partially downloaded profile files (on error)
  • Force slot programs to run each training cycle
    • Previously, output would be cached
  • Fix _raw_text in Hass event being same as _text
    • Lets you pass open transcription text through to HA

Version 2.4.18 Released

Hi, everyone! I realize a lot of users are patiently waiting for 2.5, but I still want to keep 2.4 fed and watered :slight_smile:

The two changes people will probably notice the most in 2.4.18 are (1) the addition of a new /api/events/wake websocket endpoint for reacting to a wake word detection, and (2) speaking a voice command will cause the intent to show up in the web UI (no need to use the buttons). Enjoy!


Added:

  • /api/listen-for-wake accepts “on” and “off” as POST data to enable/disable wake word
  • /api/events/wake websocket endpoint reports wake up events
  • /api/events/text websocket endpoint reports transcription events
  • Rhasspy logo changes in web UI when wake word is detected
  • espeak arguments list for text to speech
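For example, toggling wake word detection from Python using only the standard library (port 12101 is Rhasspy’s default web port; adjust the URL for your setup):

```python
import urllib.request

RHASSPY_URL = "http://localhost:12101"  # Rhasspy's default web port

def wake_payload(enabled: bool) -> bytes:
    # /api/listen-for-wake expects the literal strings "on" or "off" as POST data
    return b"on" if enabled else b"off"

def set_wake_listening(enabled: bool) -> str:
    req = urllib.request.Request(
        f"{RHASSPY_URL}/api/listen-for-wake",
        data=wake_payload(enabled),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Requires a running Rhasspy instance:
# set_wake_listening(False)  # temporarily disable the wake word
```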


Fixed:

  • STT output casing is fixed outside of HTTP API calls
  • All voice commands show up in web UI test page
  • Play last voice command button in web UI works for any command
  • Fixed commas in numbers with thousand separators
  • Words from Pocketsphinx wake keyphrase are added to dictionary
  • Pocketsphinx wake word keyphrase casing is fixed

Version 2.4.20 Released

While Rhasspy 2.5 is still in pre-release, I’m still issuing maintenance releases of 2.4.

This release includes libasound2-plugins in the main Docker image, to keep up with Hass.io’s changes to audio input. Other changes include:



  • Properly accept websocket connections
  • Don’t error out on missing porcupine files
  • Fix rawValue in MQTT messages

Hi @synesthesiam

Congrats. Your project is what I was looking for. I just started making my way in Hass.io, and this is a must-have add-on.

I’d like to thank you for your effort; I think you are driven by the challenge. Good!

My HASS setup is running in a VM image (hass.vmdk) downloaded from HASS.io and installed in Proxmox on my i5 NUC, so I have no connection to physical hardware, nor do I have a microphone near the server.

Steps to get HASSIO running in a Proxmox VM:
    1. Download hass.vmdk from HASS.io.

    2. Copy hass.vmdk onto your Proxmox host.

    3. Import the disk:

    qm importdisk 106 hass.vmdk local-lvm

    4. Create a dummy VM by following this tutorial, and attach the disk you just imported to it.


I do not have a Raspberry Pi, so I tried (without success) to run Rhasspy (Docker) in another Ubuntu VM that gets its audio through Windows.

I believe a Raspberry Pi is one option, but the cost can add up if you plan to install one in each room of the house for command recognition.

I also looked into ESP32 boards. They are cheap and programmable.

This may be a long shot, but my idea is to have ESP32 boards with audio-in and audio-out modules relaying the audio to Rhasspy installed directly in Hass.io, or in another Debian image running in a VM.

The layout would be something like this:

ESP32_1 = Audio detection -> packet/audio sent to Rhasspy MQTT
ESP32_2 = Audio detection -> packet/audio sent to Rhasspy MQTT
ESP32_3 = Audio detection -> packet/audio sent to Rhasspy MQTT

I have mosquitto MQTT installed in the HA.
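On the publishing side, each audio chunk needs to be framed before it goes over MQTT. A minimal sketch in Python, assuming WAV-framed mono 16-bit audio and a Hermes-style topic (the topic name and site ID below are assumptions, and paho-mqtt is a third-party package):

```python
import io
import wave

def pcm_to_wav_frame(pcm: bytes, rate: int = 16000) -> bytes:
    """Wrap raw 16-bit mono PCM samples in a WAV container."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(1)    # mono
        wav.setsampwidth(2)    # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(pcm)
    return buf.getvalue()

# Publishing (needs a broker and the paho-mqtt package):
# import paho.mqtt.client as mqtt
# client = mqtt.Client()
# client.connect("homeassistant.local", 1883)
# client.publish("hermes/audioServer/livingroom/audioFrame",
#                pcm_to_wav_frame(chunk))
```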

Sign me up for your testing on this platform :slight_smile:

Everybody, feel free to get in touch if you share my vision.

Many thanks!


I think this is definitely possible. MQTT is the best option if you have multiple sites and need to keep the streams separate.

You could also try the GStreamer UDP set up in Rhasspy. Then, you could just stream raw UDP audio in from the ESP32. There’s still some work to be done to add multiple UDP streams into Rhasspy, though. I’d be interested in your thoughts on this :slight_smile:
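For the UDP route, the sending side can be as simple as chunking raw PCM into datagrams. A sketch (the port is whatever your GStreamer udpsrc pipeline listens on; 12333 here is just a placeholder):

```python
import socket

def send_udp_audio(pcm: bytes, host: str = "127.0.0.1", port: int = 12333,
                   chunk_size: int = 2048) -> int:
    """Stream raw 16-bit PCM to a UDP listener in small datagrams.
    Returns the total number of bytes sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    try:
        for i in range(0, len(pcm), chunk_size):
            sent += sock.sendto(pcm[i:i + chunk_size], (host, port))
    finally:
        sock.close()
    return sent
```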

In case folks aren’t aware, Rhasspy has its own forum site these days: https://community.rhasspy.org/

I still post here occasionally, but most of the activity is over on the Rhasspy forums :slight_smile:

Hi @synesthesiam,

Thank you for your reply. I wasn’t aware of that community. I’ll join and try to catch up on the current state of the project.

Still trying to master the HA beast. :slight_smile:

I believe I’m actually running a virtual appliance, if I’m right. It was on version 0.113.3; last night I updated to 0.116.2 and lost the floorplan card I had started playing with.

So that’s where the “no hardware to connect” issue has to be solved. And to be practical, one listening device in the main room(s) suits me better than carrying the device around.

And my order for an ESP32 + MHT I2S microphone has now arrived. Hurray!

