Awesome! I haven’t gotten a lot of feedback on the French profile, so I’m interested to know how it works for you
I’m working with intents right now, so far everything works flawlessly. I think I had a little less success with OpenFST than with fuzzywuzzy.
I’m looking for some Home Assistant / Rhasspy integration examples. Is there any automation.yaml file involving Rhasspy on GitHub?
One other thing: I’m planning to build “satellites” and use my MQTT broker to transfer voice and sound. For that, I would like to deploy Hermes on a Pi 3 and a Pi Zero W. Is there any better way to do that? Or any architecture resources for Rhasspy?
If you want multiroom audio, check out Snapcast. There is an add-on for it; I used it, but I have since moved so I need to set it up again.
- Add this repo to your add-on store: https://github.com/bestlibre/hassio-addons/
- Install the Mopidy and Snapcast add-ons
- Install snapclient on your satellite Pis and point them to the snapserver.
- You also need media players (Snapcast and MPD) in HA; both of these are needed: https://www.home-assistant.io/components/mpd/ and https://www.home-assistant.io/components/snapcast/
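In configuration.yaml, those two media players look roughly like this (a sketch; the host IP is a placeholder for the machine running the add-ons):

```yaml
# configuration.yaml — host values are placeholders
media_player:
  - platform: mpd
    host: 192.168.0.105   # machine running the Mopidy add-on
    port: 6600
  - platform: snapcast
    host: 192.168.0.105   # machine running the Snapcast server add-on
```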
For automations, create them with the trigger type event; your event type will be event_type: rhasspy_Intent.
See my example here: Rhasspy offline voice assistant toolkit
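For reference, such an automation looks roughly like this in automation.yaml (the intent name and entity are made up; the event type is your intent name prefixed with rhasspy_):

```yaml
# automation.yaml — hypothetical intent and entity names
- alias: Rhasspy turn on light
  trigger:
    platform: event
    event_type: rhasspy_ChangeLightState   # rhasspy_ + your intent name
  action:
    service: light.turn_on
    entity_id: light.living_room
```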
Thanks a lot! This is exactly what I needed!
Edit: Actually, I think this is only half the answer to my problem. I’m looking for a solution to stream audio (captured by a microphone) from a Pi or something else to Rhasspy. Is there any way to do that? I would prefer a solution without Snips.ai, but if I have to, that’s the way I’ll go.
Edit 2: I played a little bit with Rhasspy last night, and I think the best thing to do is to deploy Rhasspy on several Pis and route the speech recognition to a main Rhasspy server, which will centralize all intents. Am I right?
@Romkabouter I see that you also got something that might be an answer to my issue in your GitHub https://github.com/Romkabouter/Matrix-Voice-ESP32-MQTT-Audio-Streamer
I thought you already had something for your satellites, sorry about that.
You can use my streamer, but then you need a Matrix Voice.
You can also use another Pi with Mic, and use Koen’s HermesAudioServer
I could not get that Mopidy add-on running again on my new HA setup, by the way, which is strange because I have used it before.
When I find the issue, I will share it.
Thanks again, that’s what I need as I already own a ReSpeaker. I’m going to try that this weekend. BTW, buying a Matrix Voice could be a solution that I won’t exclude. From what I understand, the Matrix Voice ESP32 is a standalone “Arduino-like” board that can be plugged into a Raspberry Pi?
Yes, you can flash the ESP32 via the Pi or even OTA.
Turns out Mopidy and Snapcast are working fine.
I had made a mistake in my client config.
I have this in the addon config section:
{
  "local_scan": true,
  "options": [
    {
      "name": "soundcloud/enabled",
      "value": "false"
    },
    {
      "name": "spotify/enabled",
      "value": "true"
    },
    {
      "name": "spotify/username",
      "value": "username"
    },
    {
      "name": "spotify/password",
      "value": "password"
    },
    {
      "name": "spotify/client_id",
      "value": "clientid"
    },
    {
      "name": "spotify/client_secret",
      "value": "mysecret="
    },
    {
      "name": "gmusic/enabled",
      "value": "false"
    }
  ]
}
My snapclient is set to connect to the IP address of the host running HA (and thus the add-on).
I have several clients now with synchronized audio playback.
After a long time without being able to work on Rhasspy, I tried hermes-audio-server last night. I can see my speech being routed over MQTT, but I can’t see any logs in Rhasspy related to it. Rhasspy tells me that MQTT is enabled and the connection is working fine, by the way.
Could you share your profile if you’re using a Matrix Voice? I think that could be helpful.
On another note, I’ve edited my profile.json using nano. I saw that my “site_id” was not set to “default” when using only the web UI. I don’t know if this is a bug or not.
Sure I can, I am in the process of setting up Rhasspy in my new home.
Can you explain a bit more in detail about your setup?
- where do you have Rhasspy installed?
- where is your MQTT broker installed?
- what are you using as a satellite?
that kind of thing
Hi,
My home automation system and services look like this:
Right now the green arrow seems to work just fine. As for the orange arrow, the connection seems to be OK (at least, Rhasspy tells me that it works), BUT Rhasspy seems to just ignore my Hermes topics.
I’m not at home right now, but as far as I’ve looked, Rhasspy doesn’t seem to produce any log entries related to MQTT Hermes messages.
Edit: My profile.json file could be the problem, but I could not go further last night.
Nice diagram
I don’t know what your settings are for the audio input, but this is what it should be:
- audio input should be to HERMES: https://rhasspy.readthedocs.io/en/latest/audio-input/
- site_id should match
Well… regarding my JSON profile, I think I’m good, as my microphone system is “hermes”:
{
  "command": {
    "system": "dummy"
  },
  "handle": {
    "system": "hass"
  },
  "home_assistant": {
    "access_token": "nope",
    "url": "http://192.168.0.101:8123"
  },
  "intent": {
    "fuzzywuzzy": {
      "min_confidence": 0.8
    },
    "system": "fuzzywuzzy"
  },
  "microphone": {
    "system": "hermes"
  },
  "mqtt": {
    "enabled": true,
    "host": "192.168.0.105",
    "password": "nope",
    "site_id": "default",
    "username": "athena"
  },
  "sounds": {
    "system": "hermes"
  },
  "speech_to_text": {
    "pocketsphinx": {
      "min_confidence": 0.8
    }
  }
}
And everything is published on the MQTT topic “hermes/audioServer/default/audioFrame”.
Is there any client I could use to listen to the audio streamed over MQTT? Right now I can’t really hear what is published on MQTT.
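For what it’s worth, here is a small Python sketch that collects the frames into one WAV file you can play. It assumes paho-mqtt is installed and that each payload on hermes/audioServer/&lt;siteId&gt;/audioFrame is a small WAV chunk of 16 kHz, 16-bit mono audio (as in the Hermes protocol); the broker host and duration are placeholders.

```python
# Sketch: dump Hermes audio frames from MQTT into a WAV file you can play.
# Assumptions: paho-mqtt is installed, and each audioFrame payload is a
# WAV chunk containing 16 kHz, 16-bit mono PCM.
import io
import time
import wave


def extract_pcm(wav_bytes: bytes) -> bytes:
    """Return the raw PCM samples from one WAV-formatted audio frame."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as frame:
        return frame.readframes(frame.getnframes())


def record_stream(broker_host: str, site_id: str = "default",
                  out_path: str = "stream.wav", seconds: int = 5) -> None:
    """Subscribe to the audioFrame topic and append its PCM to one file."""
    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    out = wave.open(out_path, "wb")
    out.setnchannels(1)
    out.setsampwidth(2)       # 16-bit samples
    out.setframerate(16000)   # Hermes streams 16 kHz audio

    client = mqtt.Client()
    client.on_message = lambda c, u, msg: out.writeframes(
        extract_pcm(msg.payload))
    client.connect(broker_host)
    client.subscribe("hermes/audioServer/{}/audioFrame".format(site_id))
    client.loop_start()
    time.sleep(seconds)       # capture for a few seconds, then stop
    client.loop_stop()
    out.close()
```

Running record_stream("192.168.0.105") while speaking at the satellite should leave a stream.wav you can play back with aplay to hear what is actually on the topic.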
Could you restart Rhasspy and post the output from the Log tab of Rhasspy from startup until after you tried to communicate using Hermes Audio Server? There should be something in the logs that can give us a clue.
This is my profile for Dutch:
{
  "handle": {
    "system": "hass"
  },
  "microphone": {
    "system": "hermes"
  },
  "mqtt": {
    "enabled": true,
    "host": "192.168.43.54",
    "password": "password",
    "site_id": "matrixvoice",
    "username": "username"
  },
  "sounds": {
    "system": "hermes"
  },
  "wake": {
    "system": "porcupine"
  }
}
I think only the diffs from the default profile.json are in there.
Hi guys, I created a container and ran Rhasspy, but it’s giving me the following error.
I tried downloading it again and recreated the container, but still no luck.
Can anyone please help and advise how to proceed from this point?
I’m running Buster on a Raspberry Pi 4 (4 GB variant).
What’s your docker run command?
docker run -d -p 12101:12101 \
--restart unless-stopped \
-v "$HOME/.config/rhasspy/profiles:/profiles" \
--device /dev/snd:/dev/snd \
synesthesiam/rhasspy-server:latest \
--profile en \
--user-profiles /profiles
Could you try:
docker run -d -p 12101:12101 \
--restart unless-stopped \
-v "$HOME/.config/rhasspy/profiles:/profiles" \
--device /dev/snd:/dev/snd \
synesthesiam/rhasspy-server:armhf \
--profile en \
--user-profiles /profiles
Same error!
My log is as follows:
DEBUG:__main__:Namespace(host='0.0.0.0', port=12101, profile='en', set=[], ssl=None, system_profiles='/usr/share/rhasspy/profiles', user_profiles='/profiles')
DEBUG:RhasspyCore:Loaded profile from /usr/share/rhasspy/profiles/en/profile.json
DEBUG:RhasspyCore:Profile files will be written to /profiles/en
DEBUG:root:Loading default profile settings from /usr/share/rhasspy/profiles/defaults.json
DEBUG:WebSocketObserver: -> started
DEBUG:DialogueManager: -> started
DEBUG:DialogueManager:started -> loading
DEBUG:DialogueManager:Loading actors
DEBUG:DialogueManager:Actors created. Waiting for ['recorder', 'player', 'speech', 'wake', 'command', 'decoder', 'recognizer', 'handler', 'sentence_generator', 'speech_trainer', 'intent_trainer', 'word_pronouncer'] to start.
DEBUG:APlayAudioPlayer: -> started
DEBUG:EspeakSentenceSpeaker: -> started
DEBUG:DummyWakeListener: -> started
DEBUG:PyAudioRecorder: -> started
DEBUG:WebrtcvadCommandListener: -> started
DEBUG:PocketsphinxDecoder: -> started
DEBUG:FsticuffsRecognizer: -> started
DEBUG:PocketsphinxSpeechTrainer: -> started
DEBUG:FsticuffsIntentTrainer: -> started
DEBUG:PhonetisaurusPronounce: -> started
DEBUG:DummyIntentHandler: -> started
DEBUG:JsgfSentenceGenerator: -> started
DEBUG:DialogueManager:player started
DEBUG:EspeakSentenceSpeaker:started -> ready
DEBUG:FsticuffsRecognizer:started -> loaded
DEBUG:DialogueManager:wake started
DEBUG:DialogueManager:recorder started
DEBUG:DialogueManager:speech_trainer started
DEBUG:DialogueManager:intent_trainer started
DEBUG:DialogueManager:word_pronouncer started
DEBUG:DialogueManager:handler started
DEBUG:DialogueManager:sentence_generator started
DEBUG:DialogueManager:speech started
DEBUG:DialogueManager:recognizer started
DEBUG:PocketsphinxDecoder:Loading decoder with hmm=/profiles/en/acoustic_model, dict=/profiles/en/dictionary.txt, lm=profiles/en/language_model.txt
DEBUG:WebrtcvadCommandListener:started -> loaded
DEBUG:DialogueManager:command started
WARNING:PocketsphinxDecoder:preload: new_Decoder returned -1
DEBUG:PocketsphinxDecoder:started -> loaded
DEBUG:DialogueManager:decoder started
INFO:DialogueManager:Actors loaded
DEBUG:DialogueManager:loading -> ready
INFO:DialogueManager:Automatically listening for wake word
DEBUG:DialogueManager:ready -> asleep
INFO:__main__:Started
DEBUG:__main__:Starting web server at http://0.0.0.0:12101
DEBUG:RhasspyCore:Using cached /profiles/en/download/cmusphinx-en-us-5.2.tar.gz for acoustic_model
DEBUG:RhasspyCore:Using cached /profiles/en/download/en-g2p.tar.gz for base_dictionary.txt
DEBUG:RhasspyCore:Using cached /profiles/en/download/en-g2p.tar.gz for g2p.fst
DEBUG:RhasspyCore:Removing /profiles/en/acoustic_model
DEBUG:RhasspyCore:Copying /tmp/tmp1ba09iq1/cmusphinx-en-us-5.2 to /profiles/en/acoustic_model
ERROR:__main__:Compressed file ended before the end-of-stream marker was reached
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/flask_sockets.py", line 40, in __call__
    handler, values = adapter.match()
  File "/usr/local/lib/python3.6/dist-packages/werkzeug/routing.py", line 1786, in match
    raise NotFound()
werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1832, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1818, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "app.py", line 201, in api_download_profile
    core.download_profile(delete=delete)
  File "/usr/share/rhasspy/rhasspy/core.py", line 513, in download_profile
    unpack(temp_dir)
  File "/usr/share/rhasspy/rhasspy/core.py", line 499, in <lambda>
    unpack = lambda temp_dir: shutil.unpack_archive(src_path, temp_dir)
  File "/usr/lib/python3.6/shutil.py", line 977, in unpack_archive
    func(filename, extract_dir, **kwargs)
  File "/usr/lib/python3.6/shutil.py", line 915, in _unpack_tarfile
    tarobj.extractall(extract_dir)
  File "/usr/lib/python3.6/tarfile.py", line 2010, in extractall
    numeric_owner=numeric_owner)
  File "/usr/lib/python3.6/tarfile.py", line 2052, in extract
    numeric_owner=numeric_owner)
  File "/usr/lib/python3.6/tarfile.py", line 2122, in _extract_member
    self.makefile(tarinfo, targetpath)
  File "/usr/lib/python3.6/tarfile.py", line 2171, in makefile
    copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
  File "/usr/lib/python3.6/tarfile.py", line 249, in copyfileobj
    buf = src.read(bufsize)
  File "/usr/lib/python3.6/gzip.py", line 276, in read
    return self._buffer.read(size)
  File "/usr/lib/python3.6/_compression.py", line 68, in readinto
    data = self.read(len(byte_view))
  File "/usr/lib/python3.6/gzip.py", line 482, in read
    raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached