I've installed Rhasspy with the add-on method in Home Assistant, but I can't open the web UI. This is my log, what's wrong?
DEBUG:__main__:Namespace(host='0.0.0.0', log_level='DEBUG', port=12101, profile='it', set=, ssl=None, system_profiles='/usr/share/rhasspy/profiles', user_profiles='/share/rhasspy/profiles')
DEBUG:RhasspyCore:Loaded profile from /usr/share/rhasspy/profiles/it/profile.json
DEBUG:RhasspyCore:Profile files will be written to /share/rhasspy/profiles/it
DEBUG:root:Loading default profile settings from /usr/share/rhasspy/profiles/defaults.json
DEBUG:WebSocketObserver: → started
DEBUG:DialogueManager: → started
DEBUG:DialogueManager:started → loading
DEBUG:DialogueManager:Loading actors
DEBUG:DialogueManager:Actors created. Waiting for ['recorder', 'player', 'speech', 'wake', 'command', 'decoder', 'recognizer', 'handler', 'speech_trainer', 'intent_trainer', 'word_pronouncer'] to start.
DEBUG:PyAudioRecorder: → started
DEBUG:APlayAudioPlayer: → started
DEBUG:DummyWakeListener: → started
DEBUG:EspeakSentenceSpeaker: → started
DEBUG:WebrtcvadCommandListener: → started
DEBUG:PocketsphinxDecoder: → started
DEBUG:FsticuffsRecognizer: → started
DEBUG:DummyIntentHandler: → started
DEBUG:PocketsphinxSpeechTrainer: → started
DEBUG:FsticuffsIntentTrainer: → started
DEBUG:PhonetisaurusPronounce: → started
DEBUG:DialogueManager:recorder started
DEBUG:EspeakSentenceSpeaker:started → ready
DEBUG:WebrtcvadCommandListener:started → loaded
WARNING:FsticuffsRecognizer:preload: [Errno 2] No such file or directory: 'profiles/it/intent.json'
DEBUG:DialogueManager:player started
DEBUG:FsticuffsRecognizer:started → loaded
DEBUG:DialogueManager:wake started
DEBUG:DialogueManager:handler started
DEBUG:DialogueManager:speech_trainer started
DEBUG:DialogueManager:intent_trainer started
DEBUG:DialogueManager:word_pronouncer started
DEBUG:DialogueManager:command started
DEBUG:PocketsphinxDecoder:Loading decoder with hmm=profiles/it/acoustic_model, dict=profiles/it/dictionary.txt, lm=profiles/it/language_model.txt
DEBUG:DialogueManager:speech started
DEBUG:DialogueManager:recognizer started
WARNING:PocketsphinxDecoder:preload: new_Decoder returned -1
DEBUG:PocketsphinxDecoder:started → loaded
DEBUG:DialogueManager:decoder started
DEBUG:DialogueManager:Actors loaded
DEBUG:DialogueManager:loading → ready
INFO:DialogueManager:Automatically listening for wake word
DEBUG:DialogueManager:ready → asleep
DEBUG:InboxActor: → stopped
INFO:__main__:Started
DEBUG:__main__:Starting web server at http://0.0.0.0:12101
Running on https://0.0.0.0:12101 (CTRL + C to quit)
[2020-02-29 15:11:43,103] ASGI Framework Lifespan error, continuing without Lifespan support
WARNING:quart.serving:ASGI Framework Lifespan error, continuing without Lifespan support
Caution here: since there is NO login required, you are opening up your system to the whole world.
I strongly suggest using the Rhasspy web UI on your local network ONLY.
Well no, I can't access my server with its IP address… I read somewhere that if I use DuckDNS, the server isn't reachable via its local IP address. Am I wrong?
I think the term you are looking for here is NAT loopback. If your router supports this, you can use your DuckDNS address from your local network.
For example, I have the official Android app set up with my DuckDNS address, and it doesn't matter whether I am home on my Wi-Fi or out on the cell network. I connect the same way either way.
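A quick way to check from any machine on your LAN whether your router does loopback (the DuckDNS hostname below is only an example, and 8123 is just Home Assistant's default port):

nslookup myhome.duckdns.org                 # should resolve to your public IP
curl -kI https://myhome.duckdns.org:8123    # if this answers from inside the LAN, loopback works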
Fuck me, you’re right
I was using the "Open Web UI" button on the Rhasspy add-on page, and if I don't forward the port it doesn't open. I've now tried with the IP address directly and it works without the port forwarding…
For the Hass.io instance I never checked using https instead of http with the local address… my fault.
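For anyone finding this later: the add-on's web UI can also be reached directly at the Home Assistant host's LAN address on port 12101, no port forwarding needed (the IP below is just an example, use your own host's address):

curl -I http://192.168.1.10:12101    # or simply open http://192.168.1.10:12101 in a browser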
Something broke after upgrading to the latest Hass.io 106.x, the latest hassio-audio #9, and the Rhasspy add-on 2.4.19.
First: Porcupine module didn’t load:
ERROR:PorcupineWakeListener:loading wake handle
Traceback (most recent call last):
File "/usr/share/rhasspy/rhasspy/wake.py", line 852, in in_started
self.load_handle()
File "/usr/share/rhasspy/rhasspy/wake.py", line 936, in load_handle
sensitivities=self.sensitivities,
File "/usr/share/rhasspy/porcupine.py", line 117, in __init__
raise self._PICOVOICE_STATUS_TO_EXCEPTION[status]('initialization failed')
Second: ALSA cards not available from container:
aplay: main:788: audio open error: No such file or directory
ERROR:APlayAudioPlayer:on_receive
Traceback (most recent call last):
File "/usr/share/rhasspy/rhasspy/actor.py", line 175, in on_receive
self._state_method(message, sender)
File "/usr/share/rhasspy/rhasspy/audio_player.py", line 67, in in_started
self.play_file(message.wav_path)
File "/usr/share/rhasspy/rhasspy/audio_player.py", line 90, in play_file
subprocess.run(aplay_cmd, check=True)
File "/usr/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['aplay', '-q', '-D', '', '/share/rhasspy/profiles/ru/beep_hi.wav']' returned non-zero exit status 1.
root@75f2ff60-rhasspy:/# ls -al /proc/asound/
total 0
drwxrwxrwt 2 root root 40 Mar 12 15:45 .
dr-xr-xr-x 273 root root 0 Mar 12 15:45 ..
root@75f2ff60-rhasspy:/#
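For comparison, a standalone Docker install of Rhasspy only sees sound hardware if the devices are passed through explicitly, roughly like this (image name and flags follow the 2.4 documentation as I remember it, so double-check them):

docker run -d -p 12101:12101 \
    --device /dev/snd:/dev/snd \
    -v "$HOME/.config/rhasspy/profiles:/profiles" \
    synesthesiam/rhasspy-server:latest \
    --user-profiles /profiles --profile ru

With the add-on there is no such flag to set, so an empty /proc/asound presumably means audio is supposed to go through the new hassio-audio/PulseAudio layer instead.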
I had similar issues.
Porcupine appears to have updated their .ppn files.
I re-downloaded my "jarvis.ppn" and now it is working normally.
Try re-downloading whatever Porcupine keyword file you were using - it was likely updated (mine was about 2 months old).
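If it helps, replacing the file is just a matter of downloading a fresh copy from the Picovoice Porcupine GitHub repo and dropping it wherever your profile's wake-word settings point (the paths below are only placeholders for illustration):

cp ~/Downloads/jarvis_linux.ppn /share/rhasspy/profiles/en/jarvis_linux.ppn
# then restart Rhasspy so the wake listener reloads the keyword file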
I think I had the second issue as well… I rebuilt everything in my profile and then my sound device wasn't being saved to profile.json. I updated it manually and it started working.
Looks like the sound device setting is not being saved to the profile correctly.
I haven't created an issue on GitHub yet, but I think both points are valid.
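For reference, the part I ended up editing by hand looks roughly like this (the section names follow Rhasspy 2.4's profile layout as far as I recall, and the device name is only an example, pick yours from aplay's list):

aplay -L        # list playback device names to choose from
# then set the device in /share/rhasspy/profiles/<lang>/profile.json, e.g.:
#   "sounds": { "system": "aplay", "aplay": { "device": "plughw:CARD=Device,DEV=0" } }
# and restart the add-on so aplay is no longer called with an empty -D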
Thank you for the help!
I tried using re-downloaded Porcupine ".ppn" files - same issue. In my case, bumblebee_linux.ppn and jarvis_linux.ppn.
On the second issue: before the upgrade I could see the names of the sound card devices on the Rhasspy add-on page. Now I can only see "Built-in Audio Analog Stereo". As far as I can tell, the hassio-audio add-on creates this device.
I'm running Rhasspy in Docker on a Raspberry Pi 4, and getting it to communicate with Home Assistant in HassOS/Hass.io on a separate Raspberry Pi 3 device.
I’m trying to move a number of existing Home Assistant intent scripts that have been set up for Google Assistant via the DialogFlow integration across to Rhasspy. Accordingly, I have set up Rhasspy using Kaldi in open transcription mode to then send intents (rather than events) to Home Assistant.
However, to get this working fully I need to replicate a feature of DialogFlow which allows the sending of slots not explicitly named/referred to in the intent itself. To do this I need to parse the original transcribed speech.
I see from the Home Assistant Developer docs on intents that this does exist in the form of an input-text property of the homeassistant.helpers.intent.Intent class, but I can’t for the life of me work out how to access this property in scripts and automations on the Home Assistant side.
Am I missing something really obvious here? Thanks for any advice
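For context, as far as I can tell Rhasspy posts intents to Home Assistant's /api/intent/handle REST endpoint, so whatever I can template against in an intent_script has to arrive in that payload's "data" slots. Reproducing the call by hand looks roughly like this (the host, token, intent name, and the "_text" slot are all made up for illustration; check what your Rhasspy version actually sends):

curl -X POST http://homeassistant.local:8123/api/intent/handle \
  -H "Authorization: Bearer LONG_LIVED_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "SetTimer", "data": {"minutes": "5", "_text": "set a timer for five minutes"}}'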
I'm trying to set up Rhasspy as a Hass.io add-on with a PS3 camera. It appears in the drop-down list in the add-on settings, but I can't see it in the Rhasspy web UI.
The following error shows up in the logs:
ERROR:DialogueManager:get_microphones
Traceback (most recent call last):
File "/usr/share/rhasspy/rhasspy/dialogue.py", line 782, in handle_forward
mics = recorder_class.get_microphones()
File "/usr/share/rhasspy/rhasspy/audio_recorder.py", line 261, in get_microphones
default_name = audio.get_default_input_device_info().get("name")
File "/usr/local/lib/python3.6/dist-packages/pyaudio.py", line 949, in get_default_input_device_info
device_index = pa.get_default_input_device()
OSError: No Default Input Device Available
Is this a config error or related to the audio problems I’m seeing around the forums since the latest updates?
Seems we need to install libasound2-plugins inside the Rhasspy Docker container. After that it's possible to set "pulse: PulseAudio Sound Server" as the output device, and output works as before!
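Roughly what that amounts to, from a root shell on the host (the container name is whatever docker ps shows for the Rhasspy add-on on your system, and note the change is lost whenever the add-on container is rebuilt or updated):

docker exec -it addon_XXXX_rhasspy /bin/bash
apt-get update && apt-get install -y libasound2-plugins
# then pick "pulse: PulseAudio Sound Server" as the output device in the web UI and restart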