Rhasspy offline voice assistant toolkit

Hi! Thanks, it finally worked after some trying! The final automation:

- id: '1592172281599'
  alias: '[OMBI]OmbiMovieRequest'
  trigger:
    event_data: {}
    platform: event
    event_type: rhasspy_OmbiMovieRequest
  action:
    service: ombi.submit_movie_request
    data_template:
      name: "{{ trigger.event.data.name }}"

I’m going to try this syntax for Plex, if I find a workaround for the issue with it.

Sorry, I forgot… I chose Ombi to work out the right syntax to use for Plex, but I don’t have a slot file for Ombi, only for Plex.

Plex can choose a movie to play from the “Films” slot file, but Ombi is a service that adds movies to Plex.

So I have to set up Rhasspy and Home Assistant to recognize a word that is not in the sentences/slots.

For example:

Ajoute le film ($){name}

Where ($) is the unknown variable (“Ajoute le film” means “Add the movie”).

Is this something that can be set up?

In the worst case, I think I could make a slot file with thousands of movie and TV show names, but it’s going to take forever for Rhasspy to train on all that data.
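If the big-slot-file route is taken, at least generating the file can be scripted. A minimal sketch, assuming Rhasspy’s plain-text slot format (one value per line) and a hypothetical list of titles exported from Plex/Ombi; the paths and the `films` slot name are made up for illustration:

```python
from pathlib import Path

def write_slot_file(titles, slot_path):
    """Write movie titles to a Rhasspy slot file, one value per line.

    A sentence template such as `Ajoute le film ($films){name}`
    could then match any of the listed titles.
    """
    path = Path(slot_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    # Lowercase and de-duplicate to keep the training data as small as possible.
    cleaned = sorted({t.strip().lower() for t in titles if t.strip()})
    path.write_text("\n".join(cleaned) + "\n", encoding="utf-8")
    return len(cleaned)

# Example usage with a hypothetical export:
count = write_slot_file(["Interstellar", "Amélie", "interstellar "],
                        "/tmp/profiles/fr/slots/films")
```

Whether training time stays acceptable with thousands of entries is a separate question; slot files are at least cheaper to retrain than rewriting sentences.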

There is a nice button to back up your profile, but I could not find anything to restore a profile, which would be nice in case you have to reinstall Rhasspy (among other things) for some reason.
Is it just me who wasn’t able to find how to restore the profiles? I could not even find anything about it in this thread. There are also too many (almost 1200) replies in this topic to read.

I use Rhasspy in combination with Home Assistant using intent_scripts. This works, but I am looking for a more elegant and generic solution to the following problem.
I want to know the departure times of the train in a certain direction. I have temporarily solved that by creating two separate intent_scripts, one for each direction. But it seems to me that it should also be possible with one intent for multiple directions. I have tried the following, but it doesn’t work (yet). What needs to be changed?

In Rhasspy this should be the intent:

[GetTrain]
What time does the train go to (Berlin | Hamburg) (place)

In Home Assistant I have:

intent_script:

  GetTrain: 
     action:
       service: persistent_notification.create
       data_template:
         message: >
            data_template: >-
          {% if is_state('Rhasspy.place') , 'Berlin' %}
            message: The train goes at {{states('sensor.Berlin')}} hours to state('Rhasspy.place) 
          {% else is_state('Rhasspy.place') , 'Hamburg' %}
            message: The train goes at {{states('sensor.Hamburg')}} hours to state('Rhasspy.place)
          {% endif %} 

This will be turned into TTS once I’ve finished testing.
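For reference, one way the two sides could be corrected (a sketch under assumptions, not tested against this setup): in Rhasspy the alternative is tagged with `{place}` rather than `(place)`, and in Home Assistant the Jinja `if` compares the `place` variable that intent_script receives. The sensor entity ids are assumed lowercase (`sensor.berlin`), since Home Assistant entity ids cannot contain capitals:

```yaml
# sentences.ini
# [GetTrain]
# what time does the train go to (Berlin | Hamburg){place}

intent_script:
  GetTrain:
    action:
      service: persistent_notification.create
      data_template:
        message: >-
          {% if place == 'Berlin' %}
            The train to {{ place }} goes at {{ states('sensor.berlin') }}.
          {% elif place == 'Hamburg' %}
            The train to {{ place }} goes at {{ states('sensor.hamburg') }}.
          {% endif %}
```

The key point is that the slot value arrives as a plain template variable (`place`), not as a `Rhasspy.place` entity.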

You can just make a copy of the folder and put it back to restore it.

Hi @nordeep

I am also trying to get my instance of hassio + the rhasspy addon running on my raspberry. I have a respeaker mic array device that is not recognized by rhasspy. I am not totally sure about the steps you suggest regarding cloning the rhasspy hassio addon. How should I add the plugins package to the container? Would the steps be something like:

Cheers

Hi @dreamy

@synesthesiam added the necessary libraries to the latest Rhasspy 2.4.x, so you don't need to change anything in the add-on.
All you need to do is configure /etc/asound.conf. My config is as follows:

pcm.!default {
  type asym
  capture.pcm "cap"
  playback.pcm "speaker"
}
pcm.speaker {
  type plug
  slave {
    pcm "hw:0,0"
  }
}

ctl.!default {
    type hw
    card 0
}

pcm.array {
  type hw
  card VOICE
}
pcm.array_gain {
  type softvol
  slave {
    pcm "array"
  }
  control {
    name "PS3Eye Gain"
    count 2
    card 0
  }
  min_dB -40.0
  max_dB 15.0
  resolution 40
}
pcm.cap {
  type plug
  slave {
    pcm "array_gain"
    channels 4
  }
  route_policy sum
}

Hi,

Is there any way to control the volume of Rhasspy in software, instead of taking the volume level from alsamixer?
I want this to be independent because I have other sound components running on the same Raspberry Pi, each with its own volume setting, and I want to leave alsamixer at the top volume.
But since I cannot control the volume level in Rhasspy, it is just too loud.

Thanks!

I am running Rhasspy 2.4.20.3 in a Docker setup on a Pi 3, and it is not running as well as it was before. I don't know the reason. Before I dig into it, I would like to understand the 2.5 release, which is now side by side with 2.4.x.
Since I could not find instructions about this, I am asking here: should I uninstall it (preserving the configuration files) and then install the new one? I presume that this is what I should do, but before I do something wrong, I want to be sure to do it the right way.
Although I am running Docker, my current Rhasspy install is based on the HASSIO addon.
Any explanation of why there are now two add-ons side by side (2.4 and 2.5), and what the differences between them are, is welcome.
Thanks.

If you use the addon, I would suggest making a backup of the /share/rhasspy folder.
Then you can stop the Rhasspy addon (2.4) and install the Rhasspy 2.5 addon.

Once that is installed, you can start it, and it should be able to use all your settings.
You can switch between 2.4 and 2.5 just by starting and stopping them.
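The backup step can be sketched as a small shell helper (a sketch only; /share/rhasspy is the assumed add-on data path mentioned above, so check your own setup):

```shell
#!/bin/sh
# Copy a Rhasspy profile directory to a backup location before upgrading.
# Usage: backup_profile <profile_dir> <backup_dir>
backup_profile() {
    src="$1"    # e.g. /share/rhasspy (assumed add-on data path)
    dest="$2"   # e.g. /share/rhasspy-2.4-backup
    [ -d "$src" ] || { echo "no profile directory at $src" >&2; return 1; }
    cp -r "$src" "$dest"
}

# Example (commented out so the snippet is safe to paste):
# backup_profile /share/rhasspy /share/rhasspy-2.4-backup
```

Restoring is the same copy in the other direction while the addon is stopped.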

The reason there are two versions is that Rhasspy 2.5 initially used a pre-release version, to be able to start testing with it in Home Assistant.
Now 2.5 is released, but the 2.4 addon is still available.
2.4 is deprecated and will be removed in the future.

Thanks for your help. That’s what I am going to do.

Bump. Any help?

How did you create those different volume settings?

I didn’t, that’s the thing.
I am running on the same RPi:

  • shairport-sync, which uses the asound/alsamixer volume level directly, but is controllable via the physical volume buttons on iOS devices
  • mopidy, which has a “wrapper” in which I can set a volume level different from the asound/alsamixer level (up to its maximum)
  • rhasspy, which uses the asound/alsamixer volume level directly. It is here that I would like something similar to mopidy, where I could set a lower volume than the one defined at the asound/alsamixer level.

Thanks!

Solved it by creating a new virtual sound device with a different volume setting.
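For anyone landing here later, such a virtual device can be built with ALSA’s softvol plugin, similar to the capture-side example earlier in the thread. A sketch only: the device and control names here are made up, and the slave pcm must match your hardware:

```
# /etc/asound.conf (fragment)
pcm.rhasspy_out {
  type softvol
  slave {
    pcm "hw:0,0"            # the real playback device
  }
  control {
    name "Rhasspy Volume"   # appears as its own control in alsamixer
    card 0
  }
  max_dB 0.0                # cap below full hardware volume if desired
}
```

Pointing Rhasspy’s playback device at `rhasspy_out` then gives it a volume control independent of the other applications.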


Is there any way, using the satellite/server MQTT combo, to send the siteId of the call to Home Assistant via the intent? That way the response could be aware of the room it is in, in a simple way.
It could work like “Turn the AC on”, and if we omit the room, the default in HA would be the siteId of the call. If we wanted to turn on the AC in another room, we would just say “Turn the Living Room AC on”.
Thanks

It is better to do a feature request here:

Yep been talking over there also. I will post the feature there afterwards.
Right now I guess it is probably already possible. The intent payload already has the siteId; I just don’t understand the relation, or how to access it inside the intent_script.

Example:

{
  "input" : "turn livingroom AC on",
  "intent" : {
    "intentName" : "AC_Control",
    "confidenceScore" : 1.0
  },
  "siteId" : "livingroom",
  "id" : null,
  "slots" : [ {
    "entity" : "room",
    "value" : {
      "kind" : "Unknown",
      "value" : "livingroom"
    },
    "slotName" : "room",
    "rawValue" : "living room",
    "confidence" : 1.0,
    "range" : {
      "start" : 5,
      "end" : 15,
      "rawStart" : 5,
      "rawEnd" : 16
    }
  }, {
    "entity" : "state",
    "value" : {
      "kind" : "Unknown",
      "value" : "on"
    },
    "slotName" : "state",
    "rawValue" : "on",
    "confidence" : 1.0,
    "range" : {
      "start" : 19,
      "end" : 21,
      "rawStart" : 20,
      "rawEnd" : 22
    }
  } ]
}

On the HA side I have:

intent_script:
  AC_Control:
    speech:
      text: The {{ room }} AC was turned {{ state }}
    action:
      service_template: climate.turn_{{ state }}
      data_template:
        entity_id: "climate.daikin{{ room }}"

It should somehow be possible, instead of getting only the room and state variables from the payload, to also get, for example, the siteId; it would also be nice to get the room slot’s rawValue to put into the TTS.

Maybe it is possible, but I don’t know.
Thanks!
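As an illustration of what is available in the payload (a plain parsing sketch of the hermes intent JSON shown above, not a statement about which variables Rhasspy actually forwards to intent_script), the siteId and each slot’s rawValue can be pulled out like this:

```python
import json

def summarize_intent(payload: str) -> dict:
    """Extract siteId plus each slot's value and raw (spoken) value
    from a hermes/intent JSON payload."""
    intent = json.loads(payload)
    return {
        "siteId": intent.get("siteId"),
        "slots": {
            s["slotName"]: {"value": s["value"]["value"],
                            "rawValue": s["rawValue"]}
            for s in intent.get("slots", [])
        },
    }

# Trimmed example payload, matching the structure above:
example = json.dumps({
    "siteId": "livingroom",
    "slots": [
        {"slotName": "room",
         "value": {"kind": "Unknown", "value": "livingroom"},
         "rawValue": "living room"},
        {"slotName": "state",
         "value": {"kind": "Unknown", "value": "on"},
         "rawValue": "on"},
    ],
})
info = summarize_intent(example)
```

Note the distinction the payload makes between `value` (the substituted slot value, `livingroom`) and `rawValue` (what was actually spoken, `living room`), which is exactly the piece that would be nice to have for TTS.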

Hello folks :slight_smile: I have read this thread and tried to get info about voice commands here and on the rhasspy and ada/almond forums. My need is a little bit different from what you are trying to solve here (I think).

My native language is Czech, which is not supported by Rhasspy/Almond etc. Google Translate works flawlessly but is unusable this way. Google has been promising for over 6 years to start supporting Czech in their assistant, but still nothing.

So in fact I don't need full STT recognition. I only need to “record” or write a bunch of commands in Czech. Those commands would run some logic (turn things on/off, run an automation, etc.)

HASS could return TTS (via Google Home as the speaker), or if that is not possible in Czech, it could return a pre-recorded mp3.

Do you think Rhasspy is the way to go for my needs? I am asking because this project is quite big to test when I don't even know whether it will fit those basic needs.

Thank you for help.

Hi @Jiran, would you be interested in working together to add Czech support to Rhasspy? Forget Google :slight_smile: