Replacing Alexa or Google Home with Almond and Ada


Oh, that sounds awesome!

What do you mean by

what you can say will be more limited

More limited than what?

More limited than what you can get out of a cloud speech service. This is the key trade-off between Rhasspy and Ada, which currently uses Microsoft’s cloud for speech recognition.

To be a little more precise, Rhasspy has 3 modes of operation for speech recognition (all completely offline):

  1. Closed
    • The default mode, where only the voice commands you specify can be recognized. This is what Rhasspy was designed for, and where it shines.
  2. Open
    • Recently added, this mode uses a general language model and ignores any custom voice commands. You can say anything, and Rhasspy will do its best to transcribe it. But you will probably find the performance to be poor compared to a cloud service.
  3. Mixed
    • An interesting combination of Open and Closed. Your custom voice commands are mixed into the general language model. You can say anything (like Open), but Rhasspy will be more likely to recognize your custom voice commands (like Closed). This mode is much slower than Closed, so a NUC or server should be used instead of a Pi.

It will be possible soon to use Rhasspy just for speech recognition, and have it forward sentences to HA’s conversation integration for intent recognition (using Almond, etc.).
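To make the idea concrete, here is a minimal sketch (not the shipped integration) of what forwarding a transcribed sentence to HA's conversation integration could look like over Home Assistant's REST API. The `/api/conversation/process` endpoint is real; the host URL and token below are placeholders you would replace with your own:

```python
# Sketch: forward an STT transcription to Home Assistant's conversation
# integration via the REST API. HA_URL and TOKEN are placeholders.
import json
import urllib.request

HA_URL = "http://hassio.local:8123"     # placeholder host
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"  # placeholder token


def build_payload(text: str) -> bytes:
    """Encode a transcribed sentence as the JSON body conversation expects."""
    return json.dumps({"text": text}).encode("utf-8")


def send_to_conversation(text: str) -> dict:
    """POST a transcription to HA and return the parsed response."""
    request = urllib.request.Request(
        f"{HA_URL}/api/conversation/process",
        data=build_payload(text),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

Rhasspy (or any STT front end) would call `send_to_conversation(...)` with whatever text it transcribed, and HA's conversation integration would handle intent recognition from there.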


One small clarification: Ada can use any available STT integration in Home Assistant. Currently the only one available is Microsoft Cloud.


That’s true, my bad. Rhasspy will be one of the STT integrations in the future too.


For now, until the Rhasspy integration works with Almond, it seems like Ada in the cloud is the best option.

Thanks a lot, @synesthesiam for your clear explanation and your continued work on Rhasspy! :smiley:

@balloob, have you tried to get Ada + Almond to work on Hass.io and a Pi? I have seen Pascal’s video where he uses a Pi, but is that the released version?

Are there some instructions somewhere on how to connect speakers and a microphone to a Pi running Hass.io? Or is it just plug-and-play and the OS detects them by itself?

For me at least, with no speakers or microphone connected right now, I get an error when I start the Ada add-on, and I don't know if that is expected? Edit: this is expected.


Very interesting. I’ve wanted to set up something like this for a while now.
I have to say I’m a bit conflicted about which platform to start with, though.
Rhasspy sounds (from the small amount of reading I’ve done so far) like it will do what I want, but with the official backing of Home Assistant, will Ada be better supported?
Hmm, so many options. Either way, I look forward to seeing this area develop.


I’m absolutely on board with using Almond and Ada. It seems there isn’t much in the way of documentation yet. I’ve posted about using a PS eye microphone as input without so much as a response.

Good day, everyone. I am also very interested in building an Alexa-like device for my home.
At some point I tried the built-in voice commands but never got them to work.
Therefore I am very excited to see the idea of Ada and Almond.

As Home Assistant seems to have a strategy of simplifying things, I am a little surprised about the server-based approach for Ada.
Leaving the technical boundaries aside, I feel the best approach would be to include the interface in the Home Assistant apps for iPhone and Android. In that case, the built-in microphone and speaker could be used. In my case, I have a dedicated HA tablet in my living room, and I also have some spare Android phones that I would like to convert into Alexa-like devices.

What are your thoughts on this?

@murphys_law, I think it should work out of the box with the PS3 Eye. I’ve also ordered one!

I haven’t been able to make it work. I get an error when starting the add-on:

@basnijholt
I haven’t tried Rhasspy yet (it’s on the ever-growing jobs list), but @jaburges has created a very easy-to-follow guide on how to install Rhasspy in a client/server setup here: Rhasspy - Offline voice control step by step (Server/Client) - Docker

It’s got to be worth a go setting it up, right? You get the option of keeping it all 100% local.

To get an Echo Dot-style hardware experience, what are the best options? ReSpeaker and VOICEN turned up when I tried a few Google searches.

By installing Almond, I seem to have lost my existing ‘Conversation’ sentences. I have lots of conversation intents that I used for simple but effective voice-based commands to control various lights and bulbs. After installing, none of my conversation sentences are recognized by Almond.

Help me understand,

  1. How different will Rhasspy Closed be from the existing ‘Conversation’ module in HA?
  2. Rhasspy Open: will this be restrictive like Almond? I.e., will the existing conversation or Rhasspy Closed custom intents stop working if this mode is active? If yes, then we are in the same loop as ‘Conversation’. I have lots of custom intents which my family has gotten used to, and for me, those are not to be replaced. I wanted a Snips-like experience where, regardless of the Snips NLP, my existing conversation intents co-existed in HA.
  3. Rhasspy Mixed: I think this is what I would be looking for. Hopefully it would retain the intents written for Rhasspy Closed (will it also support the existing Conversation?) along with Rhasspy Open.

So far, not a good experience with Almond. Even reverting to the original conversation is not working yet, even after removing Almond; somehow, conversation still references the previously installed Almond.

I think you will find that HA switched to Almond and that it is the way forward.
If you installed the Almond add-on for Hass.io, you just installed a local copy of the server.

Hi @manju-rn, I’ll do my best to answer your questions.

The HA conversation module takes in text and recognizes/handles intents. If you write your Rhasspy voice commands to match what conversation expects, then you can use your existing HA configuration. Just configure Rhasspy to use HA conversation for intent recognition.
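As a concrete illustration, the (pre-Almond) conversation integration lets you register custom sentences in configuration.yaml, so the Rhasspy voice commands just need to produce matching text. The intent name and sentences below are placeholders:

```yaml
# Hypothetical example: the intent name and sentences are placeholders.
conversation:
  intents:
    TurnOnDiningLight:
      - "turn on the dining room light"
      - "dining room light on"
```

Any STT front end that delivers one of those sentences to conversation will trigger the same intent.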

Once the HA intent integration goes live, Rhasspy will be able to trigger intents directly in HA, without needing conversation. This means you could port your conversation templates over to Rhasspy, but keep your intent_script configuration.
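For example, an intent_script handler like the sketch below (the intent name and entity_id are placeholders) could stay exactly as it is, regardless of whether conversation or Rhasspy recognized the intent:

```yaml
# Hypothetical example: intent name and entity_id are placeholders.
intent_script:
  TurnOnDiningLight:
    speech:
      text: "Turning on the dining room light."
    action:
      - service: light.turn_on
        entity_id: light.dining_room
```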

The existing Rhasspy custom intents will not work in Open mode, but your conversation intents should work just fine as long as Rhasspy can understand what you’re saying. I doubt the Open mode will work very well, but it’s worth a try since you don’t need to write any Rhasspy intents up front.

Since you have existing conversation templates, Mixed mode might allow you to gradually port intents over to Rhasspy. But I think using the Closed mode with Rhasspy configured for conversation would work better in the end.

Thanks, @synesthesiam. I appreciate the detailed responses; I get the idea now. Let me install Rhasspy and see how it goes. I will provide feedback.

@danbutter Yes, I understand that Almond is the way forward. What concerns me is that it breaks an existing, perfectly working solution (at least for me). Moreover, I wouldn’t have been too bothered if I could go back seamlessly after uninstalling Almond, as I would expect from any other add-on. Almond in this case is acting like a ghost even though I have uninstalled it, and now I am breaking my head over how to go back without having to revert to a previous version or reinstall HA.

I don’t see how you can go back to your old conversation mode without going back to an older version of HA. That is what I was getting at. From what I understand, the old conversation has been deprecated and is now powered by Almond.
It’s not great yet. I ask it to turn on the dining room light and it says it doesn’t understand.
I got it down to “turn on dining room” and it says, “OK, I’ll turn the AC on cool, is that right?”
Ummmmmm, no.
So I’m just waiting for it to mature a bit.

Well, I just restored my earlier snapshots, and only then was the Conversation element free of Almond. This is unfortunate, as I wanted to give Almond a try, but not at the expense of sacrificing the existing conversation intents (which, BTW, work reliably, and the entire family has gotten used to the custom words for each device/room over the last year or so). Maybe I will set up another RPi with HA and experiment with Almond and Ada.