Replacing Alexa or Google Home with Almond and Ada

Very interesting. I’ve wanted to set up something like this for a while now.
I have to say I’m a bit conflicted about which platform to start with, though.
Rhasspy sounds (from the small amount of reading I’ve done so far) like it will do what I want, but with the official backing of Home Assistant, will Ada be better supported?
Hmm, so many options. Either way, I look forward to seeing this area develop.


I’m absolutely on board with using Almond and Ada. It seems there isn’t much in the way of documentation yet, though. I’ve posted about using a PS3 Eye microphone as input, without so much as a response.

Good day everyone. I am also very interested in building an Alexa-like device for my home.
At some point I tried the built-in voice commands but never got them to work.
Therefore I am very excited to see the idea of Ada and Almond.

As Home Assistant seems to have a strategy of simplifying things, I am a little surprised about the server-based approach for Ada.
Leaving the technical boundaries aside, I feel the best approach would be to include the interface in the Home Assistant apps for iPhone and Android. That way the built-in microphone and speaker could be used. In my case I have a dedicated HA tablet in my living room, and I also have some spare Android phones which I would like to convert into Alexa-like devices.

What are your thoughts on this?

@murphys_law, I think it should work out of the box with the PS3 Eye. I’ve also ordered one!

I haven’t been able to make it work. I get an error when starting the add-on:

@basnijholt
I haven’t tried Rhasspy yet (it’s on the ever-growing jobs list), but @jaburges has created a very simple-to-follow guide on how to install Rhasspy in a client/server setup here: Rhasspy - Offline voice control step by step (Server/Client) - Docker

It’s got to be worth a go setting it up? You get the option of keeping it all 100% local.
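
For what it’s worth, the server half of that setup boils down to a single container. Here is a minimal docker-compose sketch, going from memory of the Rhasspy 2.4 docs (so double-check the image name, port, and flags against the linked guide):

```yaml
# docker-compose.yml -- a sketch, not a verified config.
# The image name, port 12101, and profile flags are as I remember
# them from the Rhasspy docs; confirm against the guide above.
version: "3"
services:
  rhasspy:
    image: synesthesiam/rhasspy-server:latest
    restart: unless-stopped
    ports:
      - "12101:12101"          # Rhasspy web interface
    volumes:
      - ~/.config/rhasspy/profiles:/profiles
    devices:
      - /dev/snd:/dev/snd      # pass the sound hardware through
    command: --user-profiles /profiles --profile en
```

The linked guide covers how to point the client instances at the server.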

To get an Echo Dot-style hardware experience, what are the best options? ReSpeaker and VOICEN turned up when I tried a few Google searches.

By installing Almond, I seem to have lost my existing ‘Conversation’ sentences. I have a lot of conversation topics that I used for simple but effective voice-based commands to control various lights and bulbs. After installing, none of my conversation words are recognized by Almond.

Help me understand:

  1. How different will Rhasspy’s Closed mode be from the existing ‘Conversation’ module in HA?
  2. Rhasspy’s Open mode: will this be as restrictive as Almond? i.e. will the existing conversation intents or Rhasspy Closed-mode custom intents stop working when this mode is active? If so, then we are in the same loop as with ‘Conversation’. I have lots of custom intents that my family has gotten used to, and for me those are not to be replaced. I wanted a Snips-like experience where, regardless of the Snips NLP, my existing conversation intents co-existed in HA.
  3. Rhasspy’s Mixed mode: I think this is what I am looking for. Hopefully it retains the intents written for Rhasspy’s Closed mode (will it support the existing Conversation as well?) alongside Rhasspy’s Open mode.

So far, not a good experience with Almond. Even reverting back to the original conversation is not working yet, even after removing Almond; somehow conversation still references the previously installed Almond.

I think you will find that HA has switched to Almond and that it is the way forward.
If you installed the Almond add-on for hassio, you just installed a local copy of the server.

Hi @manju-rn, I’ll do my best to answer your questions.

The HA conversation module takes in text and recognizes/handles intents. If you write your Rhasspy voice commands to match what conversation expects, then you can use your existing HA configuration. Just configure Rhasspy to use HA conversation for intent recognition.
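
For example, say your existing configuration.yaml defines custom sentences like this (the intent name and sentences here are placeholders for whatever you already have):

```yaml
# configuration.yaml -- hypothetical example; the intent name and
# sentences stand in for your existing definitions.
conversation:
  intents:
    LivingRoomLightOn:
      - "turn on the living room light"
      - "living room light on"
```

Rhasspy just needs to deliver matching text to the conversation API, and everything downstream behaves as it does today.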

Once the HA intent integration goes live, Rhasspy will be able to trigger intents directly in HA, without needing conversation. This means you could port your conversation templates over to Rhasspy, but keep your intent_script configuration.
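
Concretely, a handler like the sketch below would be left untouched; only the job of turning speech into the LivingRoomLightOn intent would move from conversation templates into Rhasspy:

```yaml
# configuration.yaml -- hypothetical handler for the example intent
# above; this part stays the same after porting.
intent_script:
  LivingRoomLightOn:
    speech:
      text: "Turning on the living room light."
    action:
      service: light.turn_on
      entity_id: light.living_room
```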

The existing Rhasspy custom intents will not work in Open mode, but your conversation intents should work just fine as long as Rhasspy can understand what you’re saying. I doubt the Open mode will work very well, but it’s worth a try since you don’t need to write any Rhasspy intents up front.

Since you have existing conversation templates, Mixed mode might allow you to gradually port intents over to Rhasspy. But I think using the Closed mode with Rhasspy configured for conversation would work better in the end.

Thanks @synesthesiam, I appreciate the detailed responses; I get the idea now. Let me install Rhasspy and see how it goes. I will provide feedback.

@danbutter Yes, I understand that Almond is the way forward. What concerns me is that it breaks an existing, perfectly working solution (at least for me). Moreover, I wouldn’t have been too bothered if I could go back seamlessly after uninstalling Almond, as I would expect from any other add-on. In this case Almond is acting like a ghost even though I have uninstalled it, and now I am breaking my head over how to go back without having to revert to a previous version or reinstall HA.

I don’t see how you can go back to your old conversation mode without going back to an older version of HA. That is what I was getting at. From what I understand, the old conversation has been deprecated and is now powered by Almond.
It’s not great yet. I ask it to turn on the dining room light and it says it doesn’t understand.
I got it down to “turn on dining room” and it says “Ok, I’ll turn the AC on cool, is that right?”
Ummmmmm, no.
So I’m just waiting for it to mature a bit.

Well, I just restored my earlier snapshot, and only then was the Conversation element free of Almond. This is unfortunate, as I wanted to give Almond a try, but not at the expense of sacrificing the existing conversation intents (which, BTW, work reliably, and the entire family has gotten used to the custom words set up for each device/room over the last year or so). Maybe I will set up another RPi with HA and experiment with Almond and Ada.

@synesthesiam, I finally got to try Rhasspy, and I must say, it is amazing!

In two hours I have been able to set it up and it seems to be able to do everything I want to do.

I do have one problem, though: when I click Hold to Record in the Speech tab, my intents are almost always recognized. However, when I use the Porcupine wake word, recognition is far less accurate. Does it somehow use a different STT engine then? Or is there some other difference?


Hi @basnijholt, glad to hear you’ve had an (overall) positive experience with Rhasspy 🙂

It goes through the same STT engine, so this almost certainly indicates a problem with the microphone configuration. That’s good news, since it should work great once you get it fixed! If you could post to the Rhasspy community site or open a GitHub issue with the details of your setup, I’d be happy to help.

Most commonly, switching between PyAudio and arecord for microphone input helps. In other cases, you’ll need to adjust your microphone volume or edit your asound.conf file.
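
If it does come down to ALSA, the edit is usually just pinning the default capture device. A sketch of what that might look like (the hw:X,Y numbers below are placeholders; run arecord -l and aplay -l to find your own card and device numbers):

```
# /etc/asound.conf -- a sketch, not a drop-in file.
# "hw:1,0" and "hw:0,0" are placeholders for your hardware.
pcm.!default {
    type asym
    capture.pcm "mic"
    playback.pcm "speaker"
}

pcm.mic {
    type plug
    slave {
        pcm "hw:1,0"    # your microphone card,device
    }
}

pcm.speaker {
    type plug
    slave {
        pcm "hw:0,0"    # your playback card,device
    }
}
```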

Has anybody got Ada working? I am getting this error:
https://paste.ubuntu.com/p/9qNxs8YFpJ/

I couldn’t get it to work either. This is my error log: similar, but not the exact same error.

Would it be possible to set up a synchronised pair of Ada and Almond servers? I’m thinking about having a “main” server installed next to my Home Assistant on my home server, while maintaining a “slave” server hosted next to my IPv4-to-IPv6 proxy (I’m behind CGNAT but have IPv6, so for external access I have an Azure VM running that forwards IPv4 connections to v6).
