Replacing Alexa or Google Home with Almond and Ada


Hi all,

I got very excited about the recently introduced Ada and Almond.

I consider privacy to be a fundamental human right and tell all of my friends so. Having an Echo Dot (Alexa) doesn’t strengthen my arguments, haha.

Therefore, I am considering getting rid of the Echos in my house completely and replacing them with a device I can use for simple things, such as turning devices on or off and activating scripts/scenes.

As it stands now, Almond already seems capable of this; however, it still requires me to type commands, which is more work than just flipping a switch.

I want to use a Pi Zero (or similar) with a good microphone to accomplish this.

Rhasspy seems to do everything I want, but I’d prefer a simpler solution that doesn’t require a lot of configuration. Rhasspy’s documentation includes a list of recommended hardware, so that is probably a good place to start.

My idea for this topic is that we discuss how we can use Ada to listen for a wake word and then forward the transcribed speech to HA’s Almond integration.
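For the “forward the transcription” step, here is a minimal sketch against Home Assistant’s REST API (the `/api/conversation/process` endpoint is part of HA’s REST API; the host, port, and token below are placeholders, not anything from this thread):

```python
# Sketch: forward recognized speech to Home Assistant's conversation
# integration (which Almond can back). Host/port/token are placeholders.
import json
import urllib.request


def build_payload(text: str) -> bytes:
    """Build the JSON body for POST /api/conversation/process."""
    return json.dumps({"text": text}).encode("utf-8")


def send_to_conversation(text: str, base_url: str, token: str) -> dict:
    """POST the transcribed sentence to Home Assistant and return its reply."""
    req = urllib.request.Request(
        f"{base_url}/api/conversation/process",
        data=build_payload(text),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example (assumes a reachable HA instance and a long-lived access token):
# send_to_conversation("turn on the living room lights",
#                      "http://homeassistant.local:8123", "YOUR_TOKEN")
```

Whatever does the wake word + STT part would just need to call something like this with the final transcription.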

So who else is interested in this and/or has already started tinkering?


Rhasspy author here. Just a heads up: I’m working on an HA integration that will allow Rhasspy to be an STT platform like Ada. You can still use Almond downstream, though what you can say will be more limited.


Oh, that sounds awesome!

What do you mean by “what you can say will be more limited”? More limited than what?

More limited than what you can get out of a cloud speech service. This is the key trade-off between Rhasspy and Ada, which currently uses Microsoft’s cloud for speech recognition.

To be a little more precise, Rhasspy has 3 modes of operation for speech recognition (all completely offline):

  1. Closed
    • The default mode, where only the voice commands you specify can be recognized. This is what Rhasspy was designed for, and where it shines.
  2. Open
    • Recently added, this mode uses a general language model and ignores any custom voice commands. You can say anything, and Rhasspy will do its best to transcribe it. But you will probably find the performance to be poor compared to a cloud service.
  3. Mixed
    • An interesting combination of Open and Closed. Your custom voice commands are mixed into the general language model. You can say anything (like Open), but Rhasspy will be more likely to recognize your custom voice commands (like Closed). This mode is much slower than Closed, so a NUC or server should be used instead of a Pi.

It will be possible soon to use Rhasspy just for speech recognition, and have it forward sentences to HA’s conversation integration for intent recognition (using Almond, etc.).
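If that lands, I’d guess the HA side ends up looking roughly like this in `configuration.yaml` — a sketch only, assuming the Almond integration’s local-server options (verify the exact option names against the current integration docs):

```yaml
# Sketch: enable the conversation integration and point Almond
# at a locally running Almond server. Option names unverified.
conversation:

almond:
  type: local
  host: http://localhost:3000
```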


One small clarification: Ada can use any available STT integration in Home Assistant. Currently the only one available is Microsoft’s cloud service.


That’s true, my bad. Rhasspy will be one of the STT integrations in the future too.


For now, until the Rhasspy integration works with Almond, it seems like Ada in the cloud is the best option.

Thanks a lot, @synesthesiam for your clear explanation and your continued work on Rhasspy! :smiley:

@balloob, have you tried to get Ada + Almond to work on a Pi? I have seen Pascal’s video where he uses a Pi, but is that the released version?

Are there instructions somewhere on how to connect speakers and a microphone to a Pi running Home Assistant? Or is it just plug-and-play, with the OS detecting them by itself?

For me at least, with no speakers or microphone connected yet, I get an error when I start the Ada add-on, and I don’t know if that is expected. edit: this is expected.


Very interesting. I’ve wanted to set up something like this for a while now.
I have to say I’m a bit conflicted on which platform to start with, though.
Rhasspy sounds (from the small amount of reading I’ve done so far) like it will do what I want, but with the official backing of Home Assistant, will Ada be better supported?
Hmm, so many options. Either way, I look forward to seeing this area develop.


I’m absolutely on board with using Almond and Ada. It seems there isn’t much in the way of documentation yet. I’ve posted about using a PlayStation Eye microphone as input, without so much as a response.

Good day, everyone. I am also very interested in building an Alexa-like device for my home.
At some point I tried the built-in voice commands but never got them to work.
Therefore, I am very excited to see the Ada and Almond idea.

As Home Assistant seems to have a strategy of simplifying things, I am a little surprised about the “server-based” approach for Ada.
Technical boundaries aside, I feel the best approach would be to include the interface in the Home Assistant apps for iPhone and Android. That way the built-in microphone and speaker could be used. In my case, I have a dedicated HA tablet in my living room, and I also have some spare Android phones that I would like to convert into Alexa-like devices.

What are your thoughts on this?