Almond & Ada: privacy-focused voice assistant

Try this: Rhasspy offline voice assistant toolkit

That's not very strange. The moment you force people to pay a monthly fee to use services that should be free and open source, like everything in Home Assistant, it's bound to fail. I'm afraid we will see more and more features being added behind the paywall called Nabu Casa. It's a shame that the founders first tell people it's fully local and open source, then start a hosting service and slowly put everything behind a paywall. I keep downloading the latest versions of HA and preserving them for the day Home Assistant is sold to a big fish and everything disappears behind a paywall. At least then I can keep using it for free… In the end it's always about the money.

And what exactly are you referring to?

Looks like a recent change was made to the Almond add-on.

Note: since version 2.0.0 of the add-on, the use of the separate Ada add-on is not required. Almond includes built-in voice capabilities, using the wake-word "computer". It is recommended to avoid using Ada with Almond >= 2.0.0.

https://github.com/home-assistant/addons/tree/master/almond

Does anyone have any documentation or a manual on how to pass text to Almond via e.g. Node-RED?
I am using Rhasspy, but its major limitation is that you have to train the exact sentences and then use them.
If there is a way to pass raw text to Almond, it could be a very "smart" voice assistant.

I also tried Rhasspy, but from the little time I was able to spend on it, it looked like you needed an automation for each and every device that you want to control.
Please correct me if I'm wrong about this.
Anyway, I can't spend that much time making automations.
So I'll try out Almond 2.0 and see how it goes.

You don't need to make an automation for every device, but it is very time-consuming to make everything functional. So my idea was to use Rhasspy (which has incredibly accurate and fast speech recognition) and to transfer the recognized text to Almond. But I don't know how to create that last step. Currently I am able to get text into Home Assistant and back to Rhasspy to be spoken, but the transfer from HA to Almond is missing.
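
One possible way to bridge that gap, assuming the Almond integration is configured as Home Assistant's conversation agent: POST the recognized text to HA's REST endpoint /api/conversation/process and let HA hand it to that agent. A minimal Python sketch, where the address, token and response handling are assumptions about your setup, not a tested recipe:

```python
import requests

HA_URL = "http://homeassistant.local:8123"    # assumed address of the HA instance
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"     # created under your HA user profile

def send_to_conversation_agent(text: str) -> dict:
    """Pass recognized text to HA's conversation agent (Almond, if that integration is configured)."""
    resp = requests.post(
        f"{HA_URL}/api/conversation/process",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # the response shape depends on the HA/Almond version

# e.g. the text Rhasspy produced from speech
print(send_to_conversation_agent("turn on the kitchen light"))
```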

I was having the same idea.
Maybe text2json is more for us:

If I read it correctly, I don't see how it solves our problem. It is another "Rhasspy"-like system (so you have to write out all the commands to be able to use it).

Ah @cicinovec, I thought it'd be possible to capture the spoken word as text, without having to specify an intent.

The next best workaround would be to export the devices via a configurable script and load them into Rhasspy.
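
A rough sketch of that export idea, with hypothetical paths and filtering: pull the entity list from Home Assistant's REST API (GET /api/states) and write the friendly names into a Rhasspy slot file, which sentences.ini can then reference as a slot.

```python
import requests

HA_URL = "http://homeassistant.local:8123"          # assumed Home Assistant address
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"
SLOT_FILE = "/profiles/en/slots/ha_entities"        # assumed path inside the Rhasspy profile

# Fetch all entity states from Home Assistant
states = requests.get(
    f"{HA_URL}/api/states",
    headers={"Authorization": f"Bearer {HA_TOKEN}"},
    timeout=10,
).json()

# Keep only the domains worth voice-controlling (adjust to taste)
domains = ("light", "switch", "cover")
names = sorted(
    s["attributes"].get("friendly_name", s["entity_id"])
    for s in states
    if s["entity_id"].split(".")[0] in domains
)

# One value per line; sentences.ini can then use the list as ($ha_entities){name}
with open(SLOT_FILE, "w", encoding="utf-8") as f:
    f.write("\n".join(names) + "\n")
```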

Hello @SamJongenelen, Rhasspy can already do this. You can use Rhasspy to capture the sound, convert the speech to text and then send the text to Home Assistant.
The next step would be to provide this text to an assistant (I thought about Almond) which will translate it into an action.
The last step is to respond with an answer, which can be sent back to Rhasspy to be spoken.

The first and last steps can be done; I am stuck at step 2, as Almond does not have an API or a Node-RED node.
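
For anyone attempting the whole loop, a sketch of steps 2 and 3 together, under the same assumptions as the earlier conversation snippet: hand the text to Home Assistant's /api/conversation/process endpoint (which forwards it to Almond if that integration is the configured agent), then speak whatever answer comes back through Rhasspy's /api/text-to-speech endpoint. The addresses and response fields are guesses about a typical setup:

```python
import requests

HA_URL = "http://homeassistant.local:8123"    # assumed Home Assistant address
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"
RHASSPY_URL = "http://rhasspy.local:12101"    # 12101 is Rhasspy's default web port

def ask_and_speak(text: str) -> None:
    # Step 2: hand the recognized text to HA's conversation agent (Almond, if configured)
    result = requests.post(
        f"{HA_URL}/api/conversation/process",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"text": text},
        timeout=10,
    ).json()

    # Step 3: pull a spoken reply out of the response (field names vary between versions)
    answer = (
        result.get("speech", {}).get("plain", {}).get("speech")
        or "Sorry, I did not get an answer."
    )

    # Send the answer to Rhasspy's text-to-speech endpoint so it is spoken aloud
    requests.post(
        f"{RHASSPY_URL}/api/text-to-speech",
        data=answer.encode("utf-8"),
        timeout=10,
    )

ask_and_speak("turn on the kitchen light")
```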

Yeah, the link I posted is a small subset of Rhasspy.

We need a link to Almond for sure.

@SamJongenelen Do you know if it is possible to put this thing onto something like an ESP32, so you only have small listening devices in your home? For example, there's an ESP32 in the kitchen which is running LED strips, so why not just say "Hey Home Assistant, turn on the kitchen light"?

Almond has become Genie. There is a recent blog post.

Well, that's a satellite. But I've never had it running fully automated.

I was wondering if you managed to integrate Almond/Genie in between Rhasspy and Home Assistant?

Hi, no, I didn't. It is too complex for my level of knowledge at the moment.

You shouldn't need it in between.
I just set up the add-on on my Home Assistant, set up an RPi 4 with a conference USB microphone, connected the client to the Genie server and started issuing voice commands.
No extra setup was needed, as it pulled in all my HA devices.

It would be interesting to see if Ada can be replaced with Whisper from OpenAI, since that offers speech to text from "any" native language and can translate it into English.

Some people already took a stab at it and created a Docker container that exposes the speech recognition service through HTTP endpoints.
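
For experimenting locally rather than through one of those containers, a small sketch with the openai-whisper Python package (the model size and file name are placeholders); passing task="translate" is what turns speech in other languages into English text:

```python
# pip install openai-whisper   (ffmpeg must also be installed on the system)
import whisper

# Smaller models are faster, larger ones are more accurate
model = whisper.load_model("base")

# transcribe() runs speech-to-text; task="translate" outputs English
# regardless of the language being spoken
result = model.transcribe("recording.wav", task="translate")
print(result["text"])
```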

So it looked like there was a lot of talk about voice control, but now that the Ada add-on has been archived, how am I supposed to use voice control? I just bought the microphone, and Ada was just not working, so I uninstalled it, and now I can't reinstall it because it's been archived. How do I cut Google Home out and use a USB microphone plugged into my ODROID-N2+?