Almond only supports English at the moment. Other languages are possible but are not currently on their roadmap.
About circular buffers, we’re looking for help in improving Ada. The source is here
In addition to Ada, have you considered looking at Common Voice (a Mozilla project)? It would be a more privacy-focused voice database than Azure Cognitive Services. I totally agree with your current approach to get it up and running, but for the long term, this might be the way to go…
I’m not sure if you can ‘sideload’ different firmware onto Echo devices; I imagine making a custom skill would be the way forward here.
Hi, I tried to set up the custom component for Telegram, but it doesn’t work. Has anyone tried this?
What about Snips? Did HA and Snips stop collaborating? Snips was looking more advanced, with more languages.
Would be nice to hear from both sides.
Just found out Snips was bought by Sonos a week after the State of the Union. Funny timing…
Snips community in shock…
I am a Mycroft advocate myself. Open source, privacy focused, multi-language, mature community, easy skill development.
Snips Voice Assistant is available already. Why reinvent the wheel?
Also, Snips is not open source.
How does this work when HASS is in its own container and Almond is in another? From what I can tell, the Almond container requires an Origin header of http://127.0.0.1:3000 to authenticate, but if I attempt to force that from the HASS container, I get a 403 CORS response.
EDIT: I can confirm that right now, both containers need to have host networking enabled. The reason: the local Almond server is set up to only allow API calls from the same host; if the Docker container doesn’t share the same networking stack as the HASS container, any request from HASS to Almond will get a 403 back. Seems a bit brittle, but I’m assuming it was the speediest way to get up and running.
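For anyone hitting the same 403, a minimal docker-compose sketch with both containers on host networking looks roughly like this (the image names and volume paths are my assumptions, not from the original post):

```yaml
# Hypothetical sketch: both containers use the host's network stack,
# so Almond sees requests from HASS as coming from the same host.
version: "3"
services:
  homeassistant:
    image: homeassistant/home-assistant:latest  # assumed image name
    network_mode: host                          # share host networking
    volumes:
      - ./config:/config
  almond:
    image: stanfordoval/almond-server:latest    # assumed image name
    network_mode: host                          # must match, or expect a 403
```

With `network_mode: host` neither service needs a `ports:` mapping, since both bind directly to the host.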
Now all we need is ESPHome modules for microphones (ideally with local “wake word” support) and speakers!
I’m not sure what hardware you would use here. Does the ESP32 have an ADC for a mic?
Exciting developments, very interested to see how this goes. Fully agree with the need for privacy.
I think Mycroft would be a great addition as well.
Almond seems to misunderstand quite often. For example, I have switches called Humidifier and Coffee Machine in HA. I DO see these in Almond’s My Skills, with the correct names. But when I actually issue a “turn on Humidifier” or “turn on Coffee Machine” command, it always goes:
which is obviously wrong. There are no errors or warnings in Almond’s log. Any tips?
Love where this is heading.
It would be great if Almond could use light groups to create automations though. Currently it can only use physical lights, which is a pain when I want to say turn all interior lights off when I leave home, which I have in a light group called light.interior.
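For context, a light group like the one mentioned can be defined in configuration.yaml with the standard light group platform (the member entity names below are placeholders):

```yaml
# Creates light.interior as a single entity wrapping several physical lights.
light:
  - platform: group
    name: Interior
    entities:
      - light.living_room   # placeholder entity
      - light.hallway       # placeholder entity
      - light.kitchen       # placeholder entity
```

Turning `light.interior` off then turns off every member light at once, which is exactly what Almond currently can’t target.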
For now, I will still mostly need to skip Almond.
Is there a way to set the automation editor to manual and not have Almond pop up first?
This sounds promising to me. I updated my HA from 0.92 to 0.102 because of this.
From my understanding, the Almond server that I have installed locally will not work when there is no internet, right? Because it needs to communicate with LUInet?
I’m trying to do the same. Did you have any success?
I created a folder called “telegram_bot_conversation” in the custom_components folder and added the two files into it: __init__.py and manifest.json.
But what is the next step? Do I need to add anything into the configuration file?
If you have the Telegram bot configured, that should be all, but it’s not working for me.
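For anyone following along, a basic telegram_bot setup in configuration.yaml looks roughly like this (the token and chat ID are placeholders; whether the custom component needs anything beyond this is an open question):

```yaml
# Standard Home Assistant telegram_bot integration, polling mode.
telegram_bot:
  - platform: polling
    api_key: !secret telegram_bot_token  # bot token from @BotFather
    allowed_chat_ids:
      - 123456789                        # placeholder: your Telegram chat ID
```

The chat ID must be listed in `allowed_chat_ids`, or the bot will silently ignore messages from that chat.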