This! This! This! This makes me very enthusiastic! I hope there will be a possibility to tune it for a couple of voices, e.g. an entire family (man/woman/kid voices).
What sounds ideal from my point of view: to be able to record the wake word spoken by me, my wife, and my kids, multiple (e.g. 10-20) times, and then do the magic so that it works at least as well as "OK Google" does.
Is there something like a best practice around naming conventions you would recommend, also with the future in mind?
Example: if I have a main light in every room, HA itself doesn't allow naming them all just "main light" (duplicate names), with the area specifying the location. So "Switch on the living room main light" and "Switch on the kitchen main light" would only work if the room is part of the light's name itself, instead of relying on the area setting alone.
This is the hope. I've started my implementation, but haven't been able to test it with multiple people yet. David already has this implemented in openWakeWord as "custom verifier models", and his tests show a significant improvement in accuracy.
You can have entities with the same name, just not the same entity ID, so you should be able to have "main light" in different areas.
Something missing from HA right now is using area context to disambiguate these "duplicate" names. So saying "turn on main light" should always prefer the entity/device named "main light" in the satellite's area. This is a minor change, and it is on my to-do list.
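To illustrate the "same name, different entity ID" point, here is a minimal sketch (the entity IDs are made up, and the area itself is assigned through the UI or entity registry, not in this YAML) of giving two lights the same friendly name via `customize`:

```yaml
# Hypothetical example: two lights share the friendly name "Main light".
# Only the entity IDs must be unique; assign each entity to its area
# (Living Room / Kitchen) in the Home Assistant UI.
homeassistant:
  customize:
    light.living_room_main:
      friendly_name: "Main light"   # placed in the Living Room area
    light.kitchen_main:
      friendly_name: "Main light"   # placed in the Kitchen area
```

With area-aware disambiguation, "turn on main light" spoken to a satellite in the kitchen could then resolve to `light.kitchen_main`.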
Lack of functionality in the Android HA app for wake word detection and as an always-on wall panel display.
So far the Box-3 hasn't quite lived up to expectations. Wake word detection isn't reliable enough. It often locks up on the last step, so I have to reboot it. And not once has it played the full response at the end; it's always cut off at the beginning, or end, or both.
My hardware (x2):
- ESP32-S3-WROOM-1-N16R8
- INMP441 x2
- Jack output to standalone speakers with PCM5102a
I can't say I have many issues with wake word detection (Porcupine1); the detection is good and works from about as far away as the Echo 4. It will usually falsely trigger once (maybe twice) over the duration of a movie (I don't think that's too bad at this point, considering it's sitting right next to the speaker). I'm surprised this is currently #1 in the poll…
Whisper is slow, however; smaller models are fast but unreliable (past the simple on/off commands). I've settled on small-int8
(Intel NUC 6th-gen i5 with HA OS); it's better on reliability, but already much slower (~4 sec from end of speech to start of TTS).
It also seems to have a mind of its own: whenever it doesn't understand a word properly, if you repeat the same thing over and over it will give you the same result, but if you say something else and then come back to it, it might just work… (little green men in the wires, no doubt…)
And finally Piper: no problem here either, it just lacks customization. Personalizing all responses requires hijacking the text in the on_tts_start
event; voice identification would be nice to help with that too. A simple append/prepend template in the config would be nice. But first reliability and performance everywhere, then customization…
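For reference, a minimal sketch of hooking that event in ESPHome (the log tag and format are arbitrary); note that `on_tts_start` only exposes the response text as `x`, it doesn't currently let you rewrite the text in place, which is why personalizing responses feels like a hijack:

```yaml
voice_assistant:
  on_tts_start:
    # 'x' (std::string) holds the response text about to be synthesized
    - logger.log:
        format: "TTS response: %s"
        args: ['x.c_str()']
```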
The poll doesn't mention ESPHome per se, but I thought I'd mention the "messy" pipeline setup too. It needs a simpler declaration (like most other components) that just calls the proper functions at the proper time, without leaving the user to deal with those functions and the accompanying state/error-detection logic… There also seem to be issues related to audio and sleep in quite a few setups, similar to when a jack is plugged directly into HA (on the NUC): audio being cut off, or not playing at all, if the speaker isn't first "woken up" by a loud-enough sound or the jack unplugged/replugged. I've only experienced the second on my custom boards, but definitely the first one too, on the NUC itself.
Of course, my stuff is all on a breadboard for easy debugging, so I can't comment on the aesthetics (unless you like the spaghetti look…). I haven't looked into 3D-printing an audio-optimized box yet.
There is a lack of "plug and play" options for sure, and a lack of supported day-to-day non-device-related intents. Both of these are/will be a major factor against adoption…
Hi All,
First of all, it is really great to have voice control in HA! But there are some things I do not like:
First is the absence of audible feedback on wake word detection. If my Atom Echo is out of sight, or on a sunny day, it is hard to tell whether the wake word has been detected. It would be a really important feature! Sending a sound triggered by wake word detection to a continuously running media player is not an option: not everyone has one, and the confirmation would be picked up by the satellite (Atom Echo in my case) as a command.
Second is the occasional instability of the voice system in general. Although I try to speak clearly (in Hungarian), the wake word is not always detected. Also, I get the "I did not understand that" response too frequently. I have two satellites. The first one (installed first) usually works much better than the second. The second one is frequently not working at all: after wake word detection (several tries), whatever I say, it will not understand. Then it starts to understand, but flashes slowly for 10-15 sec before the command is actually executed. So the second satellite, identical to the first, is very unstable despite using HA Cloud.
The third thing is about custom trigger sentences for automations. These usually work. But after detecting the command and starting the requested automation, the Atom Echo flashes fast for 10-15 sec or more. Only after that does the confirmation ("Done") come and the satellite return to the listening state.
In general, I like this voice assistant very much, but there is still a lot to improve. (In Hungarian it still does not understand anything except "turn on" and "turn off".)
It seems that I managed to solve the second problem I mentioned:
The second Atom Echo was placed somewhere it could only pick up my voice too faintly. Because of that, there was no sharp, clean boundary between speech and silence, so, failing to detect the end of voice activity properly, it waited until the STT cycle timed out (~15 sec). My flat is a quiet place and I wanted my voice to be detected more precisely, so I added some tweaked parameters to the configuration of the Atom Echo devices using the ESPHome add-on. These parameters override the ones pulled in from the GitHub project (the packages line in the config below).
I changed the noise suppression level to 1 (quiet place) and the volume multiplier to 5 (stronger recorded voice). With these parameters the timeout is avoided and the assistant acts and replies almost promptly.
My configuration:

```yaml
substitutions:
  name: m5stack-atom-echo-0f8d14
  friendly_name: Háló asszisztens
packages:
  m5stack.atom-echo-voice-assistant: github://esphome/firmware/voice-assistant/m5stack-atom-echo.yaml@main
esphome:
  name: ${name}
  name_add_mac_suffix: false
  friendly_name: ${friendly_name}
api:
  encryption:
    key: <my-key-replaced>
wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password
voice_assistant:
  noise_suppression_level: 1
  auto_gain: 31dBFS
  volume_multiplier: 5.0
  vad_threshold: 3
```
Thank you very much. Vosk works great in Spanish. The small model is really fast on an RPi4 and quite accurate, and the big model still works fine on an RPi4 (~1.5 s for STT) and is really accurate. It basically only consumes RAM (about 4 GB, which is fine on my 8 GB RPi), not much CPU.
At least I can test the year of the voice.
An issue I have is trying to get the example from the release video to work.
I had posted a comment about it here but it isn't getting any traction, so I thought I'd spam it here too… The compiler errors wanting `esp-adf` instead of `esp-idf`, despite `esp-adf` not being a valid option (and it also fails if tried).
My code is straight from the example given on git… no idea what else to do.
EDIT: found out I had a couple of lines of code missing. All good now.
I think the lack of follow-up ability. When you start using an LLM (GPT-3/4) and it asks for more info or a follow-up, you're out of luck… (I did hear someone say to just say the wake word and answer, but I'm not sure that works.)
I really need local processing for what HA can do, with a fallback option if HA doesn't know, letting another assistant pipeline take over… Meaning, I want to control everything in HA locally, but also use it for conversational things.
Saying the wake word after the follow-up works.
What also works: "Is the hallway light on?" (answer is yes), then (wake word) "Turn it off, please".
If you donât mind custom components, there are solutions for this.
I made one for example:
As for the point of this thread: my second most-used Alexa feature, setting timers and reminders by voice, isn't available out of the box, and that's a problem for me.
I have implemented timers manually, but it was a hassle to set up, and having it built in would be nice.
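For anyone curious, here is a rough sketch of one way to wire up a manual voice timer (the sentence, entity IDs, and announcement target are my assumptions, not a drop-in solution), using HA's conversation trigger with a wildcard slot:

```yaml
# Sketch: "set a timer for 5 minutes" starts a delay, then announces.
automation:
  - alias: "Voice timer (sketch)"
    trigger:
      - platform: conversation
        command: "set a timer for {minutes} minutes"
    action:
      - delay:
          minutes: "{{ trigger.slots.minutes | int }}"
      # Hypothetical TTS and media player entities; replace with your own
      - service: tts.speak
        target:
          entity_id: tts.piper
        data:
          media_player_entity_id: media_player.kitchen_speaker
          message: "Your timer is done."
```

One automation per sentence pattern is the hassle; a built-in timer intent would remove all of this.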
My voice assistant doesn't recognize my daughter's and wife's voices, only mine! It seems that female voices are a challenge for it.
Using the voice assistant on Wear OS is very hit or miss. It rarely gets anything wrong, but sometimes picks up background noise like a TV. Also, it does very poorly at recognizing when the phrase has ended and continues to listen far longer than needed.
It's still a very impressive feat, and I'm looking forward to it being more polished in the future!
Check out the voice reminders blueprint I have just posted.
Anyone know how to add an audio acknowledgement on successful wake word detection?
I'm not always able to look at the LED to see if it's listening.
For example, "yes sir?" after triggering the wake word.
Has anyone found a solution to send audio feedback to another player?
On the ESP32-S3-Box-3 the sound is very muffled and cuts off the first words.
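One approach worth trying (a hedged sketch; the speaker entity and chime URL are placeholders I made up) is to hook ESPHome's `on_wake_word_detected` trigger and ask Home Assistant to play a short chime on a separate media player:

```yaml
# Sketch: play an acknowledgement chime on another player when the
# wake word is detected, via the Home Assistant API.
voice_assistant:
  use_wake_word: true
  on_wake_word_detected:
    - homeassistant.service:
        service: media_player.play_media
        data:
          entity_id: media_player.office_speaker   # placeholder entity
          media_content_id: "http://homeassistant.local:8123/local/chime.mp3"
          media_content_type: music
```

The chime file would live in the HA `www/` folder; latency over the network is the obvious trade-off versus playing it on the satellite itself.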
My voice commands are often not understood; it works much better with Alexa. I hope this will get better in the future, as I want to get Alexa out of my house.
Is anyone else seeing that they can control almost all of their devices except those using the Zigbee2MQTT add-on?