It would also be awesome if we could use the camera audio feeds that already exist in Home Assistant.
This is awesome!!
One question: does it work with many satellites sharing the same wake word? In that case, how does it work? Which one responds? With Alexa, it detects the nearest device, and since it knows which room that device is in, you can activate the lights without naming them (e.g. "Alexa, turn on the lights" turns on the lights in the room where the speaker is).
Some tutorials to make a DIY device with esp would be great too.
The satellite that detects the wake word first will be the one that responds. In the near future we will use the device ID to know which room the satellite is in, so we can turn on the right lights.
Stay tuned
Happy to download and install Python and Jupyter Notebook/Anaconda, but does anyone know how to get the linked Google Colab notebook to run on my AMD GPU rather than the CPU for faster training?
Amazing work, thank you guys
Question, I saw that you will be working on timers which is amazing news. Are you also planning to work on an “announce” feature similar to Alexa to send a message to all smart speakers in the house?
Will this do to run openWakeWord as a container? (docker-compose.yaml)

```yaml
version: "3"
services:
  openwakeword:
    container_name: openwakeword
    image: rhasspy/wyoming-openwakeword:latest
    command: [ "--preload-model", "ok_nabu" ]
    volumes:
      - "./openwakeword-data/custom-models:/custom"
    environment:
      - "TZ=Europe/Amsterdam"
    restart: unless-stopped
    ports:
      - "10400:10400"
```

I removed `--model` because it's deprecated.
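Once the container is up, a quick way to sanity-check that the Wyoming service is listening is a plain TCP connect to the published port. A minimal stdlib sketch (the host and port are whatever you mapped in your compose file, `10400` in the example above):

```python
import socket

def wyoming_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    i.e. something (hopefully wyoming-openwakeword) is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable
        return False

# Example: check the port published by the compose file above.
# wyoming_reachable("localhost", 10400)
```

This only proves a listener is there, not that it speaks the Wyoming protocol; for that, adding it via the Wyoming integration in Home Assistant is the real test.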
Looks like CloudFlare has nerfed the email address in the “Voice office hours for scientists” section.
Also, it’d be nice if there could be some kind of fallback / passthrough option available. We’ve got a bunch of Google Home units at our place, and we pay for YouTube Music, so it’d be nice to have basic stuff fulfilled locally (e.g. starting timers, asking about the weather etc.) with any requests that can’t be fulfilled by Home Assistant being sent on to the Google Assistant SDK, or wherever you want, to be fulfilled and played back on the speaker(s) of your choosing.
Fantastic announcement - congrats and thanks to all involved
This is a great project. I am so happy that we have wake words now and a couple of different ways to capture them.
Quick question: I was training my own wake word using the notebook. I followed the instructions and it ran through, but at the end, where it looks like it should download the files I need, it throws errors. The last line is: `FileNotFoundError: [Errno 2] No such file or directory: 'my_custom_model/Carson.onnx'`
There is more error output above that as well. Thoughts?
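For what it's worth, one common cause of that kind of `FileNotFoundError` is the output directory not existing when the final cell tries to write the exported model (often because an earlier training cell failed). A minimal stdlib sketch of the kind of guard you could run before the export cell; `ensure_output_dir` is a hypothetical helper, not part of the notebook, and the path just mirrors the one in the error message:

```python
import os

def ensure_output_dir(model_path: str) -> None:
    """Create the parent directory of an exported model file if it is missing."""
    out_dir = os.path.dirname(model_path)
    if out_dir:
        # exist_ok=True makes this safe to run even if the directory already exists
        os.makedirs(out_dir, exist_ok=True)

# Path mirrors the error message above; adjust to your notebook's output path.
ensure_output_dir("my_custom_model/Carson.onnx")
```

If the directory exists but the `.onnx` file still isn't there, the earlier training/export cells likely errored out, so scrolling up for the first error in the traceback is usually more telling than the final `FileNotFoundError`.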
Nice!
I don’t think it’s quite in the state where I can transition my existing voice control to Assist, but it’s getting close. The missing parts for me right now are:
- non-English custom wake word (Russian in my case, I’m currently using Snowboy, so adaptation of that to Wyoming would work for me as well)
- non-voice wake mechanisms, e.g. a service call or a button (might be there already and I'm just not seeing it?). I have a few scenarios where I switch voice waking off ("silence for X minutes", or when a media player is playing in the room), and a hardware override of some sort (a physical button?) would be useful.
As for the rest, I think I can already adapt things myself.
I was able to install the M5Stack Atom Echo smart speaker and it seems to be working. However, I see an update is available within ESPHome, and when I try to upgrade I get the following error that I can't get past. Any ideas what I need to do?
**edit - spoke too soon; it seems to be falling offline rather than staying connected.
I didn't see it mentioned, but I figured out it needs at least the beta version of ESPHome 2023.10.b. Unfortunately it is still falling offline, and I see this error in the logs → WARNING m5stack-atom-echo-8a1a74.local: Connection error occurred: [Errno 104] Connection reset by peer
Any thoughts on why this might be happening?
This is fantastic! I’m excited we’re as close as we are to getting a fully integrated local voice assistant! The one question I had that I haven’t seen mentioned yet is the ability to replicate the Alexa interactive notifications where the system can prompt and ask questions and the answers can kick off branches of an automation.
Is that something we can already do in the existing architecture or is that something that will have to be developed in the future? I’ve had some trouble following all the voice developments so if I missed it, I apologize in advance!
Thanks again for a great series of developments!
This is something I’ve wanted to do for a long time. I’d love to have my voice assistant ask a question when my kitchen motion sensor detects me in the morning and then have it wait for my yes/no response so it can complete an automation.
I got openwakeword running in a docker container, and used the example configuration for my ESP32-S3-Box to test this out, and it’s working great! That was really easy overall.
Not perfect of course, but it’s so much better than it used to be already. Can’t wait for more! Great work!
I guess I must be missing something here. I already had the Whisper and Piper add-ons installed, so I just installed the openWakeWord one, but under the Voice Assistant settings for HA Assist it doesn't show up:
Hi, I watched the video overview of this feature and have a question. Is it technically possible to add the wake word functionality to the Assist app so that, given the appropriate permission to "listen" via the phone's microphone, your smartphone can listen for the wake word?
Also curious whether anyone has tried to "flash" an existing smart speaker to run this feature?
Thank you!
That was the issue, thanks.
Next thing: the list of wake words available from openWakeWord is small (only 5 options), not the full batch shown in the announcement video. I thought it was supposed to come with the full list?