Would also like to know how to redirect the TTS response to another media player when using the Pi as the Wyoming satellite… does anyone know if this is currently possible, or is it on the roadmap?
My main concern with using something like the raspberry pi as a satellite is the increased maintenance with updates for each unit.
One could set up automatic updates, but this raises its own issues. Having something like ESPHome's behaviour for the Pis would be amazing: "x needs updates", etc.
If being used only as a satellite, there are only 3 components: the RasPi operating system, wyoming-satellite.service, and optionally wyoming-openwakeword.service. Once set up (and working as intended), I anticipate very few updates that affect its operation.
I have been using 3 RasPi satellites with Rhasspy (predecessor to HA Voice Assist) for over 1 year with no maintenance needed.
ESPHome, on the other hand, has so much more functionality that there are several updates per month.
FutureProofHomes on YouTube just released a video on setting up audio playback on the same Raspberry Pi you installed the Wyoming Satellite stuff on, whereas I don't know of an easy way to do both on an ESPHome device.
This is exactly what I'm trying to figure out. I have built one Wyoming Satellite, tied it into a ChatGPT pipeline, and installed openWakeWord on it. It's doing everything I would like. Now I'm trying to figure out how to route the satellite's responses to the Sonos speakers in the room the satellite is in. Any recommendations are welcome. Really looking forward to removing all of my Alexa devices.
I’m looking for the same solution. I have Sonos speakers in every room of the house and I would like to put the responses of the Wyoming satellite on those speakers. Replacing Amazon Echos throughout the house is the goal.
Hi, has Voice evolved enough yet to be able to request music to be played, either from a local folder/USB stick, from a Synology NAS, or from music stored on the Pi 4 running HA?
I want to move away from Alexa and Google. We have tons of our CDs ripped to MP3, and it would be nice to be able to call that somehow within HA. I'm sick to death of being unable to easily play our own music. We have Plex and the Plex integration with Alexa, but it's flaky at best.
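For what it's worth, HA can already play local files on any `media_player` via a service call; the speaker entity and file path below are placeholders, and wiring this up to a spoken request would still need a custom sentence/intent or an automation. A minimal sketch:

```yaml
# Hypothetical service call: play an MP3 from HA's local media folder
# (/media by default) on a chosen speaker. Entity id and path are examples.
service: media_player.play_media
target:
  entity_id: media_player.kitchen_speaker
data:
  media_content_id: media-source://media_source/local/ripped_cds/track01.mp3
  media_content_type: music
```

Projects like Music Assistant aim to make the voice-request side of this (artist/album by name) easier than hand-rolling intents.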
Not sure if it will work for Sonos, but it's working well for a Google Hub. Try it:
Enhancing Voice Assistant: Integrate an External Speaker using ESPHome (youtube.com)
I have a problem with my M5 Atom Echo and a custom wake word. It says "I don't understand what to do" when I am not asking for anything. It looks like it is picking up words from the TV or our conversations with other people. Any idea what to do about this? I am using the "Hey, Morgan" wake word.
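One lever that may help (a sketch, not a tested fix): ESPHome's `voice_assistant` component has microphone-processing options that can reduce how much quiet background speech reaches the wake word engine. The `!extend va` id is an assumption about the upstream Atom Echo package, so verify it against the YAML you're importing:

```yaml
# Sketch only: tune mic pre-processing on the Atom Echo so background
# speech (TV, conversation) is less likely to trigger the pipeline.
# The id 'va' is assumed from the upstream package; check before using.
voice_assistant:
  id: !extend va
  noise_suppression_level: 2   # 0-4; higher suppresses more background noise
  auto_gain: 31dBFS            # normalise microphone level
  volume_multiplier: 2.0       # software gain applied after the above
```

If you run wyoming-openwakeword standalone rather than as the add-on, its `--threshold` and `--trigger-level` flags are the other knobs for cutting false activations.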
Thank you
Did you get this working? I have 2 Wyoming satellites working as proofs of concept (one Pi Zero and one M5 Atom)… but, like you, I need to redirect the response audio to the Sonos speakers in the room. I plan on building more once one of the concepts works. It seems the M5 Atom may be the easier route; I just haven't been able to get it to redirect the response audio yet.
Not yet - I have seen several folks request this functionality, so I have been waiting to see if it lands in a future release. Also, to be honest, I switched my focus to dashboards to get what I do have running operational, and will grow from there. Plus the dashboard becomes the starting point for voice commands once I can get output to the Sonos speakers.
Voice Assistant - Piper - strange output. After updating my HA Supervised to:
Core 2024.3.3 | Supervisor 2024.03.1 | Operating System 12.1 | Frontend 20240307.0 | Installed Add-ons / Studio Code Server (5.15.0), Home Assistant Google Drive Backup (0.112.1), InfluxDB (5.0.0), Grafana (9.2.1), Terminal & SSH (9.10.0), File editor (5.8.0), Samba share (12.3.1), Whisper (2.0.0), Piper (1.5.0), openWakeWord (1.10.0), ESPHome (beta) (2024.3.1), Matter Server (5.5.1).
Piper all of a sudden started producing strange speech through one of my Atom Echo satellites. I have no idea what happened.
```
DEBUG:wyoming_piper.handler:Sent info
DEBUG:wyoming_piper.handler:Synthesize(text="Sorry, I am not aware of any device called part of the class we ended up as my class was the good that the class we walked towards a whole class exam and we liked to relate to the mistake i had to put on my left knee to test her and i would be able to do it to the conclusion that the family of the state is only that they are at the right of the state. that's the most important thing to me.", voice=SynthesizeVoice(name='en_GB-alan-low', language=None, speaker=None))
DEBUG:wyoming_piper.handler:synthesize: raw_text=Sorry, I am not aware of any device called part of the class we ended up as my class was the good that the class we walked towards a whole class exam and we liked to relate to the mistake i had to put on my left knee to test her and i would be able to do it to the conclusion that the family of the state is only that they are at the right of the state. that's the most important thing to me., text='Sorry, I am not aware of any device called part of the class we ended up as my class was the good that the class we walked towards a whole class exam and we liked to relate to the mistake i had to put on my left knee to test her and i would be able to do it to the conclusion that the family of the state is only that they are at the right of the state. that's the most important thing to me.'
DEBUG:wyoming_piper.handler:input: {'text': "Sorry, I am not aware of any device called part of the class we ended up as my class was the good that the class we walked towards a whole class exam and we liked to relate to the mistake i had to put on my left knee to test her and i would be able to do it to the conclusion that the family of the state is only that they are at the right of the state. that's the most important thing to me."}
DEBUG:wyoming_piper.handler:/tmp/tmp_dylj27d/1712164416947351586.wav
DEBUG:wyoming_piper.handler:Completed request
DEBUG:wyoming_piper.handler:Sent info
DEBUG:wyoming_piper.handler:Sent info
```
EDIT: Resolved this issue by deleting and reinstalling both Atom Echos. I have no idea what caused it.
All of a sudden, my two M5Stack Atom Echos have become deaf: they don't respond to their wake words anymore. The ESP32-S3-BOX-3 wake word is still up and running and working fine.
What's the best way and order to start debugging this issue?
Home Assistant
Core 2024.5.5
Supervisor 2024.05.1
Operating System 12.3
Frontend 20240501.1
Studio Code Server (5.15.0), Home Assistant Google Drive Backup (0.112.1), InfluxDB (5.0.0), Grafana (10.0.0), Terminal & SSH (9.14.0), File editor (5.8.0), Samba share (12.3.1), Whisper (2.1.0), Piper (1.5.0), openWakeWord (1.10.0), ESPHome (beta) (2024.5.4), Matter Server (6.0.0), Syslog (0.1.0)
I tried this as a config (ESPHome 2024.6.4), but I'm getting no response on the study_speaker. Any ideas?
```yaml
substitutions:
  name: m5stack-atom-echo-b836b0
  friendly_name: M5Stack Atom

packages:
  m5stack.atom-echo-voice-assistant: github://esphome/firmware/voice-assistant/m5stack-atom-echo.yaml@main

esphome:
  name: ${name}
  name_add_mac_suffix: false
  friendly_name: ${friendly_name}

api:
  encryption:
    key: tLGFM8s7/1EmCjvnWWFQlKUiCcpwzztS/N4qL89hrBU=

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

voice_assistant:
  on_tts_end:
    - homeassistant.service:
        service: media_player.play_media
        data:
          entity_id: media_player.study_speaker # <- change this
          media_content_id: !lambda 'return x;'
          media_content_type: music
          announce: "false"
```
Edit: Solved. I had to enable this option.
It now replies on both devices. I'm still looking for a way to stop responses on the Echo whilst still allowing timers etc. to play on it.
This works and disables the speaker, although I am experiencing a local build issue:
```yaml
substitutions:
  name: m5stack-atom-echo
  friendly_name: M5Stack Atom

packages:
  m5stack.atom-echo-voice-assistant: github://esphome/firmware/voice-assistant/m5stack-atom-echo.yaml@main

esphome:
  name: ${name}
  name_add_mac_suffix: true
  friendly_name: ${friendly_name}

api:
  encryption:
    key: someAPIkey=

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

# HarvsG's customisations
speaker:
  - platform: i2s_audio
    id: !extend echo_speaker
    i2s_dout_pin: GPIO21 # <- it is actually on GPIO22, so this disables the speaker
    dac_type: external
    mode: mono

voice_assistant:
  on_tts_end:
    - homeassistant.service:
        service: media_player.play_media
        data:
          entity_id: media_player.study_speaker # <- change this
          media_content_id: !lambda 'return x;'
          media_content_type: music
          announce: "true"
```