Hi,
I was following the “$13 voice assistant for Home Assistant” guide, but I am struggling to use the Atom Echo as an assist device.
I completed the setup as per the instructions; the Atom Echo is running standalone, connected via Wi-Fi, and I can see it as an ESPHome device in Home Assistant with 8 entities. The ESPHome dashboard is available, I can configure the assist pipeline etc., and the button entity changes state when the button is pressed.
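For reference, my understanding is that the firmware from the guide sets up the audio path roughly as in the sketch below. I am reproducing this from memory of the official m5stack-atom-echo configuration, so the pin assignments and options may not match the current version exactly:

```yaml
# Sketch from memory of the audio sections of the official
# m5stack-atom-echo firmware; pins and options may not be exact.
i2s_audio:
  i2s_lrclk_pin: GPIO33
  i2s_bclk_pin: GPIO19

microphone:
  - platform: i2s_audio
    id: echo_microphone
    i2s_din_pin: GPIO23
    adc_type: external
    pdm: true

speaker:
  - platform: i2s_audio
    id: echo_speaker
    i2s_dout_pin: GPIO22
    dac_type: external

voice_assistant:
  microphone: echo_microphone
  speaker: echo_speaker
  use_wake_word: true
```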
However, when I try to send voice commands, nothing happens: the assist pipeline waits for the recording to finish and I get the following error from whisper:
```
ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-14' coro=<AsyncEventHandler.run() done, defined at /usr/local/lib/python3.9/dist-packages/wyoming/server.py:28> exception=ValueError("can't extend empty axis 0 using modes other than 'constant' or 'empty'")>
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/wyoming/server.py", line 35, in run
    if not (await self.handle_event(event)):
  File "/usr/local/lib/python3.9/dist-packages/wyoming_faster_whisper/handler.py", line 69, in handle_event
    segments, _info = self.model.transcribe(
  File "/usr/local/lib/python3.9/dist-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 124, in transcribe
    features = self.feature_extractor(audio)
  File "/usr/local/lib/python3.9/dist-packages/wyoming_faster_whisper/faster_whisper/feature_extractor.py", line 152, in __call__
    frames = self.fram_wave(waveform)
  File "/usr/local/lib/python3.9/dist-packages/wyoming_faster_whisper/faster_whisper/feature_extractor.py", line 98, in fram_wave
    frame = np.pad(frame, pad_width=padd_width, mode="reflect")
  File "/usr/local/lib/python3.9/dist-packages/numpy/lib/arraypad.py", line 819, in pad
    raise ValueError(
ValueError: can't extend empty axis 0 using modes other than 'constant' or 'empty'
```
I’m running Home Assistant Core 2023.11.3 as a Docker Compose image and have added piper, whisper and openwakeword as separate containers (latest tags), connected via the Wyoming protocol. The assist pipeline works fine when I trigger it from my notebook via voice commands, so presumably whisper is not getting any audio from the Atom Echo.
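For completeness, my compose entries for the Wyoming services look roughly like the sketch below. I am reproducing this from memory, so the exact tags, volume paths and command flags may differ slightly from what I actually run:

```yaml
# Sketch from memory of my compose services for the Wyoming containers;
# volume paths are placeholders, flags/ports are the defaults I used.
services:
  whisper:
    image: rhasspy/wyoming-whisper:latest
    command: --model tiny-int8 --language de
    volumes:
      - ./whisper-data:/data
    ports:
      - "10300:10300"
    restart: unless-stopped

  piper:
    image: rhasspy/wyoming-piper:latest
    command: --voice de_DE-thorsten-medium
    volumes:
      - ./piper-data:/data
    ports:
      - "10200:10200"
    restart: unless-stopped

  openwakeword:
    image: rhasspy/wyoming-openwakeword:latest
    command: --preload-model 'ok_nabu'
    ports:
      - "10400:10400"
    restart: unless-stopped
```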
The 8 entities associated with the Atom Echo are the following:
- binary_sensor.m5stack_atom_echo_0f9900_assist_in_verwendung
- binary_sensor.m5stack_atom_echo_0f9900_button
- button.m5stack_atom_echo_0f9900_factory_reset
- light.m5stack_atom_echo_0f9900_m5stack_atom_echo_0f9900
- select.m5stack_atom_echo_0f9900_assist_pipeline
- select.m5stack_atom_echo_0f9900_zu_ende_gesprochen_erkennung
- switch.m5stack_atom_echo_0f9900_use_listen_light
- switch.m5stack_atom_echo_0f9900_use_wake_word
In screenshots from other users I have seen an audio volume slider for the speaker in their default ESPHome dashboard, which I am missing; presumably the microphone and speaker entities of the Atom Echo have for some reason not been added to Home Assistant / ESPHome.
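My assumption is that the volume slider other users see comes from a media_player entity. Purely as a sketch of what I would expect such an entity to look like in the ESPHome YAML (assuming the i2s_audio media_player platform, the i2s_audio block from the sketch above, and the Atom Echo's DAC pin GPIO22 from memory):

```yaml
# Assumption: the volume slider comes from an i2s_audio media_player
# entity roughly like this; requires the i2s_audio block shown above.
media_player:
  - platform: i2s_audio
    name: Atom Echo Media Player
    id: echo_media_player
    dac_type: external
    i2s_dout_pin: GPIO22
    mode: mono
```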
Any ideas on how to add the mic and speaker here? I have gone through the process from the guide three times, each time with the same result, and am running out of ideas.
Appreciate any support in getting this fixed.
Many thanks & kind regards
Wolfgang