Configuration problem with M5 Atom Echo

I have a newly installed Atom Echo device whose entity does detect my button presses. However, when I press the button, the specified light doesn’t illuminate, although I can switch it on and off using the switch for its entity in HA.

Any idea why the voice assistant is not triggering here? I’ve looked at all the logs I know about for each time I pressed the Atom’s button and can see that button on and off events were detected, but there’s no indication that the system then tried to do anything. I know I can’t test voice input because I haven’t yet managed to set up HTTPS, but I’m not getting any errors about that when I press the button.

Does the error on !lambda return x; mean something is missing? I had to comment that line out to get the code to compile, but I had seen no errors in the Atom install logs.

(Latest version of HAOS with all available updates, on an x86 platform)

Thanks.

voice_assistant:
  microphone: atom_echo_microphone
  on_start:
    - light.turn_on:
        id: led
        blue: 100%
        red: 0%
        green: 0%
        effect: none
  on_tts_start:
    - light.turn_on:
        id: led
        blue: 0%
        red: 0%
        green: 100%
        effect: none
  on_tts_end:
  # - media_player.play_media: !lambda 'return x;'
  # Commented out: the !lambda symbol gives "not recognised" at compile time
    - light.turn_on:
        id: led
        blue: 0%
        red: 0%
        green: 100%
        effect: pulse
  on_end:
    - delay: 1s
    - wait_until:
        not:
          media_player.is_playing: media_out
    - light.turn_off: led
  on_error:
    - light.turn_on:
        id: led
        blue: 0%
        red: 100%
        green: 0%
        effect: none
    - delay: 1s
    - light.turn_off: led

binary_sensor:
  - platform: gpio
    pin:
      number: GPIO39
      inverted: true
    name: Button
    id: echo_button
    # The next two triggers were missing from the initial config. Even after adding them,
    # no light illuminates on pressing the button although this is detected by the entity
    on_press:
      - voice_assistant.start:
    on_release:
      - voice_assistant.stop:
    on_multi_click:
      - timing:
          - ON for at most 350ms
          - OFF for at least 10ms
        then:
          - media_player.toggle: media_out
      - timing:
          - ON for at least 350ms
        then:
          - voice_assistant.start:
      - timing:
          - ON for at least 350ms
          - OFF for at least 10ms
        then:
          - voice_assistant.stop:
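For reference, the commented-out `media_player.play_media: !lambda return x;` line can only compile if a media_player component exists for it to target, and the lambda is best quoted so YAML parses it cleanly. A minimal sketch, assuming an i2s_audio media player with id `media_out` on the Echo's usual DAC pin (both the id and the pin are assumptions, not taken from this post):

```yaml
# Hedged sketch: a media_player component that media_player.play_media
# can target. The id `media_out` and GPIO22 DAC pin are assumptions
# based on the standard M5 Atom Echo pinout.
media_player:
  - platform: i2s_audio
    id: media_out
    dac_type: external
    i2s_dout_pin: GPIO22
    mode: mono

voice_assistant:
  microphone: atom_echo_microphone
  on_tts_end:
    # x holds the TTS result URL; quoting the lambda avoids YAML parse errors
    - media_player.play_media: !lambda 'return x;'
```

Note that `media_player` requires the Arduino framework, whereas the stock voice-assistant config uses esp-idf, as discussed further down the thread.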

Here’s a snippet from my log viewer from when I pressed the Atom’s button. I see now that there’s one error (shown in red) about an exception, and another about the pipeline that wasn’t highlighted in HA.

2023-07-29 14:33:39.570 ERROR (MainThread) [homeassistant] Error doing job: Task exception was never retrieved
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/esphome/voice_assistant.py", line 330, in run_pipeline
    await async_pipeline_from_audio_stream(
  File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/__init__.py", line 81, in async_pipeline_from_audio_stream
    await pipeline_input.validate()
  File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/pipeline.py", line 767, in validate
    await asyncio.gather(*prepare_tasks)
  File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/pipeline.py", line 578, in prepare_text_to_speech
    raise TextToSpeechError(
homeassistant.components.assist_pipeline.error.TextToSpeechError: Pipeline error code=tts-not-supported, message=Text-to-speech engine tts.google_en_com does not support language en-gb or options {'audio_output': 'mp3'}
2023-07-29 14:33:41.410 DEBUG (MainThread) [aioesphomeapi.connection] atom-echo-277 @ 192.168.1.38: Got message of type <class 'api_pb2.BinarySensorStateResponse'>: key: 977454165
2023-07-29 14:33:41.411 DEBUG (MainThread) [homeassistant.components.esphome.entry_data] atom-echo-277: dispatching update with key 977454165: BinarySensorState(key=977454165, state=False, missing_state=False)
2023-07-29 14:33:41.426 DEBUG (MainThread) [aioesphomeapi.connection] atom-echo-277 @ 192.168.1.38: Got message of type <class 'api_pb2.VoiceAssistantRequest'>: 
2023-07-29 14:33:41.550 DEBUG (MainThread) [aioesphomeapi.connection] 192.168.1.38: Sending <class 'api_pb2.PingRequest'>: 
2023-07-29 14:33:41.551 DEBUG (MainThread) [aioesphomeapi._frame_helper] Sending frame: [fb887ecc645faaef6522d0d6b607202a52109059]
2023-07-29 14:33:41.800 DEBUG (MainThread) [aioesphomeapi.connection] atom-echo-277 @ 192.168.1.38: Got message of type <class 'api_pb2.PingResponse'>: 
2023-07-29 14:34:01.552 DEBUG (MainThread) [aioesphomeapi.connection] 192.168.1.38: Sending <class 'api_pb2.PingRequest'>: 
2023-07-29 14:34:01.554 DEBUG (MainThread) [aioesphomeapi._frame_helper] Sending frame: [1f7566a2b63a395fc3479203d48a52212dc8c1b7]
2023-07-29 14:34:01.680 DEBUG (MainThread) [aioesphomeapi.connection] atom-echo-277 @ 192.168.1.38: Got message of type <class 'api_pb2.PingResponse'>: 
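The `tts-not-supported` error above means the Assist pipeline's language (en-gb) doesn't match what the configured TTS engine (tts.google_en_com, i.e. Google Translate with language "en" and tld "com") supports. One possible fix, sketched here on the assumption that the Google Translate TTS integration is in use, is to configure an engine for British English (alternatively, change the pipeline language to plain "en" in the Assist settings):

```yaml
# configuration.yaml (Home Assistant) — hedged sketch, not the poster's config.
# The google_translate integration takes a language plus an optional tld
# that selects the accent; en + co.uk gives a British-English voice.
tts:
  - platform: google_translate
    language: en
    tld: co.uk
```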

I spent an afternoon struggling to get an Atom Echo working as well. I finally got it working, and here is my configuration (minus the sensitive bits):

substitutions:
  name: "atom-office"
  friendly_name: "Atom Echo"
esphome:
  name: "${name}"
  friendly_name: "${friendly_name}"
  name_add_mac_suffix: true
  project:
    name: m5stack.atom-echo-voice-assistant
    version: "1.0"
  min_version: 2023.7.0

esp32:
  board: m5stack-atom
  framework:
    type: esp-idf

logger:

dashboard_import:
  package_import_url: github://esphome/firmware/voice-assistant/m5stack-atom-echo.yaml@main

# Enable Home Assistant API
api:
  encryption:
    key: "[blank]"
ota:
  password: "[blank]"

wifi:
  ssid: "[blank]"
  password: "[blank]"

  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "[blank]"
    password: "[blank]"

  # manual IP
  manual_ip:
    static_ip: [blank]
    gateway: [blank]
    subnet: [blank]

improv_serial:

i2s_audio:
  i2s_lrclk_pin: GPIO33
  i2s_bclk_pin: GPIO19

microphone:
  - platform: i2s_audio
    id: echo_microphone
    i2s_din_pin: GPIO23
    adc_type: external
    pdm: true

speaker:
  - platform: i2s_audio
    id: echo_speaker
    i2s_dout_pin: GPIO22
    dac_type: external
    mode: mono

voice_assistant:
  microphone: echo_microphone
  speaker: echo_speaker
  silence_detection: true
  on_listening:
    - light.turn_on:
        id: led
        blue: 100%
        red: 0%
        green: 0%
        brightness: 50%

  on_start:
    - light.turn_on:
        id: led
        blue: 100%
        red: 0%
        green: 0%
        brightness: 100%
        effect: pulse

  on_tts_start:
    - light.turn_on:
        id: led
        blue: 0%
        red: 0%
        green: 100%
        brightness: 100%
        effect: none

  on_tts_end:
    - light.turn_on:
        id: led
        blue: 0%
        red: 0%
        green: 100%
        brightness: 100%
        effect: pulse

  on_end:
    - delay: 100ms
    - wait_until:
        not:
          speaker.is_playing:
    - light.turn_off: led

  on_error:
    - light.turn_on:
        id: led
        blue: 0%
        red: 100%
        green: 0%
        brightness: 100%
        effect: none
    - delay: 1s
    - light.turn_off: led

binary_sensor:
  - platform: gpio
    pin:
      number: GPIO39
      inverted: true
    name: Button
    disabled_by_default: true
    entity_category: diagnostic
    id: echo_button
    on_click:
      - if:
          condition: voice_assistant.is_running
          then:
            - light.turn_off: led
            - voice_assistant.stop:
          else:
            - light.turn_on:
                id: led
                blue: 100%
                red: 0%
                green: 0%
                brightness: 50%
                effect: none
            - voice_assistant.start:

light:
  - platform: esp32_rmt_led_strip
    id: led
    name: None
    disabled_by_default: true
    entity_category: config
    pin: GPIO27
    default_transition_length: 0s
    chipset: SK6812
    num_leds: 1
    rgb_order: grb
    rmt_channel: 0
    effects:
      - pulse:
          transition_length: 250ms
          update_interval: 250ms

The key step for me was putting it on the same network as my HA server (most of my IoT items go on a VLAN) and using a manual IP address. Even after this, I have found that Voice Assistant does not always pick up button presses, although I can see them in the ESP device log.

The other thing I would have liked to do is use the Atom Echo as an output device so that I can send notifications to it, but I have not found a way to do that yet.


Thanks for trying to help. After I’d spent a vast amount of time trying to get the device to work, it stopped connecting altogether, giving persistent authentication errors even after reinstalling from scratch. Mine was on the same network as the server and had a manual IP address defined in the file, although I was never sure from reading the instructions whether the address should be the Echo’s, the HA server’s, or the one for the mysterious extra ESP device with a different MAC address that appeared on my router.

I then decided life is too short, gave up, and put it in the bin. Thankfully it was a low-cost device: I resent the time I wasted on it (down to the software, I’m sure) rather than the money.

Nice solution, considering how many people want and can’t get these devices. Oh, and e-waste. Good job.

Also, git gud.

It was either that or go mad. If I knew anyone “hardcore” enough to be able to use the thing, I wouldn’t have wasted five days of my life on it getting nowhere.

There’s a pretty detailed official tutorial on getting these to work, and following it took me all of 5 minutes to get mine up and running without any drama:

Also, Lewis did a video on setting this up. The M5 Atom Echo starts about halfway through the video, but it’s probably worth watching the whole thing, as he first goes over the pipelines in HA that are a prerequisite to getting the Atom working:

Start with those before complicating matters with a custom config.


I was looking to do the same. The problem is that the tts.speak service requires a media_player, but the voice assist config at firmware/voice-assistant/m5stack-atom-echo.yaml at 3462e80829099395613b550dbfa28518292ae01e · esphome/firmware · GitHub uses the speaker component rather than the media_player component. There is an alternate config at media-players/m5stack-atom-echo.yaml at 836631e8ca446c84485bba2199f21fdfb398ba89 · esphome/media-players · GitHub that does use the media_player component, but I found that its assist voice response wasn’t working, and I don’t like the hold-to-talk method it uses to trigger the voice assistant (it uses click for play/pause).

So, here’s my hybrid that uses the former config as the base, adds a media_player component from the latter that can be used with tts.speak, and switches to the Arduino framework (required for media_player, unfortunately):

substitutions:
  name: m5stack-atom-echo
  friendly_name: M5Stack Atom Echo
packages:
  # Does assist, but lacks media player component (uses speaker component instead), so cannot be tts.speak target. Uses esp-idf framework:
  m5stack.atom-echo-voice-assistant: github://esphome/firmware/voice-assistant/m5stack-atom-echo.yaml@main
  # Has media player and assist, but assist voice response doesn't work. Uses arduino framework:
  #m5stack.atom-echo-media-player: github://esphome/media-players/m5stack-atom-echo.yaml@main
esphome:
  name: ${name}
  name_add_mac_suffix: false
  friendly_name: ${friendly_name}
api:
  # REDACTED

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password


# media_player component requires arduino framework:
esp32:
  framework:
    type: arduino

# Added media_player component excerpt from https://github.com/esphome/media-players/blob/836631e8ca446c84485bba2199f21fdfb398ba89/m5stack-atom-echo.yaml
media_player:
  - platform: i2s_audio
    id: media_out
    name: None
    dac_type: external
    i2s_dout_pin: GPIO22
    mode: mono
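With that media_player in place, the Echo can be the target of tts.speak. A hedged usage sketch of the automation action, where the entity ids are hypothetical placeholders (check the actual names in Developer Tools → States on your install):

```yaml
# Automation action — entity ids below are hypothetical examples.
service: tts.speak
target:
  entity_id: tts.google_en_com           # your TTS entity
data:
  media_player_entity_id: media_player.m5stack_atom_echo  # the Echo's media player
  message: "The washing machine has finished."
```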

Hello, I do understand the frustration of @TyneBridges !

I have been struggling for days and haven’t thrown it out yet :stuck_out_tongue:

I somehow cannot get it going. When I press the button it turns blue but won’t pulse blue, and when it finishes the logs state

I2S: DMA queue destroyed

Can someone please tell me what this means and why it won’t interact properly? I don’t understand what to do next to get it resolved…


As of the latest version, 2023.10.0b2, your config is not working: the media_player i2s_audio needs the Arduino framework, while github://esphome/firmware/voice-assistant/m5stack-atom-echo.yaml@main needs esp-idf.


You can only get the media player to work if you use push-to-talk with the Arduino framework. If you want a wake word, you can’t use the Arduino framework, and therefore you get no media player.
