Help creating an automation from Assist's 'error' response_type

I’m trying to find a way to send a TTS message to a media player when Assist gives an error response type (mostly for the response ‘Sorry, I couldn’t understand that’). That way I can get feedback when talking to my satellites (they send their responses to a chosen Google Nest, as I’m not using speakers on the satellites themselves).

Here are some examples I’m seeing in the pipeline debug. How can I use the error code ‘no_intent_match’ to trigger a custom response and TTS it to a Google Nest?

conversation_id: null
device_id: null
intent_output:
  response:
    speech:
      plain:
        speech: Sorry, I couldn't understand that
        extra_data: null
    card: {}
    language: en
    response_type: error
    data:
      code: no_intent_match
  conversation_id: null

Here’s an example where I get a correct response, just to show the difference in the debug output.

conversation_id: null
device_id: null
intent_output:
  response:
    speech:
      plain:
        speech: The time is 21:00
        extra_data: null
    card: {}
    language: en
    response_type: action_done
    data:
      targets: []
      success: []
      failed: []
  conversation_id: null
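
To make the goal concrete, here’s a rough sketch of the kind of script I’m picturing: it wraps conversation.process, checks the returned response_type / error code, and speaks a custom message on the Nest when the intent wasn’t matched. The entity ID and the tts.google_translate_say service are just placeholders for whatever TTS setup you have.

script:
  ask_assist_with_feedback:
    fields:
      query:
        description: Sentence to send to Assist
        example: What time is it?
    sequence:
      # Run the query through the conversation agent and capture the result
      - service: conversation.process
        data:
          text: "{{ query }}"
        response_variable: result
      - if:
          # Same structure as the pipeline debug output above
          - condition: template
            value_template: >
              {{ result.response.response_type == 'error'
                 and result.response.data.code == 'no_intent_match' }}
        then:
          # Intent wasn't understood, so give audible feedback on the Nest
          - service: tts.google_translate_say
            data:
              entity_id: media_player.kitchen_nest
              message: Sorry, I didn't catch that, please try again.
        else:
          # Otherwise just speak the normal response
          - service: tts.google_translate_say
            data:
              entity_id: media_player.kitchen_nest
              message: "{{ result.response.speech.plain.speech }}"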

Would really appreciate some help on this, because if I can crack it I can rely on my Assist satellites even more, by ensuring I know when a request/intent wasn’t understood :slight_smile:

Did you ever find a solution for this?

Hi Christian, yeah I did. Maybe you haven’t seen it, but it’s possible to use another media player (a Google Nest, an ESP device, etc.) to give the response, which is what I mostly do now. Follow this video and it will help you set it up.
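
For anyone who finds this later and can’t watch the video, on an ESPHome satellite the idea is roughly the snippet below: the voice_assistant on_tts_end trigger hands the generated TTS audio URL to media_player.play_media on the Nest. The entity ID and microphone ID are just placeholders, and the exact YAML may differ from what the video shows.

voice_assistant:
  microphone: mic_id          # id of your microphone component
  on_tts_end:
    # x holds the URL of the TTS audio generated for the response
    - homeassistant.service:
        service: media_player.play_media
        data:
          entity_id: media_player.kitchen_nest   # example Nest speaker
          media_content_type: music
        data_template:
          media_content_id: "{{ tts_url }}"
        variables:
          tts_url: return x;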
