How to output the TTS response to the device that received the trigger-sentence?

I have this automation to read the morning briefing stored in sensor.llm_cache using TTS:

alias: automation__morgentliches_update_vorlesen
description: ""
triggers:
  - trigger: conversation
    command:
      - guten morgen
conditions: []
actions:
  - set_conversation_response: ""
  - action: tts.speak
    metadata: {}
    data:
      cache: true
      media_player_entity_id: media_player.home_assistant_voice_095917_media_player
      message: >
        "{{ state_attr('sensor.llm_cache',
        'morning_greeting').get(now().strftime('%Y-%m-%d'), {}).get('me', 'THIS
        ALTERNATIVE TEXT DOES NOT WORK YET.') }}"
    target:
      entity_id: tts.piper_docker
mode: single

This works as expected on my Voice PE device media_player.home_assistant_voice_095917_media_player.

But I have two Voice PEs and the HA app on my mobile phone.

I would now like to modify this automation so that it automatically outputs the response to the Voice PE or mobile phone that received the trigger-sentence. How do I do this?

Have you tried setting this up as a custom sentence/intent script? It’s my understanding that the response should automatically go to the device that received the command.

If you want to specify a media player, I think you’ll have to find some other means of identifying it - last detected movement before the command was given, for example, or room-level presence detection.

Normally I think you would only do this if you wanted to direct the response to a different device from the one that received the command - a better quality speaker, for example.
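For reference, here is a minimal sketch of the custom sentence/intent route in configuration.yaml (the intent name MorningBriefing is illustrative); the speech text should be spoken back by whichever satellite heard the sentence:

```yaml
# configuration.yaml (sketch; intent name is illustrative)
conversation:
  intents:
    MorningBriefing:
      - "guten morgen"

intent_script:
  MorningBriefing:
    speech:
      text: >
        {{ state_attr('sensor.llm_cache', 'morning_greeting')
           .get(now().strftime('%Y-%m-%d'), {})
           .get('me', 'THIS ALTERNATIVE TEXT DOES NOT WORK YET.') }}
```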

Is piper.docker not the normal TTS agent for the device? If it is, there's no need to squelch the response; just use your template there:

alias: automation__morgentliches_update_vorlesen
description: ""
triggers:
  - trigger: conversation
    command:
      - guten morgen
conditions: []
actions:
  - set_conversation_response: |
      {% set greetings = state_attr('sensor.llm_cache','morning_greeting') %}
      {% set date = now().strftime('%Y-%m-%d') %}
      {{ greetings.get(date, {}).get('me', 'THIS ALTERNATIVE TEXT DOES NOT WORK YET.') }}
mode: single

Otherwise you could use the device_id, but you would need to branch the logic if you want to make sure the Conversation text input works…

alias: automation__morgentliches_update_vorlesen
description: ""
triggers:
  - trigger: conversation
    command:
      - guten morgen
conditions: []
actions:
  - variables:
      message: |
        {% set greetings = state_attr('sensor.llm_cache','morning_greeting') %}
        {% set date = now().strftime('%Y-%m-%d') %}
        {{ greetings.get(date, {}).get('me', 'THIS ALTERNATIVE TEXT DOES NOT WORK YET.') }}
  - if:
      - "{{ trigger.device_id is not none }}"
    then:
      - action: tts.speak
        metadata: {}
        data:
          media_player_entity_id: |
            {{ device_entities(trigger.device_id) | select('match', 'media_player\.') | first }}
          message: "{{ message }}"
        target:
          entity_id: tts.piper_docker
    else:
      - set_conversation_response: "{{ message }}"
mode: single

Thanks for your response, and sorry for the delay…

Is piper.docker not the normal TTS agent for the device?

This is indeed the default TTS agent. So thank you for pointing out the use of set_conversation_response:.

I have tried using it and have found that the output gets cut off after a while on one of the two Voice PEs (the output is rather long).

In my code example I am using:

    data:
      cache: true

I don’t know if and how to do this with set_conversation_response.

Could this be the issue? If so, how would I fix it?

It took me a while to figure this out too. I’m pasting the answer to the original question (you obviously don’t need the phone notification, but it will help you debug if necessary).

Good luck, hope this helps.

alias: Voice Location Test - Where Am I (Dynamic)
description: >
  Voice automation to identify which voice assistant heard the command
  and respond from that device with its location.
triggers:
  - trigger: conversation
    command:
      - where are you
      - where am I
      - what is your location
      - where is this
      - what room is this
      - which room am I in
conditions: []
actions:
  - variables:
      # Get area name from the trigger device
      area_name: "{{ area_name(trigger.device_id) }}"

      # Map device UUID to media player entity
      # Device IDs (examples - replace with your actual device UUIDs):
      # - Device 1 (1a2b3c): a1b2c3d4e5f6789012345678abcdef01
      # - Device 2 (4d5e6f): fedcba9876543210fedcba9876543210
      media_player_entity: >
        {% if trigger.device_id == 'a1b2c3d4e5f6789012345678abcdef01' %}
          media_player.home_assistant_voice_1a2b3c_media_player
        {% elif trigger.device_id == 'fedcba9876543210fedcba9876543210' %}
          media_player.home_assistant_voice_4d5e6f_media_player
        {% else %}
          media_player.living_room_speaker
        {% endif %}

  - set_conversation_response: ""
  - parallel:
      - data:
          message: >
            <speak>I heard you. I am located in the {{ area_name }}.</speak>
          cache: true
        target:
          entity_id: "{{ media_player_entity }}"
        action: tts.google_cloud_say
      - device_id: abcdef1234567890abcdef1234567890
        domain: mobile_app
        type: notify
        message: "I heard you. I am located in the {{ area_name }}. Device: {{ trigger.device_id }} | Media Player: {{ media_player_entity }}"
        title: Voice Location Test (Dynamic)

mode: single
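As an aside: if each Voice PE's media player entity is registered on the same device as the satellite, the hardcoded UUID map above could in principle be replaced with a dynamic lookup (a sketch; the fallback entity is illustrative):

```yaml
  - variables:
      # Sketch: pick the first media_player entity on the triggering device,
      # falling back to a default speaker if none is found.
      media_player_entity: >
        {{ device_entities(trigger.device_id)
           | select('match', 'media_player\.')
           | first
           | default('media_player.living_room_speaker') }}
```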

I use a template sensor that sets its state to the entity ID of the currently active Voice PE, or to “None Active” if idle:

- trigger:
    - trigger: state
      entity_id: assist_satellite.ha_voice_1_assist_satellite
    - trigger: state
      entity_id: assist_satellite.ha_voice_bedroom_assist_satellite
    - trigger: state
      entity_id: assist_satellite.ha_voice_dining_assist_satellite 
  sensor:
    - name: Assist Satellite Name
      unique_id: fb1c91d9-da14-4b76-bde3-xxxxxx
      state: |
        {% if trigger.to_state.state == 'idle' %}
          None Active
        {% else %}  
          {{ trigger.entity_id }}
        {% endif %}

Then I use that sensor state as the target when making an announcement:

action: assist_satellite.announce
metadata: {}
data:
  message: "{{ the_announcement }}"
  preannounce: true
target:
  entity_id: "{{ states('sensor.assist_satellite_name') }}"