Add an option (under the settings for Assist?) to enable text-to-speech output for Assist responses.
I presently use a custom intent as follows:
conversation:
  intents:
    MyGetTemp:
      - "What is the temperature in [the] {area}"
      - "Tell me [the] temperature in [the] {area}"
      - "How hot is it in [the] {area}"

intent_script:
  MyGetTemp:
    # Spoken response, played on the Pi-based media player via cloud TTS
    action:
      service: "tts.cloud_say"
      data:
        options:
          gender: male
        language: en-AU
        cache: false
        entity_id: "media_player.upmpd_pizerow1_upnp_av"
        message: >
          {% set mydict = { 'living_room': 'living_room_6062_temperature', 'kitchen': 'kitchen_64e0_temperature', 'bedroom': 'bedroom_temperature', 'bathroom': 'bathroom_541d_temperature' } %}
          {% set mysensor = mydict.get(area, 'unavailable') %}
          {% if mysensor == 'unavailable' %}
            The temperature in the {{ area }} is not available at present
          {% else %}
            It is {{ states('sensor.' + mysensor) | int }} °C
          {% endif %}
    # Text response shown in the Assist dialog
    speech:
      text: >
        {% set mydict = { 'living_room': 'living_room_6062_temperature', 'kitchen': 'kitchen_64e0_temperature', 'bedroom': 'bedroom_temperature', 'bathroom': 'bathroom_541d_temperature' } %}
        {% set mysensor = mydict.get(area, 'unavailable') %}
        {% if mysensor == 'unavailable' %}
          The temperature in the {{ area }} is not available at present
        {% else %}
          It is {{ states('sensor.' + mysensor) | int }} °C
        {% endif %}
This way the response to a temperature query is spoken via tts.cloud_say on a Pi-based media player, as well as being returned as text (via the somewhat inaccurately named "speech" section of the intent script).
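Incidentally, the duplicated Jinja template only has to be maintained once if it is shared between the TTS message and the text response with a YAML anchor. A minimal sketch along those lines (same entity and sensor mapping as above; anchors only resolve within a single YAML file, so both keys must live in the same file):

intent_script:
  MyGetTemp:
    action:
      service: "tts.cloud_say"
      data:
        options:
          gender: male
        language: en-AU
        cache: false
        entity_id: "media_player.upmpd_pizerow1_upnp_av"
        # Anchor the template so it can be reused for the text response
        message: &temp_response >
          {% set mydict = { 'living_room': 'living_room_6062_temperature', 'kitchen': 'kitchen_64e0_temperature', 'bedroom': 'bedroom_temperature', 'bathroom': 'bathroom_541d_temperature' } %}
          {% set mysensor = mydict.get(area, 'unavailable') %}
          {% if mysensor == 'unavailable' %}
            The temperature in the {{ area }} is not available at present
          {% else %}
            It is {{ states('sensor.' + mysensor) | int }} °C
          {% endif %}
    speech:
      # The alias reuses exactly the same template
      text: *temp_response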
Spoken output could also be provided when the Assist query is entered using a microphone rather than as text (i.e. audio in, audio out).
Regards,