Google Home voice interactions with multiple speakers

Updated to include area mapping. Thanks to @TheFes for the code.
Updated to add null characters to the virtual bulb config.
Updated to generate a spoken error message. Thanks to @HairyPorter23 for the code.

The integration of Google Home (GH) and Home Assistant (HA) has a key limitation: it lacks “source speaker” metadata. When a GH voice routine requests sensor information from HA, it cannot identify which device made the request, preventing a response to the correct speaker.

This issue can be resolved by using Google’s Room-Awareness feature to activate location-specific input buttons. This enables a straightforward HA automation to determine the source room and respond to the appropriate speaker. A Virtual Template Light is utilised to manage multiple template text sensors, which contain the replies. These are indexed to the light’s brightness level.

A simple GH routine can be set up to trigger a “generic” input button and adjust the brightness of the virtual bulb.

Because Google Home is “room aware”, the correct input button is activated. This button initiates a single HA automation that identifies the room and selects the right speaker. The brightness level determines the correct response through the text sensors.

Create the Entities in Home Assistant

You can use the Helper UI (Settings > Devices & Services > Helpers) to create most of these components, but the Virtual Light is the exception.

  • Input Buttons: Create buttons (e.g., kitchen_answer, livingroom_answer) and assign them to their respective Areas in HA.
  • Google Cast Speakers: Ensure your Google speakers are also assigned to the same HA Areas as their corresponding buttons.
  • Template Sensors: Create sensors explicitly named 1_voice, 2_voice, 3_voice, etc., via the Template Sensor Helper. The number in the sensor name corresponds to the brightness percentage of the virtual bulb (e.g., 1% triggers sensor.1_voice). In the “State Template” box, enter your message logic (e.g., The car is at {{ states('sensor.car_battery') }} percent).
  • Virtual Light: Define this in configuration.yaml. Note: You cannot use the Helper UI to create this light because the UI does not support creating a template light with the custom brightness settings and empty service calls required for this routing logic.
template:
  - light:
      - name: "virtual"
        unique_id: "virtual"
        turn_on:  []
        turn_off: []
        set_level: []
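
If you prefer YAML over the Helper UI, the matching template sensors can live in configuration.yaml too. A minimal sketch for the 1% slot, assuming a hypothetical source entity sensor.car_battery (substitute your own):

```yaml
template:
  - sensor:
      - name: "1_voice"        # picked when the virtual bulb is set to 1%
        unique_id: "1_voice"
        state: >
          The car is at {{ states('sensor.car_battery') }} percent.
```

If you already have a template: block (such as the virtual light above), add the sensor list under that same block rather than repeating the template: key.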

Connect and Sync to Google Home

To make these entities visible in the Google Home app, you need to bridge them using one of the following:

  • Nabu Casa (Home Assistant Cloud): The easiest “one-click” method.
  • Matter Bridge: Uses the Matter router in Google devices to expose entities locally.
  • Google Assistant integration: A free method using a Google Cloud project.

The Crucial Step: Once synced, go into the Google Home App and move each input_button into its corresponding room (e.g., input_button.kitchen_answer goes into the “Kitchen” room).

Set up the Google Home Routine

For every question you want to ask, create a routine in the Google Home app:

  1. Voice Starter: “What is the car battery level?”
  2. Action 1: Create a custom command, “Turn on answer” (Google’s room-awareness triggers only the button in your current room).
  3. Action 2: Turn on the virtual bulb and set it to 1% brightness. (Change the percentage to match the voice sensor.)

Automation (with Room Mapping)

This automation routes the audio to the specific media player associated with the triggered button. Change input buttons as required.

I’m using tts.speak with tts.google_translate_en_com for the text-to-speech. Change or install as required.

alias: Google Voice Response Handler
triggers:
  - trigger: state
    entity_id:
      - input_button.kitchen_answer
      - input_button.livingroom_answer
      - input_button.bedroom_answer
      - input_button.frontroom_answer
actions:
  - action: tts.speak
    target:
      entity_id: tts.google_translate_en_com
    data:
      media_player_entity_id: >
        {% set area = area_id(trigger.entity_id) %}
        {{ integration_entities('cast') | select('in', area_entities(area)) | first }}
      cache: true
      message: >
        {% set index = (state_attr('light.virtual', 'brightness') | int(0) / 2.55) | round(0) %}
        {% set entity = "sensor." ~ index ~ "_voice" %}
        {% if has_value(entity) %}
          {{ states(entity) }}
        {% else %}
          Error detected. No voice sensor found with index value {{ index }}
        {% endif %}

  - action: light.turn_off
    target:
      entity_id: light.virtual

Understanding the Mapping:

The automation identifies the area_id of the triggered button and filters all cast integration entities to find one residing in that same area.

Note a key limitation: this logic is designed for areas with exactly one Cast speaker. Because the template uses the | first filter, if multiple speakers exist in one HA Area, the automation will always default to the first one found in the list rather than distinguishing between them.
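If you do have more than one Cast speaker in an area, one workaround is an explicit button-to-speaker map instead of the area lookup. A sketch that could replace the media_player_entity_id template in the automation; the entity names here are hypothetical, so substitute your own:

```jinja
{% set speakers = {
  'input_button.kitchen_answer': 'media_player.kitchen_display',
  'input_button.livingroom_answer': 'media_player.livingroom_speaker'
} %}
{{ speakers.get(trigger.entity_id,
     integration_entities('cast')
       | select('in', area_entities(area_id(trigger.entity_id)))
       | first) }}
```

Buttons listed in the map get their nominated speaker; anything else falls back to the original area lookup.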

I hope the above is of some use. I have tried using volume hacks or sending silent tones to try and identify the speakers, but I could never make these work reliably.

You may want to add an announcement in the GH routine saying something like ‘Please wait’ at the beginning to slow down the interaction, as sometimes Google speakers miss the first couple of words.

I don’t use the automation editor; I use Node-RED, so please let me know if the automation can be improved/simplified, as I used Google Gemini to help convert my Node-RED flows.


This is actually quite clever!
I’m not using GH anymore. I had my own solution in place where I used ambient sounds, using the player that was playing the sound to determine which one the response should be sent to, but this is more effective.

I do have GH installed, so I wanted to have a look, but it seems like the modern automations don’t have the option to add custom commands. If I edit an old routine I created in the past, I do see the option. Is it still possible to create new routines?

Regarding the setup itself: you could create the template light in the GUI, and for the actions just set a delay of 0 seconds so there is an action that doesn’t actually do anything. That way you can create it using the GUI.

And instead of creating a sh*tload of input texts, you could also add a mapping for that. Of course that does mean maintaining them will be more difficult, as you need to edit the automation for that.

For the speaker map you could use the area instead of creating a mapping. Put the input button in the same area as the GH device, and target the TTS to the GH devices in that area.

Thank you for your kind words. Routines are buried in the Google Home app.

Home settings
Google Assistant

Then scroll down a long list till you find routines. You’re quite right; automations don’t work.

Clever, using the delay to satisfy the helper UI for the virtual light; I like simple :slight_smile:

I like using text sensors; they are simple to maintain, and the GUI in the helper UI proves they are correct. Editing automations is a pain; that’s why I use Node-RED. One shifted character and YAML goes into meltdown.

Does tts.speak support areas?

Quick note: creating a virtual bulb in the UI helper does not add any brightness controls :frowning:

I’m sure there must be a way, but for once YAML is simpler.

You can use templates in case it doesn’t.

Something like

{% set area = area_id(trigger.entity_id) %}
{{
  integration_entities('cast')
    | select('in', area_entities(area))
    | first
}}

Hello and thanks to both of you for your solutions.
I wonder what the advantage of the lightbulb is?

  • I set up the 3 buttons as described
  • I assigned them to the rooms of the Google speakers in HA
  • I exposed the buttons to Google Home, so there they are in the same room as the speakers
  • I built a test routine in GH as described and thanks to the room recognition the appropriate button for each room has been activated
  • With that and the trigger IDs I built an automation in HA to give voice feedback to the speaker (as I know the button, I know the room, so I know the speaker)

What is the use case of the lightbulb and its percentage setting as an intermediate step?

Cheers

The idea with the light bulb is to create just one “universal automation”. The light bulb can be set to up to 99 different brightness levels. These can be matched with a template sensor which contains the reply, so 1% = sensor.1_voice, 2% = sensor.2_voice, etc. This means I do not have to write separate automations for every interaction with Google Home; just set up the reply to the question. Here is an example.

When I say to my assistant, ‘Car status’

Actions

Turn on Answer
Virtual bulb turn-on set to 1%

The automation automatically finds the correct speaker because the answer button is in the same Home Assistant area as the speaker. I construct the template sensor in the helper UI as follows.

The car has travelled {{ states('sensor.cupra_born_odometer') | round }} miles. It is currently {{ states('lock.cupra_born_door_locked') }}. The current range is {{ states('sensor.cupra_born_electric_range') | round }} miles with a battery level of {{ states('sensor.cupra_born_battery_level') }}%

So for every question I need to ask Google, I just change the brightness level and create another template sensor.

If you only have 1 question, it’s complete overkill :slight_smile:

Aaaah! I thought 1_voice, 2_voice,… was somehow the same answer but for the corresponding speaker, so speaker 1 (livingroom) receives 1_voice etc.
Now I’ve read your original post again and everything now makes sense to me. :grinning::+1:t2:
Maybe I’ll give it a try with more than one answer, but I got stuck at the bulb and didn’t pursue it further because it worked with one answer without it. :smile:
Thanks again and nice car btw. :slightly_smiling_face:


Just to clarify. The automation finds the correct speaker via Home Assistant Areas. The bulb only matches the template sensor. You don’t have to rename the speaker. Just put the speaker and the matching answer button in the correct Google Home room and in matching Home Assistant areas. This means the only unique items in the automation are the input buttons themselves.


Yes, I understood, and finding the correct speaker with the room recognition did work. My misunderstanding was that I thought I needed the different 1_voice, 2_voice, 3_voice sensors with the exact same answer (to the exact same question) to send it to the correct speaker. Now I understand that 1_voice contains a different answer (to a different question) than 2_voice and 3_voice. :+1:t2:


Sorry for having to ask again, I’m an absolute beginner. Pasting your code for the bulb into the config brings (expected) error messages. So I tried it with the GUI: Helper → Template → Light Bulb, and as actions the delay, as I don’t want to mess up my config. But the status of the light bulb (the field under the name) is a mandatory field. When I enter TRUE, it’s always on. ENABLE makes it unknown. You can’t leave it empty.
What do I write here?

Thanks and cheers

The joys of Home Assistant :slight_smile: The UI helper is very fussy and does not like creating a “blank” light. I haven’t found a way of creating one that works correctly.

You need to create the light manually in the configuration.yaml file. To do this, add the File Editor add-on from Settings > Add-ons.

Once added, open the file editor and find the configuration.yaml file in the left-hand list. Click on the three dots and download a copy of your configuration.yaml. This will be a backup, “just in case”. Then open the configuration.yaml file. YAML files are dependent on the spacing in the file; be very careful with the spacing when changing the file layout. If the spacing is wrong, the file will not work!

Look for the template: text in the first column of the configuration.yaml. Hopefully this is missing (if not, show me what settings you have there). If so, go to the bottom of the configuration.yaml and paste in the text for the virtual light (use the copy option in the top right corner of the text box). Make sure the spacing is consistent with the rest of the file; the file editor should give a warning if the spacing is illegal, and there will be a green tick in the top right corner if the file is good. If not, adjust the spacing. The template: must be in the first column. Save the file.

Go to Developer Tools, click on YAML, then click on Check Configuration. If this is OK, look down the list of YAML configuration reloading options and click on Template entities. This will load your template: and the virtual light should be created!

If there are any errors, show me your configuration.yaml (redact any passwords, etc.)

As my system is pretty new, the config is still barely filled :slight_smile:

# Loads default set of integrations. Do not remove.
default_config:

# Load frontend themes from the themes folder
frontend:
  themes: !include_dir_merge_named themes
  extra_module_url:
    - /hacsfiles/material-you-utilities/material-you-utilities.min.js
panel_custom:
  - name: material-you-panel
    url_path: material-you-configuration
    sidebar_title: Material You Utilities
    sidebar_icon: mdi:material-design
    module_url: /hacsfiles/material-you-utilities/material-you-utilities.min.js

automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml

When I paste your bulb code starting at line 19, I (still) have the green checkmark top right. Does this mean that the config will still work?

EDIT: New users can only put one embedded media item in a post, I have to split this answer.

Because I used the App “Studio Code Server” before and pasting it there, it gives me the following errors:

And I’m not even sure if I can save the config at all (in SCS).

Config is fine. Save the file with the file editor and test with the developers’ tools. If the developer tools say, “Configuration will not prevent Home Assistant from starting!”, then you’re good; you won’t break a restart. Don’t restart until you have checked!!

I think Studio Code Server is using the same syntax check as the UI helper; it doesn’t understand why you want a light that does nothing!

I will load Studio Code Server and test on my setup. :slight_smile:


OK tested with SCS. Change the lines in error to the following:

        turn_on: []
        turn_off: []
        set_level: []

omg, it’s working! Thank you so much! Not all heroes wear capes :slight_smile:

Maybe you could add a third step to your description of the Google Home routine: “Turn off answer” to make it idiot-proof. :smile:

Also, I asked the AI to give me a fallback for when the bulb has no brightness, so no answer is found:

{% set bri = state_attr('light.virtual', 'brightness') | default(0, true) | int(0) %}
{% set pct = (bri / 2.55) | round(0) | int %}
{% set pct = [0, [pct, 100] | min] | max %}
{% set entity = 'sensor.' ~ pct ~ '_voice' %}
{{ iif(has_value(entity), states(entity), 'No matching answer.') }}


Thank you for your kind words. I have made various attempts over the years trying to get a workable speech parser; look up my posts. The virtual bulb was one of my better ideas. Pity Home Assistant makes it so difficult to create.

Good idea on the no answer; maybe better to say “sensor.xx_voice does not exist”.

When testing, I add an announce action first, saying “Please wait,” into the Google routine; that way I know the routine has been triggered. This can also be used to wake up the speaker, as sometimes my speakers miss the first couple of words.

The other thing I didn’t point out is how powerful the template options in the sensor are. Basically, you can make the answer into a mini program. Here is an example: I have a dishwasher with an API, so when I close the door, it automatically sets a timer that starts at midnight to take advantage of cheap-rate electricity. However, if I forget to shut the door, or somebody else opens the door, the next morning there are dirty dishes. So I set up the answer for “is the door open?”:

{% set op_state = states('sensor.dishwasher_operation_state') | lower %}
{% set door = states('sensor.dishwasher_door') | lower %}
{% set seconds = states('number.dishwasher_start_in_relative') | int(0) %}

{% set hours = (seconds // 3600) %}
{% set minutes = ((seconds % 3600) // 60) %}

{% set h_text = "one hour" if hours == 1 else hours | string + " hours" %}
{% set m_text = "one minute" if minutes == 1 else minutes | string + " minutes" %}

{% if door == 'open' and seconds > 0 %}
  The dishwasher is ready to start a delay, but the door must be closed to begin the {{ h_text }} and {{ m_text }} countdown.
{% elif op_state == 'delayedstart' or (door == 'closed' and seconds > 0) %}
  The dishwasher is in delayed start mode and will begin in 
  {%- if hours > 0 and minutes > 0 %} {{ h_text }} and {{ m_text }}.
  {%- elif hours > 0 %} {{ h_text }}.
  {%- else %} {{ m_text }}.
  {%- endif %}
{% elif op_state == 'running' %}
  The dishwasher is currently running.
{% elif door == 'open' %}
  The dishwasher door is open.
{% elif op_state == 'ready' %}
  The dishwasher is ready and waiting for a cycle to be started.
{% else %}
  The dishwasher is currently {{ op_state }}.
{% endif %}

All set up in the sensor. So I can use it in alerts as well.

Have fun :slight_smile: