Google TTS automation does not work when triggered

I’m working on an automation (full code below) that should have Google TTS speak through my Sonos speakers when it is triggered. So far it works if I manually type in the text via the media player interface in Home Assistant, but the automation fails, even when I trigger it manually. And again, home-assistant.log leaves a lot of questions.

Here is my code from automation.yaml:

  alias: Welcome home automation
  trigger:
  - entity_id: device_tracker.iphone8
    from: not_home
    platform: state
    to: home
  condition: []
  action:
  - data:
      entity_id: media_player.all_speakers
    service: media_player.turn_on
  - delay: 00:00:02
  - data:
      entity_id: media_player.all_speakers
      message: 'Welcome home, Sire! It is gooood to be the KING!'
    service: tts.google_say
  - delay: 00:00:04
  - data:
      entity_id: media_player.all_speakers
    service: media_player.turn_off

Any thoughts or ideas would be helpful. As stated, the Sonos speakers work well and I can use them via Home Assistant, but I cannot seem to get the automation to work.

Have you looked at this automation example? It works perfectly for me. Just pay attention to the line you need to change to your tts.google_say.

Also, the automation above will likely trigger before (or very close to when) you are actually home, depending on which tracker you use…

Thanks for replying, Piggyback.

Yes, I have seen this example before. At the time it looked too advanced, so I went with another example. I’ll try it again this evening to see if I can get it to work.

It’s really not that complex, as you can mostly use what is there. Let me show you what I have, since I expanded the standard example a little bit:

configuration.yaml -> here I define some standard texts and a selector for which of the Sonos players I would like to play the message on:

input_select:
  sonos_outputdevice:
    name: Sonos output device voice messages
    options:
      - Mam en Pap
      - Julia
      - Dining Room
      - Living Room
    initial: Living Room
  sonos_message:
    name: message to say
    options:
      - Good night
      - Dinner time
      - Intruder
      - Custom other
    initial: Dinner time
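
I also have a small input_boolean in configuration.yaml that I flip to trigger the test automation further down. A minimal sketch in case you don't have one yet (the friendly name here is just an example):

input_boolean:
  test_sonos:
    name: Test Sonos TTS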

Automation (in my case in automation.yaml) that calls the script to play the message. It calls the script sonos_say with parameters based on the input selects above:

- alias: 'test'
  trigger:
  - platform: state
    entity_id: input_boolean.test_sonos
  action:
    - service: script.sonos_say
      data_template:
        volume: 0.3
        message: >
            {% if is_state("input_select.sonos_message", "Good night") %}
              'Good night'
            {%-elif is_state("input_select.sonos_message", "Dinner time") %}
              'Dinner time'
            {%-elif is_state("input_select.sonos_message", "Intruder") %}
              'You are an intruder! You are not welcome here! Police is alarmed!'
            {%-elif is_state("input_select.sonos_message", "Custom other") %}
              'Daddy is on his way home!'            
            {% else %}
              none
            {% endif %}
        delay: '00:00:05'
        sonos_entity: >
            {% if is_state("input_select.sonos_outputdevice", "Dining Room") %}
              media_player.dining_room
            {%-elif is_state("input_select.sonos_outputdevice", "Julia") %}
              media_player.julia
            {%-elif is_state("input_select.sonos_outputdevice", "Living Room") %}
              media_player.living_room
            {%-elif is_state("input_select.sonos_outputdevice", "Mam en Pap") %}
              media_player.mam_en_pap            
            {% else %}
              none
            {% endif %}

and the script itself in scripts.yaml (because I broke it out of configuration.yaml). Note tts.google_say as the service.

  sonos_say:
    alias: "Sonos TTS script"
    sequence:
      - service: media_player.sonos_snapshot
        data_template:
          entity_id: "{{ sonos_entity }}"
      - service: media_player.sonos_unjoin
        data_template:
          entity_id: "{{ sonos_entity }}"
      - service: media_player.volume_set
        data_template:
          entity_id: "{{ sonos_entity }}"
          volume_level: "{{ volume }}"
      - service: tts.google_say
        data_template:
          entity_id: "{{ sonos_entity }}"
          message: "{{ message }}"
      - delay: "{{ delay }}"
      - service: media_player.sonos_restore
        data_template:
          entity_id: "{{ sonos_entity }}"

Hope this helps!!!

Thanks - looked through it now. A couple of questions that you might answer:

  1. Where, in which directory (in /config?), do you store the script code that starts with

script:
  sonos_say:
    alias: "Sonos TTS script"

  2. What is the name of the file? Is it called script.yaml?

Saw this now. Thanks - will test it now!

OK, just to clarify (although I gather you have it figured out): I have broken scripts.yaml out of configuration.yaml.

In configuration.yaml I have the line script: !include scripts.yaml, so my scripts live in the file scripts.yaml in the /config directory. If you do not have that line, you can put the scripts directly in configuration.yaml under the script: heading. If you did break it out like me, then pay attention to the spaces: all lines in scripts.yaml keep the 2-space indentation, as if they were still part of configuration.yaml.
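
To make it concrete, this is roughly how the two files fit together in my setup (the sonos_say script is the one I posted above):

# configuration.yaml
script: !include scripts.yaml

# scripts.yaml - lines keep the 2-space indentation, as if still under configuration.yaml
  sonos_say:
    alias: "Sonos TTS script"
    sequence:
      - service: media_player.sonos_snapshot
        data_template:
          entity_id: "{{ sonos_entity }}"
      # ... rest of the sequence as posted above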

Tried it yesterday. I did manage to get it to work for one room (the living room). After some troubleshooting and rebooting I managed to get two rooms working (living room and bathroom, but not the kitchen). Not sure why, really. I have checked the code and indentation numerous times.

I also tried setting up a new room called “All” and then using media_player.all to play in all rooms. Is that the correct syntax?

Code used:

input_select:
  sonos_outputdevice:
    name: Sonos output device voice messages
    options:
      - Kitchen
      - Living room
      - Bathroom
      - All
    initial: Bathroom
  sonos_message:
    name: message to say
    options:
      - Good night
      - Dinner time
      - Intruder
      - Custom other
    initial: Dinner time

Good! That is at least progress.
A couple of things you should be aware of: the names of the input select options don't have to match any entity names by themselves. It is the code in the automation that “translates” the selected option. Example:

By selecting the option “Intruder” in the input select, the piece of the automation below determines which text to say. It is therefore very important that the names of the input_select options are EXACTLY the same as the strings checked in the automation below:

   {%-elif is_state("input_select.sonos_message", "Intruder") %}
      'You are an intruder! You are not welcome here! Police is alarmed!'

The same logic is used to pick which Sonos to play on. I could have named the selection completely differently from the actual media player, but I kept them similar. So below, if the selected option is Dining Room, it will play on media_player.dining_room:

    {% if is_state("input_select.sonos_outputdevice", "Dining Room") %}
      media_player.dining_room

Now to answer your last question about playing on ALL devices: the first thing you need to do is create a group:

group:
  sonos:
    name: All Sonos devices
    entities:
      - media_player.dining_room
      - media_player.living_room
      - media_player.julia
      - media_player.mam_en_pap      

After that is done, and you have the All option in your input select, your automation code should include the option below:

        {%-elif is_state("input_select.sonos_outputdevice", "all") %}
          group.sonos 
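
Putting it together for your rooms, your sonos_entity template could then look like this (a sketch - I am guessing your players are called media_player.kitchen, media_player.living_room and media_player.bathroom, so adjust those to your real entity ids):

        sonos_entity: >
            {% if is_state("input_select.sonos_outputdevice", "Kitchen") %}
              media_player.kitchen
            {%- elif is_state("input_select.sonos_outputdevice", "Living room") %}
              media_player.living_room
            {%- elif is_state("input_select.sonos_outputdevice", "Bathroom") %}
              media_player.bathroom
            {%- elif is_state("input_select.sonos_outputdevice", "All") %}
              group.sonos
            {% else %}
              none
            {% endif %}

Note that the option strings (Kitchen, Living room, Bathroom, All) match your input_select exactly, including capitalization.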

Hope this helps in deciphering how the code works. If you still have issues, please post your input selects and automation code so I can have a look. I have to admit I have not tried the group option to play on all devices at the same time, but I would expect it to work as above (will try it now myself :slight_smile: )

Just completed the test. It works as above with the group and plays on all media players in the group.

Sounds fantastic. I’ll test now!

Yes, it works now! Thanks a lot Piggyback!
Next plan is to get this automation to trigger when I arrive at home. How did you do that?

GREAT!
Now to trigger the automation when you arrive home, you need to have something that indicates you are at home.
There are several options:

  • Through device tracking (I use Life360), but device trackers trigger when you are close to home, not exactly at home, so the message might play on the speaker before you are actually inside (see the sketch after this list).
  • Ping, which pings your device every x seconds and sets the sensor state accordingly when the device responds.
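
For example, here is a sketch of how your presence trigger could call the sonos_say script directly (assuming device_tracker.iphone8 from your first post is the tracker you want to use, and that you have the group.sonos from above):

- alias: Welcome home TTS
  trigger:
    - platform: state
      entity_id: device_tracker.iphone8
      from: not_home
      to: home
  action:
    - service: script.sonos_say
      data:
        sonos_entity: group.sonos
        volume: 0.3
        message: 'Welcome home, Sire! It is gooood to be the KING!'
        delay: '00:00:05'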

Hope this helps!

Wow that actually worked on the first try. How do you use PING when you have several hosts?
In my example I used:

  trigger:
    - platform: state
      entity_id: binary_sensor.ping_binary_sensor
      to: 'on'
  action:
    - service: light.turn_on
      entity_id:
        - light.tradfri_bulb_3

However, I now only have one host. I understand that if I add hosts under ping, then the sensor will be true if any of the hosts is home and responds to pings. Is there a way to ask whether specifically this host is home?

You can specify multiple IP addresses as:

binary_sensor:
  - platform: ping
    name: ikeatradfri
    host: 192.168.0.1
  - platform: ping
    name: iungo
    host: 192.168.0.2

Took this from a discussion on one of the pages. Each of them would then become a separate binary_sensor.

Aha, that sounds excellent. I did get an error on my config though, and I cannot seem to find out how I go from binary_sensor.ping_binary_sensor to the newly declared name. Would that be binary_sensor.ping_declared_name?

After troubleshooting this a bit, I saw that others on this forum refer to the ping documentation, and it does cover declaring a name. Have you gotten this to work?

Not sure what you did, but I just added a second ping to my config to see:

  - platform: ping
    host: 192.168.1.70
    scan_interval: 30
    name: Samsung TV Status
  - platform: ping
    host: 192.168.1.39
    scan_interval: 30
    name: Nintendo Switch Status

and the result is that I see both under Developer Tools states.

I can also pull them into the Lovelace frontend.

Aha - so the entity to use in the automation, for your example, would be binary_sensor.ping_samsung_tv_status. Is that correct?

In an automation you could indeed use it as a trigger when the TV turns on. Note that the entity_id is taken from the name, so it is binary_sensor.samsung_tv_status (without a ping_ prefix). Example:

  trigger:
    - platform: state
      entity_id: binary_sensor.samsung_tv_status
      to: 'on'
  action:
    - service: light.turn_on
      entity_id:
        - light.tradfri_bulb_3

I’ve been playing with this today and it’s pretty awesome. I’ve noticed something unfortunate: when I play back the TTS alert on a single speaker (using the media_player.office as the entity) the original playlist gets restored after a few seconds; if I create the All Sonos Devices group and use that as the entity none of the speakers reset to their original playlist after making the alert announcement. :frowning:

Is there a way to get that to work?