TTS with Sonos

Does your Sonos resume after the TTS when using node-sonos-http-api?

I have set mine up (via Docker) and I cannot for the life of me get it to resume after playing the TTS.

Yes, I’m using the API call for “saypreset”. Once the TTS is done, whatever was playing will usually resume.

How did you create the preset?

You don’t, it’s part of the sonos-http-api. You just need to call it, as I indicated in the Sept 1 post.

I run a Node.js server with the sonos-http-api installed. Here is some info on the API.

https://github.com/jishi/node-sonos-http-api

Hope this helps!

The presets are configured in your presets folder; based on your config above, you would have home and home-all.

Could you post those?

For example, here is one I created:

  "players": [
    {
      "roomName": "Office Sonos",
      "volume": 65
    }
  ],
    "playMode": {
    "repeat": "false",
    "crossfade": false
  }
}

Dave

Oh, I forgot about those. Yes, I have many presets for different scenarios. Here are two.

home.json

{
  "players": [
    {"roomName": "Kitchen", "volume": 60},
    {"roomName": "Lower Landing", "volume": 60},
    {"roomName": "Living Room", "volume": 60},
    {"roomName": "Loft", "volume": 60}
  ]
}

home_night.json

{
  "players": [
    {"roomName": "Kitchen", "volume": 40},
    {"roomName": "Lower Landing", "volume": 40},
    {"roomName": "Living Room", "volume": 40},
    {"roomName": "Loft", "volume": 40}
  ]
}

I had changed the format a bit to control the group_mode in the automation, so my automation now looks like this:

data_template:
  group_mode: |
    {% if is_state("switch.broadcast_to_pool", "off") and 
          is_state('sun.sun','above_horizon') -%}
       home
    {%- elif is_state("switch.broadcast_to_pool", "off") and 
             is_state('sun.sun','below_horizon') -%}
        home_night
    {%- elif is_state("switch.broadcast_to_pool", "on") and 
             is_state('sun.sun','above_horizon') -%}
        all
    {%- elif is_state("switch.broadcast_to_pool", "on") and 
             is_state('sun.sun','below_horizon') -%}
        all_night
    {%- endif %}
  message: '{{ trigger.to_state.attributes.friendly_name }} is open'
service: rest_command.sonos_bcast
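
For context, that action sits inside a normal state-triggered automation; a minimal sketch, assuming a hypothetical door sensor (the real group_mode value comes from the template above):

automation:
  - alias: Announce open doors on Sonos
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door   # hypothetical sensor name
        to: 'on'
    action:
      - service: rest_command.sonos_bcast
        data_template:
          group_mode: home   # replaced by the sun/pool template shown above
          message: '{{ trigger.to_state.attributes.friendly_name }} is open'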

sonos_bcast looks like this:

# Rest commands
rest_command:
  sonos_play_clip:
    url: http://10.1.1.2:5005/clippreset/{{ group_mode }}/{{ clip_name }}
    method: GET

  sonos_bcast:
    url: http://10.1.1.2:5005/saypreset/{{ group_mode }}/{{ message }}
    method: GET

You can see I do the same thing for audio clips, like custom doorbells and alarms.
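
For example, the clip version can be fired from a script or automation by passing the preset and file name in as data (the script name and clip file name here are just examples):

script:
  doorbell_clip:
    sequence:
      - service: rest_command.sonos_play_clip
        data:
          group_mode: home          # which preset/group to play on
          clip_name: doorbell.mp3   # example clip file name served by node-sonos-http-api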

So there isn’t anything unique in the preset that would indicate the music should resume.

How are you starting whatever is playing? Is it from Spotify/Apple Music, started from within the Sonos app, or triggered by a connected TV?

The integration I mentioned above is just installed from the add-on store in Supervisor. It will not stop the currently playing music, but will (just like your assistant) lower the volume of the music, say its message, and turn the volume up again. It uses the Sonos API for this; before using it, check whether your Sonos speaker is supported.

If I’m not mistaken, this is cloud-based, and if the internet is lost, so is this integration. Is that correct?

I didn’t test it without the internet, but the documentation says it relies on the cloud. So I guess you’re right, but it’s not a big deal. With no internet connection, I’m not sure my Sonos itself would work either. At least things like travel time won’t make it through anyway, so I’m not that worried about the cloud part for my Sonos.

Not sure if this is the right place for this, but if it helps anyone, I’ve used the following script to play something to the Sonos (in the example below, it allows my wife to ask for a cup of tea via a Z-Wave trigger :slight_smile: ).

The script pauses the Sonos, makes a “doorbell” sound (so I know there’s an incoming message), reads the message, then restarts whatever the Sonos was playing.

I’ve used the same template for a number of automations now, and it works flawlessly on our speakers (specifically 2 x IKEA Symfonisk lamps).

cup_of_tea:
  alias: 'Cup Of Tea'
  sequence:
    # Snapshot what the Sonos is currently playing so it can be restored later
    - service: sonos.snapshot
      data:
        entity_id: media_player.living_room
    # Drop the speaker out of any group before the announcement
    - service: sonos.unjoin
      data:
        entity_id: media_player.living_room
    # Set a sensible announcement volume
    - service: media_player.volume_set
      target:
        entity_id: media_player.living_room
      data:
        volume_level: 0.5
    # Play the "doorbell" sound so I know there's an incoming message
    - service: media_player.play_media
      data:
        entity_id: media_player.living_room
        media_content_id: http://<ServerIP>:<ServerPort>/local/doorbell.mp3
        media_content_type: music
    - delay:
        hours: 0
        minutes: 0
        seconds: 2
        milliseconds: 50
    # Read the message
    - service: tts.google_say
      data:
        entity_id: media_player.living_room
        message: Please can I have a cup of Tea. Thank You.
        language: en-uk
    # Give the TTS time to finish before restoring
    - delay:
        hours: 0
        minutes: 0
        seconds: 5
        milliseconds: 0
    # Restore whatever the Sonos was playing before
    - service: sonos.restore
      data:
        entity_id: media_player.living_room
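
To wire it to the Z-Wave trigger I mentioned, the automation is roughly like this (the button entity below is hypothetical; use whatever your device exposes):

automation:
  - alias: Cup of tea request
    trigger:
      - platform: state
        entity_id: binary_sensor.kitchen_button   # hypothetical Z-Wave button entity
        to: 'on'
    action:
      - service: script.cup_of_tea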

A couple of things to note:

  1. I’m not using any additional integrations aside from the out-of-the-box Sonos one.
  2. I’m running HA OS (core-2021.3.3 at the time of writing) on an Intel NUC.
  3. The audio files are physically located in /config/www - I tried using both an HA media directory and another subfolder in /config, but Sonos wouldn’t have any of it. It seems it will only read audio files from within the www folder.
  4. The media_content_type will only work with type “music” (and not the more usual “audio/mp3”).

Hope this helps!

Hello, can you share your flow to do this? I am a beginner and am having issues. I just want to send node-sonos the name of the entity and the state.

Hello @p4mr, this is an amazing-looking script, and exactly what I want to implement. Only issue… I added your script to an empty automation and I get the following error…

Message malformed: extra keys not allowed @ data['sequence']

I have tried a number of different approaches, but alas it doesn’t work. I’m really new to TTS and Sonos, but I generally work things out… I just need a nudge in the right direction.

It’s a script, not an automation. They are different (although they look similar).
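
If it helps, a script like that goes under the script: key in configuration.yaml (or into scripts.yaml without that top-level key), roughly like this:

script:
  cup_of_tea:
    alias: 'Cup Of Tea'
    sequence:
      - service: sonos.snapshot
        data:
          entity_id: media_player.living_room
      # ...followed by the rest of the sequence from the post above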

Yes, I have done a ton of research since my post and worked that out. I assume the script is entered via the ‘manual’ option from Lovelace, so I will go down that road. Like I said… I just needed a nudge in the right direction.

Hi @p4mr… I finally got around to using your script, and it’s asking me to input the card type. Is there a generic card type I use for a manual script? Apologies for the random question.

I assume this is when you’re trying to add a “card” to the Lovelace dashboard? (I can’t think of anywhere else a card is mentioned, so I’m a bit confused here.)

If that’s what you mean, then edit the Overview page, add a card, select the entity tab, choose the script, then Lovelace will offer you a suggestion. You should be able to use whatever it suggests, and it will work.
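
If you’d rather add it by hand, the suggestion usually boils down to a simple entities card, something like this (assuming the script from above):

type: entities
entities:
  - entity: script.cup_of_tea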

Update… managed to get it working… freaked the wife out… so it is awesome :+1: :slight_smile:

I want to share my script for TTS with Piper and Sonos.

How it works:

  • triggered by an automation like this
          - service: script.turn_on
            entity_id: script.jarvis_speak
            data:
              variables:
                mymessage: |
                  {{ state_attr('sensor.notification_message', 'msg') }}
                myplayer: media_player.sonos_bedroom

  • the script uses an input_boolean as a lock so that only one announcement plays at a time; that’s why I have the jarvis_announcement_wait boolean
  • it uses input_number.jarvis_<<room>>_volume so the announcement volume can be adjusted per room
  • it plays a little “ding” sound prior to the announcement
  • it uses Piper, and with trial and error I figured out how many words per minute “ryan” speaks, so that the script ends right when the TTS announcement finishes in case there are multiple announcements to be made per room

alias: jarvis_speak
sequence:
  - repeat:
      while:
        - condition: state
          entity_id: input_boolean.jarvis_announcement_wait
          state: "on"
      sequence:
        - delay:
            hours: 0
            minutes: 0
            seconds: 1
            milliseconds: 0
  - service: input_boolean.turn_on
    target:
      entity_id:
        - input_boolean.jarvis_announcement_wait
    data: {}
  - service: media_player.play_media
    data:
      media_content_id: /local/jarvis-chime.wav
      media_content_type: music
      announce: true
      extra:
        volume: >-
          {{ states('input_number.jarvis_' + myplayer |
          replace('media_player.','') + '_volume') }}
    target:
      entity_id: "{{ myplayer }}"
  - delay:
      seconds: 1
  - service: media_player.play_media
    data:
      media_content_type: music
      announce: true
      media_content_id: >-
        media-source://tts/tts.piper?message={{ mymessage | replace('&',
        'and') }}
      extra:
        volume: >-
          {{ states('input_number.jarvis_' + myplayer |
          replace('media_player.','') + '_volume') }}
    target:
      entity_id: "{{ myplayer }}"
  - delay:
      seconds: >
        {% set text = mymessage | replace('&', 'and') %} {{ (text.split(' ')
        | length * 60 / 150) | round(0, 'ceil') }}
  - service: input_boolean.turn_off
    target:
      entity_id:
        - input_boolean.jarvis_announcement_wait
    data: {}
mode: queued
max: 10
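
For completeness, the script assumes the helper entities already exist. Created in YAML they would look roughly like this (they can also be made in the UI; the volume range here is just a guess, so match whatever scale your player expects):

input_boolean:
  jarvis_announcement_wait:
    name: Jarvis announcement in progress

input_number:
  # one per player, named input_number.jarvis_<player object id>_volume
  # to match how the script builds the entity id from myplayer
  jarvis_sonos_bedroom_volume:
    name: Jarvis bedroom announcement volume
    min: 0
    max: 100
    step: 1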