Script to resume Google Cast devices after they have been interrupted by any action

Any ideas about the problem?
I re-read the whole installation procedure, just to be sure I have already done everything on my side.
So, if the problem occurs only with Spotify, there is probably something wrong in my Spotify configuration.
I start playing music with Spotify on my iPhone… on the HA dashboard I can see which track is playing, so I assume the Spotify configuration is OK, isn’t it?

Looking at the procedure you wrote, the other thing that could prevent resuming is the spotcast integration. How can I check whether it is correctly integrated?

Please take a look here: Imgur: The magic of the Internet
The primary spotcast has to match the name in the red rectangle, is that correct?

I think it’s a timing issue. The automation is still saving the data when the Spotify stream has already been interrupted by the service call that plays the TTS message.

I could add a trigger to the automation so it triggers on the TTS service call instead of the media_player.play_media service call, but I expect that would cause other timing issues, with the script getting triggered twice.

The most failsafe approach is to use the script directly, like I posted above. Could you test it in Developer Tools > Services to see if the Spotify resume works then?

service: script.turn_on
target:
  entity_id: script.google_home_resume
data:
  variables:
    action:
      - service: tts.google_translate_say
        target:
          entity_id: media_player.camerastudio
        data:
          message: This is a TTS message
        extra:
          volume: 0.8

In my case, in order for it to work, I need to change the service. If I use tts.cloud_say, it doesn’t work: it doesn’t say any message. I do not know if this helps, or if it is what you meant.

Anyway, when I ran the modified script from the developer tools as you suggested, it worked…
For the time being, I can run your script only from the developer tools.

I started learning Home Assistant directly with Node-RED, so frankly speaking I do not know any other way to build automations. I was thinking of using Google as some sort of voice advisor… e.g. if the temperature gets too low, have it tell me to turn on the heating… or anything else… and with Node-RED it is very convenient to do this.

What can we do next to understand what the problem is when calling the “tts.google_translate_say” service from a Node-RED node?

There is not much more I can do about this.

tts.cloud_say was just an example I used. I didn’t know which TTS service you are using.

In my system the automation is quick enough to catch Spotify still playing, but it seems that might not be the case on all systems.

I have almost no experience with Node-RED. If it is possible in Node-RED to use the service call that starts the script instead of the TTS service call, you can give that a try. But I can’t help you with that.

The native HA automation GUI has improved a lot in the last few releases; you could check whether you can use that for the TTS automations.
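For example, a native HA automation for the low-temperature voice notice mentioned earlier could look roughly like this. This is only a sketch: the temperature sensor, the 18-degree threshold, and the spoken message are made-up placeholders; only the script and speaker names come from this thread.

```yaml
# Hypothetical example: announce low temperature through the resume script.
# sensor.living_room_temperature and the threshold are placeholders.
alias: Low temperature voice notice
trigger:
  - platform: numeric_state
    entity_id: sensor.living_room_temperature
    below: 18
action:
  - service: script.turn_on
    target:
      entity_id: script.google_home_resume
    data:
      variables:
        action:
          - service: tts.google_translate_say
            target:
              entity_id: media_player.camerastudio
            data:
              message: The temperature is getting low, consider turning on the heating
```

Created through the GUI, this ends up as the same YAML, so the trigger and action can be built entirely in the automation editor.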

Hey man, I have been running a lot of tests, sadly more than I needed to; I made a typo when editing group variables in the script and wasted so much time, lol. But I finally got everything to work smoothly by calling your script. I had to include the script_extra variable to set the TTS message volume and to keep the TTS message synced across all speakers (with no music playing, the TTS message was not synced and some players would miss the message completely or catch just the end of it). Just some information.

Before I came to my solution I had a pretty big message typed up with trace logs and such; if you would like me to submit that information just for you to have, I will gladly do so, and/or run some different tests if YOU want to keep pursuing the issue I had. I am satisfied with the solution of calling your script. I really appreciate your work and all the effort you put in to help me and everyone else <33.

I do have one last question, just out of curiosity: does the resume volume function work for you when calling a TTS without your script while no media is playing (speaker state is off), or is that a just-me problem for some reason?

Thanks for letting me know. Just out of curiosity, what is the service call you ended up with?

I did test volume restore on a non-playing speaker and it worked for me. Could you maybe test it on just one speaker which is not playing, and send me the traces afterwards?

Hello again,

I’ve run some more tests… and I found I gave you wrong information… When I told you that it worked using the above script but not with Node-RED, that wasn’t true. The truth is that it was playing music with TuneIn and not with Spotify, so when I started the TTS message, it resumed from TuneIn and not Spotify. So I can confirm that while playing with Spotify, it does not resume, even when running the script from the developer tools.

Please send a trace of that test as well. Although I might already have an idea what’s wrong.

Here it is :

google home resume - Pastebin.com
google home resume helper - Pastebin.com

"error": "Could not find device with name Google Milena"

Are the names of the Google Cast speakers the same as in the Google Home app? In other words, do you have a speaker named Google Milena in the Google Home app?

Looking at the data in the script, it looks like the name in the Google Home app is Ufficio speaker

I made some changes to the entity names in order to make them “simpler”… so I will check again…
Anyway, thank you very much…

I have a couple of questions….

  1. To avoid boring you with such questions… could you let me know how you found that? Next time I will do some debugging of my own to find the error.
  2. Why does it occur only with Spotify and not also with TuneIn?
  1. I found it in the trace: line 6 shows the run ended because of an error, and line 13 shows the error.
    It should also be shown in the GUI, in the trace timeline tab.
  2. It is not an issue for TuneIn because there you simply have a URL for the stream, which needs to be sent to the speaker. Spotcast uses the Spotify API, and Spotify is not aware of Home Assistant at all, so it uses the device name (as shown in the Google Home app) to determine which device the stream should be sent to. Spotcast, on the other hand, doesn’t know the device’s name in the Google Home app, so it uses the name you gave the entity_id in Home Assistant. Therefore it is important that the name of the entity matches the name in Google Home.
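If you want to verify that mapping yourself, spotcast can, as far as I know, also be called directly from Developer Tools > Services. A sketch, using the device name from this thread; the exact set of supported fields is an assumption, so check the spotcast documentation for your version:

```yaml
# Hypothetical check: ask spotcast to start playback on a speaker by its
# Google Home device name. If the name does not match the Google Home app,
# it should fail with the same "Could not find device with name ..." error.
service: spotcast.start
data:
  device_name: Ufficio speaker
  shuffle: false
```

If this call succeeds while the resume script fails, the problem is most likely the entity name, not spotcast itself.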

Where do you load the files I sent you, so you can see the trace and this information?

Nowhere, I just read the text. The screenshot was an example from my own configuration.

I also opened the file to check… yes, I found the errors too.

When I gave you the trace files after calling the tts.google_translate_say service from Node-RED, you didn’t find anything, is that correct?
Is there a way to run the script in debug mode, to see what happens during execution?
If the script has been started, it means that it catches the event generated by Node-RED, is that correct?
If yes, we have to check whether the variables the script stores for resuming afterwards (I do not know how your script works, but I assume some information has to be stored) contain the right information, what they contain, and so on.

Is there any way to do something similar?

All that information is in the trace. In the one you sent 3 days ago the information provided by the automation is included:

            "player_data": [
              {
                "data_source": "resume_script",
                "entity_id": "media_player.camerastudio",
                "state": "idle",
                "volume_level": 0.4000000059604645
              },
              {
                "data_source": "resume_script",
                "entity_id": "media_player.cucinanest",
                "state": "off"
              },
              {
                "data_source": "resume_script",
                "entity_id": "media_player.salottomini",
                "state": "idle",
                "volume_level": 0.4000000059604645
              },
              {
                "data_source": "resume_script",
                "entity_id": "media_player.sm_t830_204",
                "state": "unavailable"
              }
            ]

There you can see that all players were either idle, off or unavailable, so for the script there is nothing to resume.

As you are saying that Spotify was playing when you sent the TTS, my assumption is that the automation is too late to store the active state of the player. It seems it was already interrupted by the TTS, but the TTS was not playing yet.
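One way to see what the automation sees at that moment is to watch the player state yourself, e.g. in Developer Tools > Template, once while Spotify is playing and again right after the TTS service call fires. A sketch; the entity name is taken from this thread:

```jinja2
State: {{ states('media_player.camerastudio') }}
App: {{ state_attr('media_player.camerastudio', 'app_name') }}
Position: {{ state_attr('media_player.camerastudio', 'media_position') }}
```

If the state already shows idle instead of playing with app_name Spotify before the TTS actually starts, the automation has no chance to capture the stream.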

Here’s the code for the service call. I wasn’t 100% sure what you were asking for, but I figure that answers it.

service: script.google_home_resume
data:
  action:
    - service: media_player.play_media
      target:
        entity_id: media_player.all_speakers
      data:
        media_content_id: media-source://tts/google_translate?message=You+got+mail%21
        media_content_type: provider
        script_extra:
          volume: 0.6
      metadata:
        title: You got mail!
        thumbnail: https://brands.home-assistant.io/_/google_translate/logo.png
        media_class: app
        children_media_class: null
        navigateIds:
          - {}
          - media_content_type: app
            media_content_id: media-source://tts
          - media_content_type: provider
            media_content_id: media-source://tts/google_translate?message=You+got+mail%21

And here’s a trace for just one speaker calling tts.google_translate_say. The volume returned to its previous state, so it’s probably not much use; it likely looks the same as yours.
script dpaste/oHcgf (YAML)
helper dpaste/2m28C (YAML)

Hello,

I am going to do further tests; I would like to find a solution to this situation and understand better why it does not work.
Today I executed this script:
service: script.turn_on
target:
  entity_id: script.google_home_resume
data:
  variables:
    action:
      - service: tts.google_translate_say
        target:
          entity_id: media_player.camerastudio
        data:
          message: This is a TTS message
        extra:
          volume: 0.9

Unfortunately, it did not resume.

I do not know how your script works, but this is what I found inside the resume script:

"player_data": [
  {
    "data_source": "resume_script",
    "entity_id": "media_player.camerastudio",
    "state": "playing",
    "app_name": "Spotify",
    "volume_level": 0.4000000059604645,
    "media_content_id": "spotify:track:3vjaynqkptcF4eLYpK6rnd",
    "media_position": 49.86400005722046,
    "members": [],
    "type": "no screen",
    "spotcast": "primary_account"
  },

While in the helper it is empty, I do not know if it is normal.

"start_time": "2022-11-18 10:43:31.707087+02:00",
"player": {
  "data_source": "resume_script",
  "entity_id": "media_player.camerastudio",
  "state": "idle",
  "volume_level": 0.44999998807907104,
  "members": [],
  "type": "no screen"

These are the complete traces:
google home resume helper - Pastebin.com
Google home resume - Pastebin.com

Could you check what is happening?

I also have another question: the scripts you wrote, in what language are they written?

The script, which is started from the developer tools, first calls the script script.google_home_resume, to which it passes the tts.google_translate_say service call.
Probably the same thing could be done from the Node-RED nodes, in which I am currently calling only the tts.google_translate_say service without the script.google_home_resume script… This is why I am asking about the script language.

Sorry for the stupid question, but in my previous post I was trying to post the information the same way you did, and I do not know how. What do you press on the interface to show code the way you posted it, with the option to copy the text from the upper right corner?

The trace from this helper script is not related to the trace of the main script: the timestamp on the helper script is 10 minutes before the timestamp on the main script, and as the main script is started first, they cannot be related.

In this case, the main script seems to have the correct data, Spotify was actually playing according to the data, so the helper script should have been started to resume the stream.

However, this time there was an error in the main script, one I’ve never seen before: "Chromecast 10.10.10.200:8009 is connecting..."

According to the trace it was the result of the tts.google_translate_say service call, and it looks like your receiving device was not ready to receive the TTS.

      "sequence/17/repeat/sequence/5": [
        {
          "path": "sequence/17/repeat/sequence/5",
          "timestamp": "2022-11-18T08:53:21.097602+00:00",
          "child_id": {
            "domain": "automation",
            "item_id": "417395bc-bd64-40a3-b20a-9062d426a01f",
            "run_id": "a1b9672bb43bbcf7f11737c8d5a75cea"
          },
          "error": "Chromecast 10.10.10.200:8009 is connecting...",
          "result": {
            "params": {
              "domain": "tts",
              "service": "google_translate_say",
              "service_data": {
                "message": "This is a TTS message",
                "entity_id": [
                  "media_player.camerastudio"
                ]
              },
              "target": {
                "entity_id": [
                  "media_player.camerastudio"
                ]
              }
            },
            "running_script": false,
            "limit": 10
          }
        }
      ]
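If the speaker really is slow to connect, one possible workaround is to wait until the player is available again before sending the TTS. This is only a sketch under that assumption; the timeout value is arbitrary and the entity name is taken from this thread:

```yaml
# Hypothetical sketch: wait for the speaker to leave the unavailable state
# before sending the TTS, instead of failing on "Chromecast ... is connecting".
- wait_template: "{{ states('media_player.camerastudio') not in ['unavailable', 'unknown'] }}"
  timeout: "00:00:10"
  continue_on_timeout: false
- service: tts.google_translate_say
  target:
    entity_id: media_player.camerastudio
  data:
    message: This is a TTS message
```

Whether this helps depends on why the Chromecast dropped its connection in the first place.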

The script is basically formatted in YAML, and the templates (the parts between {{ and }} or {% and %}) are written in Jinja2, which is the templating language used in Home Assistant.

To format text as code, you put it between two lines that each contain three backticks (```); optionally you can add the language after the opening backticks, e.g. ```yaml. The forum then renders it as a code block with a copy button in the upper right corner:

your code here

As far as I know, you can issue service calls in Node-RED and add the service data to them, so you should be able to replicate that service call in your Node-RED flow.