I can at least try…
I have been using HA for about 2 years, and I am finding the best way to learn about it is to try and answer questions on this forum - it forces me to try and better understand what is going on. If I can’t understand something I really can’t explain it!
So, a bit more exploring, I have found out lots of stuff I did not know, and I am (now) a little bit wiser.
HA provides services that can be called. Strictly speaking, it is the integrations that add entities, and at the same time they may also provide services that can be used to set or change some of their own (integration-owned) entities.
A service call therefore needs three things: the service to call, a target (an entity, or an area or device with associated entities), and some data to pass to the call. When called, the service does something to or with the entity (usually its state).
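As a concrete example, here is a plain service call in YAML, of the kind you can test in Developer Tools (the light entity here is just an illustration):

```yaml
# A basic service call: the service, a target, and some data
service: light.turn_on
target:
  entity_id: light.kitchen       # illustrative entity name
data:
  brightness_pct: 50             # data passed to the call
```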
A brand new HA out of the box starts with no integrations - no entities, and therefore no service calls.
However, there are a few services that HA provides itself. These are the Home Assistant ones (hassio, homeassistant, core, etc.), but there are also the ‘Building Block’ integrations.
These ‘integrations’ are built into HA, but don’t have their own entities, just service calls. They are intended to be used by other integrations, which then build on top of them.
- todo (lists)
- tts (text to speech)
- weather
- vacuum
- time
- stt (speech to text)
and so on - now I look there are loads of them!
Each of these, in the documentation, has a note to the effect that it cannot be directly used:
> Building block integration
> The light integration cannot be directly used. You cannot create your own light entities using this integration. This integration is a building block for other integrations to use, enabling them to create light entities for you.
There you have it. The building block integrations don’t have entities.
Interestingly, most of these “you can’t use” integrations have services but do not show up in the Developer Tools > Services test list. Which is quite sensible really - if you can’t use them, why show them?
However - at least two do. The ‘todo’ list and the ‘tts’ show up.
So, here are two services that we supposedly can’t use. But we can - they are just different (and a little more difficult) to use.
The reason that these ‘building block integrations’ are difficult to use is that they do not work directly on entities, just on targets (and the target is supposed to be another integration that builds on them with its own entities).
If you want to use a service in Node-RED, a good place to start is to test it out in Home Assistant first!
So here is the tts.speak service - yes I can select it, but hey, it is asking for a ‘target’!
So, what is the ‘target’? Turns out that this comes from another integration - in my case I had added the Google Translate Text-to-Speech integration, which adds just the one entity, and I can select that entity here as the ‘target’.
So what I think is happening is this: the text-to-speech building block provides the basic services for doing the HA work, but requires another integration to do the talking. tts.speak takes the message and passes it to the target TTS integration, and that integration passes the audio on to the media_player entity named in the data.
Here is tts.speak, using the tts.google_en_co_uk entity as the target, and setting ‘media_player.kitchen_display’ inside the data object as the thing to speak on.
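In YAML, that call looks like this (a sketch of what the UI builds - the message text is just an illustration):

```yaml
service: tts.speak
target:
  entity_id: tts.google_en_co_uk                          # the TTS engine entity
data:
  media_player_entity_id: media_player.kitchen_display    # the device that speaks
  message: Hello from the kitchen!                        # illustrative message
```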
The important things to note here are:
- you need two integrations to make this work: one for the target (the TTS engine), and one for the media devices
- the media device is set in the data object and is passed on to the target integration, so this is where you choose what actually plays the audio
How to use more than one media device? Use an array!
```yaml
service: tts.speak
target:
  entity_id:
    - tts.google_en_co_uk
    - tts.google_uk_co_uk
data:
  cache: false
  media_player_entity_id: [media_player.bedroom_display, media_player.kitchen_display]
  message: This is my message folks!
```
I have experimented and added two entities under the Google Translate - Text-to-Speech integration, and yes, you can have multiple targets too (although it does not seem to work with both - only the last one in the list speaks).
Here are my integrations - you can see I have Google Cast (which is the one that produces the media_player entities) and the Google Translate text-to-speech integration, which has two entities. These entities are the targets for the tts.speak service call!
So back to Node-RED
Using the Call Service node requires, to start with, the service domain and the service.
In this case, we are using tts for the domain.
If you use ‘speak’, then the entity must be the target - that is, the Google Translate entity - and the media player (one, or an array of them) has to go in the data field.
What happens here is that the service call goes to the building block tts.speak, which must have the Google Translate entity as its target, and the media device(s) to play on go into the data field as shown.
The service call invokes tts.speak, which calls google_uk_co_uk and passes on the (list of) media players, which then do the speaking.
This is why you need to add the media player in the data field, and if you want a list of them, use an array:
```json
{
  "message": "My name is hanna",
  "media_player_entity_id": [
    "media_player.bedroom_display",
    "media_player.kitchen_display"
  ]
}
```
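For reference, the full service call this node ends up making looks roughly like this in YAML (a sketch, using my entity names):

```yaml
service: tts.speak
target:
  entity_id: tts.google_uk_co_uk       # the Google Translate entity (the 'target')
data:
  message: My name is hanna
  media_player_entity_id:
    - media_player.bedroom_display
    - media_player.kitchen_display
```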
But why can’t I have the media list in the entity list as I want?
Since we are in effect making a service call that calls a service that then calls another service, this is not going to fit into the Call Service node as you would expect. This is ‘service call forwarding’ (I don’t know if that is a technical term - I just made it up).
Now, the Google Translate text-to-speech integration adds its own service. This shows up in the list as a new service:
Text-to-speech (TTS): Say a TTS message with google_translate, or tts.google_translate_say
What this service does is offer the ability to speak, but using the building block tts underneath (a backward service call, if you like).
Here is the same attempt at calling a speaking service - this time I am using google_translate_say (which knows about and uses tts.speak), and I can now enter the media players as the entities, in a list, as expected.
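As a sketch, the legacy call looks something like this (I am assuming the modern target syntax here, and the entity names are from my setup):

```yaml
service: tts.google_translate_say
target:
  entity_id:                           # here the media players ARE the entities
    - media_player.bedroom_display
    - media_player.kitchen_display
data:
  message: This is my message folks!
```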
This works, but as you kindly pointed out, google_translate_say is now legacy, and we should use tts.speak, with the Google Translate entity as the target.
Conclusion:
- Use tts.speak
- Set up something like Google Translate (to get the target entity)
- Call the service, using the Google Translate entity (the target) as ‘entity’
- Put the (list of) media players into the data object under media_player_entity_id
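Putting it all together, a final sketch (the entity names are from my setup - yours will differ):

```yaml
service: tts.speak
target:
  entity_id: tts.google_en_co_uk       # the Google Translate TTS entity (the target)
data:
  media_player_entity_id:
    - media_player.kitchen_display     # the device(s) that do the speaking
  message: Glad we cleared that one up!
```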
Well, I’m glad we managed to clear that one up, although I think the documentation on tts should be updated to stop saying it cannot be used directly!