Node-RED is quite separate from Home Assistant, so there is a learning curve for both Home Assistant and Node-RED.
You will not always find tutorials online, as not many people use both Home Assistant and Node-RED, and fewer still use them for something as specific as this.
This is also a little complicated, since the settings differ depending on the exact TTS action (service) you are using. It confuses even those of us who pretend to understand.
With any action, it is important to get it working in Home Assistant first. If it does not work there, it will never work in Node-RED.
As you are using the action
text-to-speech.speak (tts.speak)
you will need to set a target entity, as well as the message and the language. This matters for getting the settings right in Node-RED. If any setting is wrong, it will not work.
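For reference, a working call in Developer Tools might look something like this. The entity IDs here are only examples (taken from the settings used further down); substitute your own:

```yaml
# Home Assistant: Developer Tools > Actions, YAML mode
# Entity IDs below are examples - replace with yours
action: tts.speak
target:
  entity_id: tts.google_translate_fr   # your TTS entity
data:
  media_player_entity_id: media_player.kitchen_display
  message: "test de bonne diffusion"
  language: "fr"
  cache: true
```

If this speaks on your media player, you have confirmed the target entity, media player, and language are correct before touching Node-RED.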
When you move to Node-RED, you use nodes. There is no YAML file; the settings are all done in the node's UI editor.
In Node-RED
- you need the WebSocket nodes (the nice blue ones, from the node-red-contrib-home-assistant-websocket palette)
- you need the Home Assistant WebSocket server configured correctly (this connects Node-RED to Home Assistant)
I am going to assume that you have Node-RED as an add-on in Home Assistant and that your Home Assistant server configuration node is correctly set up and is working.
Then, in the Node-RED editor
add one Inject node (so you can manually start a flow)
add one Action node (so you can call the tts action in Home Assistant)
Here are the settings for the Action node
- I have renamed my HA server, but your node should already show ‘Homeassistant’
- the action you are using is
tts.speak
(shown as 'parler', i.e. 'speak', if your UI is in French)
- this action requires a ‘target’, and you are using ‘Google translate fr’ so you need to add an entity and select the correct one from the list
- then you need to add an object in the Data field with all the options
The Data object holds all the settings for the message and the language. It is very similar to the YAML used by Home Assistant, but it must be valid JSON.
The settings you require are
{
"cache": true,
"media_player_entity_id": "media_player.kitchen_display",
"message": "test de bonne diffusion",
"language": "fr"
}
which gives you the media player you want (change this to your own), plus the message and the language. Cache keeps the generated audio file for re-use.
I have tested this using my settings, and it works very well for me.
Once you have this working, you can change the Data object (as long as you keep the J: option for JSONata) and use
{
"cache": true,
"media_player_entity_id": "media_player.kitchen_display",
"message": payload,
"language": "fr"
}
which will accept an input message with the text to speak in the standard msg.payload. Your Action node will then speak any message you send to it.
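To feed that msg.payload, you can type a string straight into the Inject node, or put a Function node between Inject and the Action node. A minimal sketch of what a Function node body does (the message text is just an example; `functionNodeBody` is only my name for the wrapper Node-RED generates around your code):

```javascript
// Node-RED wraps the text you paste into a Function node in a
// function that receives the incoming `msg` object - simulated here:
function functionNodeBody(msg) {
  // --- this part is what you would paste into the Function node ---
  msg.payload = "le lave-vaisselle a terminé"; // example text - use your own
  return msg;
  // ----------------------------------------------------------------
}

// Node-RED passes each incoming message through and sends the
// returned msg onward; payload is what the Action node will speak.
const out = functionNodeBody({ topic: "tts" });
console.log(out.payload);
```

The same idea works for dynamic messages: build the string from sensor states or other msg properties, set it on msg.payload, and the JSONata `payload` reference in the Data object picks it up.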
I hope this helps you to get started. Good luck!