🗣️ Custom Integration: OpenAI TTS with custom endpoint support + chime option + audio normalization

The OpenAI TTS component for Home Assistant makes it possible to use the OpenAI API to generate spoken audio from text. This can be used in automations, assistants, scripts, or any other component that supports TTS within Home Assistant.
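For example, once the integration is configured, the resulting TTS entity can be called from a script or automation with Home Assistant’s standard tts.speak action. The sketch below is minimal; tts.openai_tts and media_player.living_room_speaker are placeholders for whatever entity IDs exist in your own setup:

service: tts.speak
target:
  entity_id: tts.openai_tts                                   # placeholder; use the entity created by your config entry
data:
  media_player_entity_id: media_player.living_room_speaker    # placeholder speaker
  message: "Welcome home!"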


Features

  • Text-to-Speech conversion using OpenAI’s API

  • Support for multiple languages and voices – No special configuration needed; the AI model auto-recognizes the language.

  • Customizable speech model – Check supported voices and models.

  • Integration with Home Assistant – Works seamlessly with assistants, automations, and scripts.

  • Custom endpoint option – Allows you to use your own OpenAI compatible API endpoint.

  • :star: (New!) Chime option – Useful for announcements on speakers. (See Devices → OpenAI TTS → CONFIGURE button)

  • :star: (New!) User-configurable chime sounds – Drop your own chime sound (MP3) into the config/custom_components/openai_tts/chime folder.

  • :star: (New!) Audio normalization option – Uses more CPU but improves audio clarity on mobile phones and small speakers. (See Devices → OpenAI TTS → CONFIGURE button)

Caution! You need an OpenAI API key and some balance available in your OpenAI account!

See pricing: https://platform.openai.com/docs/pricing

YouTube voice demo (not a tutorial!)


HACS installation (preferred!)

  • Go to the sidebar HACS menu

  • Click on the 3-dot overflow menu in the upper right and select the “Custom Repositories” item.

  • Copy/paste https://github.com/sfortis/openai_tts into the “Repository” textbox and select “Integration” for the category entry.

  • Click on “Add” to add the custom repository.

  • You can then click on the “OpenAI TTS Speech Services” repository entry and download it. Restart Home Assistant to apply the component.

  • Add the integration via the UI, provide your API key, and select the required model and voice. Multiple instances may be configured.

Manual installation

  • Ensure you have a custom_components folder within your Home Assistant configuration directory.

  • Inside the custom_components folder, create a new folder named openai_tts.

  • Place the repository files inside the openai_tts folder.

  • Restart Home Assistant

  • Add the integration via the UI, provide your API key, and select the required model and voice. Multiple instances may be configured (see the example automation below).
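As a usage sketch after either installation path, here is a simple announcement automation built on the same tts.speak action. All entity IDs are placeholders, and whether a chime plays before the message depends on the options you enable under Devices → OpenAI TTS → CONFIGURE:

alias: Announce washing machine finished
trigger:
  - platform: state
    entity_id: sensor.washing_machine_status                  # placeholder sensor
    to: "finished"
action:
  - service: tts.speak
    target:
      entity_id: tts.openai_tts                                # placeholder TTS entity
    data:
      media_player_entity_id: media_player.kitchen_speaker     # placeholder speaker
      message: "The washing machine has finished its cycle."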


Repo → https://github.com/sfortis/openai_tts (Custom TTS component for Home Assistant. Utilizes the OpenAI speech engine or any compatible endpoint to deliver high-quality speech. Optionally offers chime and audio normalization features.)


How can I provide it with “instructions”, which the latest models now offer? For example, “speak in a whisper”, etc. I’m assuming that’s not possible without some underlying code changes in the integration. FYI, the gpt-4o-mini-tts model is working for me with the integration.

For example:

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.audio.speech.create(
  model="gpt-4o-mini-tts",
  voice="coral",
  input="Today is a wonderful day to build something people love!",
  instructions="Speak in a cheerful and positive tone.",
)

P.S. I’m using this with ChimeTTS, so I’m not sure which integration should get the “instructions” parameter. ChimeTTS would make sense, but I’ll take it wherever I can get it.

@bluespice The new GPT-4o-mini-TTS model and the instructions configuration have been added in the latest version.
