DeepSeek integration

Hello, are you trying to use DeepSeek API keys with the OpenAI URL?

I didn't find a way to change the URL in the fork. But the original component now has it, so I'm using that.


That's it, it should work now.
For some reason the Home Assistant developers keep rejecting any PRs that try to add an easier option to switch the OpenAI endpoint in the official integration.

  1. Install hass-environmental-variable
  2. Add an `environment_variable:` entry to your configuration.yaml that sets `OPENAI_BASE_URL` to "https://api.deepseek.com/v1"
  3. Restart your system and add the OpenAI Conversation integration; when asked for the API key, use the one you created for DeepSeek
  4. Open the integration and uncheck "Recommended model settings"
  5. Set "model" to "deepseek-chat" and increase maximum tokens to 1024, then reload the integration
    (Taken from Reddit)
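Step 2 above flattens the YAML onto one line; written out, the configuration.yaml fragment would look like the sketch below (assuming the hass-environmental-variable custom component from step 1 is installed):

```yaml
# configuration.yaml
# Requires the hass-environmental-variable custom component (HACS).
# Sets OPENAI_BASE_URL before the OpenAI Conversation integration starts,
# so its client talks to DeepSeek instead of api.openai.com.
environment_variable:
  OPENAI_BASE_URL: "https://api.deepseek.com/v1"
```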


You can also host the DeepSeek-R1 model for free on your Azure AI Foundry account and connect it using the Azure OpenAI Conversation custom component (on GitHub) from the HACS store.

Important Configuration:

  1. The base URL must include the API version (you can check this by navigating to your model + endpoint section and selecting your deployed model), e.g. https://ai-xxx.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview
  2. After finishing the service setup, click the CONFIGURE button and uncheck the recommended model settings. Click Submit, then configure again; you will now be able to change the model name to your deployed DeepSeek model (e.g. DeepSeek-R1-home).
  3. Then go to Settings > Voice assistants. Add a new assistant and select Azure OpenAI Conversation in the Conversation Agent dropdown. This is the default name; you can change it in its integration section.
  4. Happy chatting :smiley:
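If the integration refuses to connect, it is usually the base URL shape that is wrong. A minimal sketch of how the Azure endpoint URL and request body fit together, using the same placeholder resource name (ai-xxx) and example deployment name (DeepSeek-R1-home) as above; substitute your own values:

```python
# Build the Azure AI Foundry chat-completions URL and payload by hand,
# to double-check the values you paste into the custom component.
# "ai-xxx", the api-version, and the deployment name are placeholders —
# take the real ones from your model + endpoint page.
from urllib.parse import urlencode


def build_chat_url(resource: str, api_version: str) -> str:
    # The api-version query parameter is mandatory; omitting it is the
    # most common cause of a failing setup.
    base = f"https://{resource}.services.ai.azure.com/models/chat/completions"
    return f"{base}?{urlencode({'api-version': api_version})}"


def build_payload(deployment: str, prompt: str) -> dict:
    # "model" must be the *deployment* name you chose in Azure,
    # not the underlying base model name.
    return {
        "model": deployment,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1024,
    }


print(build_chat_url("ai-xxx", "2024-05-01-preview"))
# → https://ai-xxx.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview
```

POSTing `build_payload(...)` as JSON to that URL (with your `api-key` header) should return a completion; if it does, the same base URL and deployment name will work in the integration's configure dialog.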