Perplexity as an extension of Voice Assistant capabilities

Hello

The manual below is outdated; it’s easier to use: Perplexity as an extension of Voice Assistant capabilities - #9 by mulat
or: GitHub - Pekulll/Perplexity-Assistant: Perplexity Assistant brings the Perplexity conversational AI experience into Home Assistant.

—OUTDATED—

I would like to share my way of integrating with the Perplexity.ai API. I’m using Voice Assistant with Gemini 2.0 Flash and it works well for HA management; unfortunately, it lacks “live” knowledge.
I’m a daily user of Perplexity.ai, and since I couldn’t find any HA integration, I decided to build one using the proxy https://www.litellm.ai and GitHub - acon96/home-llm: A Home Assistant integration & Model to control your smart home using a Local LLM
A similar effect can be achieved using the REST API or by creating your own integration. The LiteLLM proxy lets us integrate more models; we can also use free models from https://openrouter.ai. It also makes it easy to manage multiple system prompts, for example if you want to build safe content for talking with kids.
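For reference, the direct REST call mentioned above looks roughly like this. This is a minimal sketch, not a full integration; the endpoint and payload shape follow Perplexity’s OpenAI-compatible chat completions API, and the function names are mine:

```python
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"

def build_payload(question: str, model: str = "sonar") -> dict:
    """Build an OpenAI-style chat completions payload for Perplexity."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Answer concisely."},
            {"role": "user", "content": question},
        ],
    }

def ask_sonar(question: str, api_key: str) -> str:
    """POST the question to the Perplexity API and return the answer text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The proxy route below does the same thing, but keeps the model choice and system prompt out of Home Assistant.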

How it works:

  • OK Nabu - wake up assistant
  • Ask Sonar “question”
  • “question” will be sent to Sonar API
  • response will go back to voice assistant

Configuration:

  1. litellm (provide your API key instead of ${PERPLEXITYAI_API_KEY}) - docker-compose.yaml:
services:
  litellm:
    container_name: litellm
    image: ghcr.io/berriai/litellm:main-latest
    ports:
      - "4000:4000"
    volumes:
      - /your_config_path/config.yaml:/app/config.yaml
    command: [ "--config", "/app/config.yaml", "--port", "4000"]
    environment:
      PERPLEXITYAI_API_KEY: ${PERPLEXITYAI_API_KEY}
  2. litellm - config.yaml:
litellm_settings:
  drop_params: true
model_list:
  - model_name: sonar
    litellm_params:
      model: perplexity/sonar
      api_key: "os.environ/PERPLEXITYAI_API_KEY"
  3. Install GitHub - acon96/home-llm: A Home Assistant integration & Model to control your smart home using a Local LLM
  4. Add a new Local LLM Conversation device:
    a) Generic OpenAI Compatible API
    b) Provide the IP (without http), port 4000, model sonar; the default API key is sk-1234
    c) On the next page: select “No control”, untick “Enable in context learning (ICL) examples”, set Temperature to 0.2, and delete all custom attributes
    d) My system prompt for answering questions (change the language if you need):
You answer questions about the world in a concise and truthful manner. Use plaintext and Polish language. Responses should be limited to a few sentences. Remove citation brackets [1][2][3] from answers.
Current time is {{ (as_timestamp(now()) | timestamp_custom("%H:%M", local=True)) }}, current date is {{ (as_timestamp(now()) | timestamp_custom("%Y-%m-%d", local=True)) }}. If the response includes a date from the current week, provide the day of the week.
  5. Create an automation that passes the questions to Sonar:
alias: Ask Sonar
description: ""
triggers:
  - trigger: conversation
    command:
      - Ask Sonar {pytanie}
conditions: []
actions:
  - action: conversation.process
    metadata: {}
    data:
      agent_id: conversation.llm_model_sonar_remote
      text: "{{ trigger.slots.pytanie }}"
    response_variable: odpowiedz
  - set_conversation_response: "{{ odpowiedz.response.speech.plain.speech }}"
mode: single
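In plain code, the text handling of the automation and system prompt amounts to something like this (a hedged sketch: the function names are mine, the slot extraction mirrors the `{pytanie}` wildcard, and the citation stripping mirrors what the system prompt asks the model to do):

```python
import re

def extract_question(command: str, prefix: str = "Ask Sonar ") -> str:
    """Mimic the {pytanie} slot: everything after the trigger phrase."""
    return command[len(prefix):] if command.startswith(prefix) else command

def strip_citations(answer: str) -> str:
    """Remove citation brackets like [1][2][3], as the system prompt requests."""
    return re.sub(r"\[\d+\]", "", answer).strip()
```

In the real setup the model is asked to do the stripping itself; a helper like this is only useful if you post-process the response in a template instead.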

The result video is in Polish, so you’ll have to trust me :sweat_smile::

PS: the assistant hardware is built based on: GitHub - KristopherMackowiak/ha_voice_assistant: Home Assistant DYI voice assistant
PS2: case from: https://makerworld.com/models/1062881


Nice. Does Tools work?

What do you mean by Tools?

It would be so much simpler if there was an integration with HA. Just point and click :slight_smile:
I’ll investigate the REST API, but it’s a deal breaker for so many … (not having an easy, clickable way) :slight_smile:

Thank you for sharing :slight_smile:

I meant function calling. Is it able to command HA?

Perplexity is not managing the home.
The box is powered by Gemini Flash 2.5, which manages the home using Google AI: providing the temperature from sensors, turning lights on/off, etc.


Hi!
Thanks for sharing; however, some time ago I decided to move entirely to HA, so I don’t have a Docker engine. I wonder if/how I might run LiteLLM as an add-on.

I found a new way to configure Perplexity without LiteLLM.
Use this plugin: GitHub - jekalmin/extended_openai_conversation: Home Assistant custom component of conversation agent. It uses OpenAI to control your devices.
Configure a new service with:

  • LLM Provider: OpenAI
  • Base URL: https://api.perplexity.ai
  • API Key: your-api-key
  • model: sonar or sonar-pro
  • Choose the API to expose to the LLM: No control
  • Custom LLM API Agents: select only “Enable LLM Agent”
  • I’m using only instructions_prompt
  • the other prompts cannot be empty, so I used “.”

This setup is for chat and it’s working. I did not test whether Sonar is able to control “Assist”.

Thank you. I was having problems with LiteLLM in the voice section. I checked this plugin, configured it with Perplexity as the agent, and it works both for controlling devices and as an AI conversation agent. To differentiate between local and AI, I set up two wake words in Home Assistant Voice: OK Nabu for local and Hey MyCroft for the AI.


Does this method still work? I just generated an API key on Perplexity and tried to configure the service, but I can’t seem to get it to work. Do I need to specify more info?

Use my instructions above: Perplexity as an extension of Voice Assistant capabilities - #9 by mulat

I tried to follow your instructions but I’m getting an error message when I try to chat with the assistant. Can you explain some of the steps like I’m 5? I’d really appreciate that. lol

First I had to set up an API group in Perplexity in order to get an API key.

I installed the custom HACS integration extended_openai_conversation

When adding a new entry, I get the pic that I posted above.
API key is fine
Base URL is fine
API version? I’m guessing sonar or sonar-pro.
Do you check the “ignore auth” box?

After that, I can configure the entry; there are a bunch of options.

When I add a new assistant, sending any message results in an error.

Looks like something has changed in Perplexity. I get the following error from litellm:
litellm.exceptions.BadRequestError: litellm.BadRequestError: PerplexityException - After the (optional) system message(s), user or tool message(s) should alternate with assistant message(s).. Received Model Group=sonar
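The error says Perplexity requires user and assistant messages to strictly alternate after the (optional) system message(s). If you are experimenting with the API outside LiteLLM, one workaround is to merge consecutive same-role messages before sending. This is a hedged sketch of that idea, not an official fix:

```python
def enforce_alternation(messages: list[dict]) -> list[dict]:
    """Merge consecutive non-system messages with the same role so that,
    after any leading system messages, user and assistant alternate."""
    merged: list[dict] = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"] and msg["role"] != "system":
            # Fold this message into the previous one of the same role.
            merged[-1] = {
                "role": msg["role"],
                "content": merged[-1]["content"] + "\n" + msg["content"],
            }
        else:
            merged.append(dict(msg))
    return merged
```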

@baloo2 @Drew514 are you using the new way of configuring it? Perplexity as an extension of Voice Assistant capabilities - #9 by mulat
I’m not able to edit the main post.

what about this?


Hi,
has anyone gotten it to work with Assist?
extended_openai_conversation works fine for informational purposes. It can read my entities, but it always says it has switched a device on or off while nothing happens. Pekulll/Perplexity-Assistant works once but has the wrong status afterwards.
E.g.: you switch on a light → works
You try to switch it off → it says “it is already off”
It has a refresh interval of 3600 seconds in the settings - perhaps that is the issue (no live updates).

Thanks,
Stefan

Edit: I think I got it, but I’m stuck now. I used AI to write up the summary :wink:

I tested two different custom integrations for using Perplexity with Home Assistant: PerplexityAssistant and Extended OpenAI Conversation.

PerplexityAssistant

PerplexityAssistant can successfully call real Home Assistant services, so a command like “turn off Iris bedroom” does trigger light.turn_off on light.iris_schlafzimmer. However, there is a major limitation: it does not update or re-read the entity state after the first switch. That means:

  • First command works (e.g., light is turned off).
  • Because the integration doesn’t correctly sync or reconsider the new state, the AI model no longer has a reliable view of the current state.
  • As a result, it often refuses to perform the opposite action (e.g., turning the light back on) or makes wrong assumptions, since it thinks the state is already correct.

So PerplexityAssistant can switch once, but without proper state feedback it is not robust for repeated toggling or more complex stateful logic.
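For anyone debugging the stale-state problem: the current state can always be re-read from Home Assistant’s REST API (GET /api/states/&lt;entity_id&gt;), so an integration could refresh before deciding whether to act. A hedged sketch under that assumption (the helper names are mine):

```python
import json
import urllib.request

def fetch_state(base_url: str, token: str, entity_id: str) -> str:
    """Re-read an entity's live state from the Home Assistant REST API."""
    req = urllib.request.Request(
        f"{base_url}/api/states/{entity_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["state"]

def should_toggle(current_state: str, requested: str) -> bool:
    """Only call a service when the requested state differs from the live one."""
    return current_state != requested
```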

Extended OpenAI Conversation + Perplexity

Extended OpenAI Conversation is built around OpenAI-style function calling (tools / tool_calls). It expects the model to emit actual tool calls that Home Assistant can directly translate into service calls. Perplexity’s Sonar models (including sonar-pro) do not implement this function-calling protocol. They can return structured JSON / JSON-schema or even YAML snippets, but they do not produce true tool_calls in the way OpenAI’s function calling guide describes.

Because of that:

  • With Perplexity + Extended OpenAI Conversation you only see JSON/YAML suggestions in the assistant’s reply.
  • Home Assistant never treats these as executable tool calls, so no services are actually run, regardless of whether “Use Tools” is enabled.
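You can see the difference programmatically: an OpenAI-style function-calling response puts tool invocations in a dedicated tool_calls field of the message, while Sonar only returns text content. A hedged sketch of the check (the response shapes follow the OpenAI chat completions format; the entity and service names are made up):

```python
import json

def get_tool_calls(response: dict) -> list:
    """Return OpenAI-style tool calls from a chat completion, or [] if the
    model only answered with plain text (as Perplexity's Sonar models do)."""
    message = response["choices"][0]["message"]
    return message.get("tool_calls") or []

# An OpenAI-style response with a real, executable tool call:
openai_style = {"choices": [{"message": {
    "role": "assistant",
    "content": None,
    "tool_calls": [{"id": "call_1", "type": "function",
                    "function": {"name": "light_turn_off",
                                 "arguments": json.dumps({"entity_id": "light.iris"})}}],
}}]}

# A Sonar-style response: the "call" is just JSON inside the text content,
# which Home Assistant will never execute:
sonar_style = {"choices": [{"message": {
    "role": "assistant",
    "content": '{"service": "light.turn_off", "entity_id": "light.iris"}',
}}]}
```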

Practical consequence

  • PerplexityAssistant:
    • Can execute a service, but lacks reliable state synchronization, causing problems after the first switch.
  • Extended OpenAI Conversation with Perplexity (Sonar / Sonar Pro):
    • Can’t perform real tool calling, since Perplexity’s API does not expose OpenAI-compatible tool_calls.
    • Works fine for Q&A and structured outputs, but not for automatic Home Assistant service execution.

Looks like PerplexityAssistant dramatically improved with the latest release (a few hours ago), and I’m now able to use chat and voice and control my home.
