I started experimenting with LLM Conversation Agents in Home Assistant recently and fell in love with the Extended OpenAI Conversation custom component. However, I quickly found myself trying to cram too much into a single agent and decided to try and implement multiple specialized agents instead.
I manage this by starting with a Dispatcher Agent that is used by the voice pipeline. This agent lets me maintain a single point of contact: it decides which specialized agent to pass the query on to, and returns a result based on that agent's response.
I then set up separate agents for each specialized task. To be clear, an "agent" here is an implementation of the Extended OpenAI Conversation integration. I add a new integration for each agent. This lets me customize the model, prompt template, and functions of each integration/agent.
The Dispatcher Agent has the following prompt template:
I want you to act as a smart AI manager. You take user queries and pass them on to the appropriate AI agent to process. You provide responses based on what these agents tell you.
Select the ID of the relevant agent when using the call_agent_by_id tool:
Agent, agent_id
Meteorologist Agent, ABC123
Smart Home Agent, ABC124
To Do List Agent, ABC125
And the following function:
- spec:
    name: call_agent_by_id
    description: Pass user query to relevant AI Agent
    parameters:
      type: object
      properties:
        query:
          type: string
          description: The user's query
        agent_id:
          type: string
          description: ID of the AI Agent
          enum:
            - ABC123
            - ABC124
            - ABC125
      required:
        - query
        - agent_id
  function:
    type: composite
    sequence:
      - type: script
        sequence:
          - service: conversation.process
            data:
              text: "{{ query }}"
              agent_id: "{{ agent_id }}"
            response_variable: _function_result
        response_variable: res
      - type: template
        value_template: >-
          {% set res = res.response.speech.plain.speech %}
          {{ {'agent_response': res} }}
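Before wiring the dispatcher into the voice pipeline, you can exercise it directly from Developer Tools → Services. A minimal sketch of that call, assuming `DISPATCH01` is a placeholder for whatever ID Home Assistant assigned to your dispatcher:

```yaml
service: conversation.process
data:
  # Placeholder agent ID; substitute your Dispatcher Agent's actual ID
  agent_id: DISPATCH01
  text: "What's the UV index right now?"
```

The response pane shows the same speech text the voice pipeline would read back, which makes it easy to check that queries are being routed to the right specialized agent.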
One of the big benefits of this method is that you can customize what data is passed to each agent in its prompt template. That lets you avoid exposing your whole home to it and spamming it with needless info. I take advantage of this with my Meteorologist Agent. Its prompt template looks like this:
I want you to act as a meteorologist. Provide brief, one or two sentence responses. You know the following about the current conditions and upcoming forecast.
The current time is: {{now()}}
The current conditions are: {{ states.sensor.mycity_condition.state }}{% if states.sensor.mycity_warnings.state|int > 0 %}
There is a weather warning in effect: {{ state_attr("sensor.mycity_warnings", "alert_1") }}
{% endif %}{% if not states('sensor.mycity_chance_of_precip') == 'unknown' %}
The chance of precipitation is: {{ states.sensor.mycity_chance_of_precip.state }}{% endif %}
The current weather data is:
Property, Value
Temperature, {{ states.sensor.mycity_temperature.state }}
Humidity, {{ states.sensor.mycity_humidity.state }}
Humidex, {{ states.sensor.mycity_humidex.state }}
Wind Gust, {{ states.sensor.mycity_wind_gust.state }}
Wind Speed, {{ states.sensor.mycity_wind_speed.state }}
UV Index, {{ states.sensor.mycity_uv_index.state }}
I also gave it functions, simply because of how the weather entity and the get_forecasts service work:
- spec:
    name: get_hourly_forecast
    description: Get an hourly weather forecast
  function:
    type: script
    sequence:
      - service: weather.get_forecasts
        metadata: {}
        data:
          type: hourly
        target:
          entity_id: weather.mycity
        response_variable: _function_result
- spec:
    name: get_daily_forecast
    description: Get a daily weather forecast
  function:
    type: script
    sequence:
      - service: weather.get_forecasts
        metadata: {}
        data:
          type: daily
        target:
          entity_id: weather.mycity
        response_variable: _function_result
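These functions are needed because forecasts are no longer stored as attributes on the weather entity, so the agent has to fetch them on demand. The `_function_result` the agent receives back is keyed by entity ID and contains a list of forecast entries, roughly shaped like this (values are illustrative, and the exact fields depend on your weather integration):

```yaml
weather.mycity:
  forecast:
    - datetime: "2024-05-01T16:00:00+00:00"
      condition: partlycloudy
      temperature: 18.4
      precipitation_probability: 20
```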
I think the rest is self-explanatory from here. My Smart Home Agent is just the default config for the Extended OpenAI Conversation integration. The To Do List Agent is just the functions from the shopping_list example.
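For completeness, here is a minimal sketch of the shape such a to-do function can take, assuming a `todo.shopping_list` entity; the function name and entity ID are placeholders, and the actual specs in the shopping_list example may differ:

```yaml
- spec:
    name: add_item_to_shopping_list
    description: Add an item to the shopping list
    parameters:
      type: object
      properties:
        item:
          type: string
          description: The item to add
      required:
        - item
  function:
    type: script
    sequence:
      # todo.add_item is the built-in Home Assistant service for to-do entities
      - service: todo.add_item
        data:
          item: "{{ item }}"
        target:
          entity_id: todo.shopping_list
```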