Hey all,
Quick intro
I’m new to the Home Assistant community forum, but I’ve been using HA for years.
I received the Home Assistant Voice Preview Edition yesterday and started putting together some custom intents (custom responses). I found there isn’t really a quick setup guide for intents, or many working examples, so I thought I’d start this topic.
Apologies if a topic already exists - I did have a quick look but I didn’t see any similar topics.
The first thing I noticed is that custom responses are split into two parts: “intents” are the sentences the assistant listens for when you interact with it, so they’re essentially triggers; “intent_script” defines the actual customised responses or actions your assistant will perform.
First step, setup “intents” (custom triggers)
Create a custom_sentences folder in your config folder (where your configuration.yaml lives), then, inside it, create a folder named after the language you intend to use, e.g. en, de, pl. This is where you’ll store your custom intents YAML.
Inside that language folder, create a YAML file with whatever name you want; you can have multiple files or just one big file, see the example below.
This file holds your custom intents - the sentences your assistant listens for to trigger your custom actions or responses.
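For example, with English (en) as the language, the layout ends up looking like this. The file name sentences.yaml and the HelloWorld intent are just placeholders for illustration, use whatever names you like:

```yaml
# config/custom_sentences/en/sentences.yaml
language: "en"
intents:
  HelloWorld:          # intent name - must match an intent_script entry later
    data:
      - sentences:
          - "say hello"
```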
Here are some of my working “intent” examples:
language: "en"
intents:
  YearOfVoice:
    data:
      - sentences:
          - "how is the year of voice going"
  SetVolumePercent:
    data:
      - sentences:
          - "{media_player} (volume | vol.) [to] {volume_percent} (% | percent)"
          - "(set|change) {media_player} (volume | vol.) to {volume_percent} (% | percent)"
          - "(set|change) [the] (volume | vol.) (in | for) {media_player} to {volume_percent} (% | percent)"
  SetVolumeStep:
    data:
      - sentences:
          - "{media_player} (volume | vol.) [level] [to] {volume_step}"
          - "(set|change) {media_player} (volume | vol.) [level] to {volume_step}"
          - "(set|change) [the] (volume | vol.) [level] (in | for) {media_player} to {volume_step}"
  CustomGetWeather:
    data:
      - sentences:
          - "(what is | what's | whats) {ord_day} weather [going to be | forecast] [like]"
          - "(what is | what's | whats) the weather [going to be | forecast] [like]"
          - "(what is | what's | whats) the weather [looking | going to be | forecast] [like | for] {ord_day}"
          - "(hows | how's | how is) {ord_day} weather [looking | going to be | forecast] [like]"
          - "(hows | how's | how is) the weather [looking | going to be | forecast] [like]"
          - "(hows | how's | how is) the weather [looking | going to be | forecast] [like | for] {ord_day}"
          - "{ord_day} weather (report | forecast)"
  CustomGetRain:
    data:
      - sentences:
          - "is it (going to | gonna) rain"
          - "(can it | could it | is it | will it) [going to | gonna] rain {ord_day}"
  CustomGetSolarPower:
    data:
      - sentences:
          - "(what is | what's | whats) the [current] {solar_query} [currently | right now | at the moment]"
          - "how much [solar] power (are we | is being) (generating | generated | created | creating | made | making) [currently | right now | at the moment]"
  CustomGetLoadPower:
    data:
      - sentences:
          - "(what is | what's | whats) the [current] {load_query} [use | usage] [currently | right now | at the moment]"
          - "how much [load] power (are we | is being) (using | used | drawing | drawn) [currently | right now | at the moment]"
  CustomGetGridPower:
    data:
      - sentences:
          - "(what is | what's | whats) the [current] {grid_query} (use | usage) [currently | right now | at the moment]"
          - "how much grid power (are we | is being) (using | used | drawing | drawn) [currently | right now | at the moment]"
  CustomGetBatteryStatus:
    data:
      - sentences:
          - "(what is | what's | whats) the [current] {battery_query} [currently | right now | at the moment]"
          - "(hows | how's | how is | how does) the {battery_query} [look | looking | doing | faring | getting on | holding up] [currently | right now | at the moment]"
          - "how much battery (charge | power) (is remaining | is left | remains) [currently | right now | at the moment]"
lists:
  media_player:
    values:
      - in: "dan's room"
        out: "media_player.dans_bedroom_voice_assistant"
      - in: "dans room"
        out: "media_player.dans_bedroom_voice_assistant"
  volume_percent:
    range:
      from: 0
      to: 100
  volume_step:
    range:
      from: 0
      to: 10
  ord_day:
    values:
      - "today"
      - "today's"
      - "todays"
      - "tomorrow"
      - "tomorrow's"
      - "tomorrows"
  solar_query:
    values:
      - "solar power"
      - "solar generation"
      - "solar power generation"
      - "power generation"
  load_query:
    values:
      - "power"
      - "load"
      - "load power"
  grid_query:
    values:
      - "grid"
      - "grid power"
  battery_query:
    values:
      - "battery"
      - "battery charge"
      - "battery level"
      - "battery charge level"
      - "battery status"
      - "battery charge status"
      - "battery state"
      - "battery state of charge"
      - "battery charge state"
Now you have some working custom triggers that your assistant will listen out for.
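A quick note on the sentence syntax used above, which is Home Assistant's standard sentence template syntax: (a | b) matches either alternative, [word] is optional, and {name} matches a value from one of your lists and is passed to the intent_script as a variable. A minimal annotated sketch (the ExampleIntent name and "lamp" value are hypothetical):

```yaml
intents:
  ExampleIntent:
    data:
      - sentences:
          # matches "turn on the lamp", "switch on lamp", etc.
          - "(turn | switch) on [the] {thing}"
lists:
  thing:
    values:
      - "lamp"
```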
Second step, setup “intent_script” (custom actions and responses)
These go in your configuration.yaml and are linked to your custom intents by the intent name you set, e.g. YearOfVoice, CustomGetWeather, etc.
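At its simplest, an intent_script entry only needs a speech response. Here's a minimal sketch (the HelloWorld name is a placeholder - it just has to match an intent defined in your custom sentences):

```yaml
# configuration.yaml
intent_script:
  HelloWorld:          # must match the intent name in custom_sentences
    speech:
      text: "Hello! Your custom intent fired."
```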
Here are my working “intent_script” examples.
intent_script:
  YearOfVoice:
    speech:
      text: "Great! We're at over 40 languages and counting."
  SetVolumePercent:
    action:
      service: "media_player.volume_set"
      data:
        entity_id: "{{ media_player }}"
        volume_level: "{{ volume_percent / 100.0 }}"
    speech:
      text: "Volume {{ volume_percent|int }} percent"
  SetVolumeStep:
    action:
      service: "media_player.volume_set"
      data:
        entity_id: "{{ media_player }}"
        volume_level: "{{ volume_step / 10.0 }}"
    speech:
      text: "Volume {{ volume_step|int }}"
  CustomGetWeather:
    action:
      - service: weather.get_forecasts
        data:
          type: daily
        target:
          entity_id: weather.openweathermap
        response_variable: w
      - stop: ""
        response_variable: w
    speech:
      text: |
        {% set day = 1 if (ord_day is search('tomorrow') or now().hour > 21) else 0 %}
        {% set next = action_response['weather.openweathermap'].forecast[day] %}
        The forecast for {{ 'tomorrow' if day else 'today' }} is a high of
        {{ next.temperature }} degrees and {{ next.condition }} with a
        {{ next.precipitation_probability }} percent chance of rain.
  CustomGetRain:
    action:
      - service: weather.get_forecasts
        data:
          type: daily
        target:
          entity_id: weather.openweathermap
        response_variable: w
      - stop: ""
        response_variable: w
    speech:
      text: |
        {% set day = 1 if (ord_day is search('tomorrow') or now().hour > 21) else 0 %}
        {% set next = action_response['weather.openweathermap'].forecast[day] %}
        There is a {{ next.precipitation_probability }} percent chance of rain {{ 'tomorrow' if day else 'today' }}.
  CustomGetSolarPower:
    speech:
      text: |
        {% set response_lead = 'Solar generation' if (solar_query is search('solar generation')) else 'Solar power generation' %}
        {% set response_lead = 'Solar power' if (solar_query is search('solar power')) else response_lead %}
        {{ response_lead }} is currently {{ '{0:,.0f}'.format(states.sensor.pv_power.state | int) }} watts.
  CustomGetLoadPower:
    speech:
      text: "Load power usage is currently {{ '{0:,.0f}'.format(states.sensor.load_power.state | int) }} watts."
  CustomGetGridPower:
    speech:
      text: "Grid power usage is currently {{ '{0:,.0f}'.format(states.sensor.grid_power.state | int) }} watts."
  CustomGetBatteryStatus:
    speech:
      text: "The battery is {{ states.sensor.battery_state_of_charge.state }}%."
Here are examples of output from my custom responses.
Q: is it going to rain
A: There is a 0 percent chance of rain today.
Q: hows the weather looking tomorrow
A: The forecast for tomorrow is a high of 15.8 degrees and sunny with a 0 percent chance of rain.
Q: whats the solar generation currently
A: Solar generation is currently 2,459 watts.
Q: how much power are we using
A: Load power usage is currently 1,232 watts.
Q: whats the battery status
A: The battery is 55%.
Q: whats the grid usage
A: Grid power usage is currently 0 watts.
They work great with the voice assistant and feel natural, though I suppose that depends on how you speak to yours.
Don’t forget to reload “Intent Script” under Developer Tools → YAML configuration reloading after making changes.
I hope this topic helps some people get started with their custom responses using the voice assistant. Please feel free to add more cool examples as I’m sure I’m only scratching the surface.
My next project is to get a reminder / timer / alarm system working, as I use my voice assistant when cooking and the like. It would also be nice to control the voice volume via the assistant without having to tell the voice assistant where it is.