I have a folder named custom_sentences/en with 3 YAML files in it. I have another folder named conversations with matching files in it; the most recent is a file named test.yaml. And I have a Home Assistant Voice Preview Edition, configured to use an Assist pipeline named MyAssistant.
Nathan, thanks for the suggestion. Right after I posted, I suspected there might be an issue with the conversation portion. So I uninstalled the OpenAI Conversation integration, restarted, reinstalled it, and restarted again. The error changed to “Error talking to OpenAI”. I tried both a voice run and a text run in the debug tool, same error. Here is the output for the text command attempt:
“Error talking to OpenAI” is the generic OpenAI error. It means the request isn’t even making it that far.
This happens when the OpenAI integration can’t understand your prompt or your intent scripts.
(Busted prompt: look for non-JSON-safe characters… Busted intent script: I just kill them all and add them back one at a time until I find the broken one, then debug that.)
Read: with that error, the problem isn’t your script. It’s either your intents or your prompt.
Honestly, to get back from where you are… collapse or eliminate the sentences and the intent scripts (I’d go back to basics: eliminate them entirely and start over. It’s hard to debug this one, trust me, it’s actually one of the hardest; you’re almost flying blind.) and go back to something like having it set a text field.
When you’ve eliminated them and can successfully get a basic prompt to fire through OpenAI again, then you can move on to the next step.
When you get there, I recommend using a script rather than an intent script for your first one. They’re WAY less complicated to set up and easier to debug through traces.
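For example, a “set a text field” script could look something like this (a minimal sketch only; the script name, the field name, and input_text.scratch_note are placeholders invented for illustration, not anything from this thread):

# scripts.yaml — hypothetical minimal script the conversation agent can call
set_scratch_note:
  alias: "Set scratch note"
  description: "Writes the supplied text into input_text.scratch_note."
  fields:
    note:
      description: "The text to store"
      example: "hello world"
  sequence:
    - service: input_text.set_value
      target:
        entity_id: input_text.scratch_note
      data:
        value: "{{ note }}"

The description and the field descriptions are what the agent gets to see, so that is where the explaining happens; and because it is a plain script, every run leaves a trace you can step through.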
Nathan, thanks for your help. I did what you suggested and put everything in configuration.yaml, and I had to edit out some of my code lines to get it to work. It went down to this:
conversation:
  intents:
    test_command:
      - "test command"
      - "do a test"

intent_script:
  test_command:
    speech:
      text: "Test command received!"
    action:
      - service: notify.persistent_notification
        data:
          message: "The test command was triggered."
just to get the file to pass the configuration checker. And then I had to switch the conversation agent from ChatGPT to Home Assistant. Once I did that, it works as it is supposed to.
Why doesn’t it work with ChatGPT?
So now I need some help. I want to split it out like I had it; the reason being I would like to have multiple conversation files and matching intent files. I want a pair for device control commands, another for general conversation (briefings on the status of things; like at bedtime I want to know the status of the doors, outdoor lights, alarm setting, etc.), another for prayers and blessings, etc.
So what is the process for doing that? What goes in the configuration.yaml file? What are the “header” line(s) for the conversation files, and do those files go in custom_sentences/en? And what about the intent_script files? Header?
The biggest thing to remember when splitting things up like you had earlier… Where you had intent_script: and then the include?
That intent_script: IS the key header.
Meaning only the contents that would be under that key go in the files the include refers to… So you leave the intent_script: entry, scoop out all of its contents, dump it in a file, then replace what you yoinked out with that include.
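In other words, something like this (a sketch; the intent_scripts/ folder name is just a placeholder, and the file body reuses the earlier test example):

# configuration.yaml — the key stays put, only its contents move out
intent_script: !include intent_scripts/test.yaml

# intent_scripts/test.yaml — no intent_script: header, just what used to sit under it
test_command:
  speech:
    text: "Test command received!"
  action:
    - service: notify.persistent_notification
      data:
        message: "The test command was triggered."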
I usually do one file at this point to make everything easy.
Then once that’s working you can change the include statement to grab all the files in that folder, then start duping that file and chopping it into pieces.
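Assuming the split files all live in one folder (the intent_scripts/ name is still a placeholder), that change is a single line in configuration.yaml:

# configuration.yaml — merge every *.yaml in the folder into the one intent_script: dictionary
intent_script: !include_dir_merge_named intent_scripts/

!include_dir_merge_named fits here because intent_script expects one dictionary of named intents, merged from however many files you end up splitting it into.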
Remember what I said about intent_script being a b** to troubleshoot? This is EXACTLY why.
When you suspect one is the issue you can just move it out now.
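(Side note on the custom-sentences half of the question: those files follow Home Assistant’s standard custom sentences layout, one folder per language under custom_sentences, each file starting with a language key. A sketch that reuses the earlier test phrases:)

# custom_sentences/en/test.yaml — standard custom sentences layout
language: "en"
intents:
  test_command:
    data:
      - sentences:
          - "test command"
          - "do a test"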
What’s not working? (And yes, we fix this before splitting anything; like I said, intent scripts are tied up with the conversation agent and one absolutely can break the other. So if it’s not working, you’re not fixed yet.)
Just wait until you get into multi agents and MCP and…
Split slowly. One step at a time. You’ll be fine.
Then go read Friday’s Party. It’ll help you understand how I decide whether something is an intent script or something else. (In most cases it’s something else, and it’s an early warning on tool sprawl; you’ll understand after reading Friday.)
Wow, my first thought was “What is he talking about, and what is Friday’s Party?” But a quick Google search found it. I just started reading and noticed one thing I wanted to comment on right away. I have already set up, and have running locally, my own AI. It is in its infancy, but it works. And it is VERY resource intensive.
I am a very senior guy with a lot of life behind me and a fixed budget ahead. So my hardware is much more modest - I just want to play and learn. I found this used on eBay: a Dell OptiPlex 7050 MT, Intel i7-7700, 16GB. I added two used Dell Nvidia Quadro M2000 4GB GDDR5 graphics cards (DisplayPort x4) and upgraded the RAM to 64GB. It works reasonably well.
I am about halfway through scanning your article and OMG, I am nowhere close to your skill level. If I live long enough maybe I can get part way there! For right now, I just want to be able to ask my AI a question and get an answer. BTW, mine is Jarvis - probably a reason why he is mine and yours is Friday!!
Yeah, I’m pushing to the absolute extreme of what’s possible for a reason. I know most won’t do what I’m doing. But you can absolutely use the principles in your build. Just keep remembering:
What’s my context?
What state data do I have?
What possible actions could I take?
If you’ve explained those well enough to the LLM, you’re gold. Everything else is an infrastructure issue.
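One way to picture “explaining it to the LLM” is to put those three things straight into the conversation agent’s prompt/instructions field (a rough sketch only; the entity names are invented, and it assumes your agent’s prompt field accepts Jinja templates, as the OpenAI Conversation integration’s prompt has):

You are the house assistant.
Context: it is {{ now().strftime('%H:%M') }} and the house is in {{ states('input_select.house_mode') }} mode.
State data: the front door is {{ states('binary_sensor.front_door') }} and the outdoor lights are {{ states('light.outdoor_lights') }}.
Possible actions: call only the scripts and intents exposed to you; otherwise just answer in plain text.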
I think I misunderstood what you were telling me to do. I thought that if I left intent_script: !include . . . . in the configuration.yaml file, then it shouldn’t also be in the test.yaml file? Anyway, I have something wrong.
This is the very first entry of intent_script.yaml:
(Notice what’s NOT in this file)
input_text_set_value:
  description: >
    "Set an 'entity_id' from the input_text domain to a specific 'VALUE'.
    example: >
    ```json
    {
      'name': 'input_text.ai_system_shared_temp_register',
      'value': "{'coop':{'chickens':3}}"
    }
    ```"
  parameters:
    name:  # name, required: the name of the input_text entity to set
    value: # value (text), required: the text to set (string, up to 255 characters, will truncate overflow)
  action:
    service: "input_text.set_value"
    data:
      entity_id: >
        {%- set dot = name.find('.') -%}
        {%- if dot == -1 -%}
          {%- set entity_id = 'input_text.' + slugify( name | replace(':','') ) -%}
        {%- else -%}
          {%- if name[0:11] == 'input_text.' -%}
            {%- set entity_id = name -%}
          {%- else -%}
            {%- set entity_id = '' -%}
          {%- endif -%}
        {%- endif -%}
        {{- entity_id -}}
      value: >
        {%- set value = value[0:255] | trim() -%}
        {{- value -}}
  speech:
    text: "{{ name }} set to {{ value[0:255] | trim() }}"
It worked fine (i.e., I got notifications both from texting “test command” and from saying “test command”) when all of the code was in configuration.yaml. But as soon as I split it out, I got no notification either way.