Great work here! Awesome addition to HA
Hey all,
this project will no longer be maintained. The responding services feature in Home Assistant’s 2023.7 release enables me to use the OpenAI Conversation integration for all my personal use cases.
Thanks to everyone who responded with issues, pull requests and on the Home Assistant community thread.
I’ll show here how you can modify Example 2: Single Automation to use the new native feature. It’s overall much simpler than before, which is nice. The only tricky thing is figuring out the agent_id:
1. Go to Developer Tools → Services → conversation.process.
2. In UI mode, select your OpenAI conversation agent from the Agent drop-down, then switch to YAML mode to see the id.
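Switching to YAML mode shows the raw service call; the agent_id field is what you need to copy (the value below is a placeholder):

```yaml
service: conversation.process
data:
  agent_id: <copied_agent_id>
  text: hello
```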
alias: Say With ChatGPT
trigger:
  - platform: event
    event_type: your_trigger_event_type
action:
  - service: conversation.process
    data:
      agent_id: <agent_id goes here>
      text: Write a happy, one line, spoken language reminder for the 'cleaning' calendar event.
    response_variable: chatgpt
  - service: notify.mobile_app_<your_device_id_here>
    data:
      message: TTS
      data:
        tts_text: "{{chatgpt.response.speech.plain.speech | trim | replace('\"','')}}"
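For clarity, that filter chain just trims whitespace and strips double quotes so the TTS engine doesn’t read them aloud. A plain-Python equivalent of what the Jinja filters do (the sample string is illustrative):

```python
def clean_speech(text: str) -> str:
    # Equivalent of the Jinja filters: trim, then replace('"', '')
    return text.strip().replace('"', '')

print(clean_speech('  "Time to clean!"  '))  # → Time to clean!
```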
I just keep getting “Sorry, I didn’t understand” responses from the AI when using your suggestion here.
I have set up the OpenAI integration with the default settings. Have I missed something?
Sounds like you maybe haven’t configured the correct conversation agent?
Also, for my use cases it works fine to replace the system message (“prompt template” in the OpenAI config), which is sent with every request, with a single space (' '). That saves on tokens.
I had your old app working great; now I’m trying to use your new example to speak through my Google mini with the built-in integration. It runs with no errors, but it doesn’t say anything. Any ideas?
Thanks
alias: Say With ChatGPT
trigger: []
action:
  - service: conversation.process
    data:
      agent_id: (hidden)
      text: Write a happy, one line, spoken language reminder to feed the dogs.
    response_variable: chatgpt
  - service: tts.google_translate_say
    data:
      language: en
      entity_id: media_player.living_room_speaker
      message: TTS
      data:
        tts_text: "{{chatgpt.response.speech.plain.speech | trim | replace('\"','')}}"
Are you running 2023.7.2 (or higher)? I ran into nasty issues with 2023.7.0 that were fixed in .2.
Also, a general tip is to run both steps separately in the developer tools to see whether each action works as intended.
Thanks. I haven’t used the developer tools before, so I’m not sure what to look at in there. But I did grab the traces of the automation running. Also, this is on Yellow hardware.
Looks like the chatgpt part is working and the data is passed to the next action, so that’s great!
So I guess there’s an issue with the TTS action parameters. See if you can get the TTS to work on its own in the Services tab of the developer tools, then use those settings in the automation (but with the chatgpt output). If you get stuck, searching for a developer-tools tutorial should turn up more detailed guides.
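For example, something like this in Developer Tools → Services should produce speech by itself (the speaker entity_id here is an assumption; use your own):

```yaml
service: tts.google_translate_say
data:
  entity_id: media_player.living_room_speaker
  message: This is a test
```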
Thanks for the tip. Easy when you know where to look/test!
For anyone else with this issue, here is the working code:
alias: Say With ChatGPT
trigger: []
action:
  - service: conversation.process
    data:
      agent_id: (hidden)
      text: Write a random happy, one line, spoken language reminder to feed the dogs.
    response_variable: chatgpt
  - service: tts.google_translate_say
    data:
      entity_id: media_player.living_room_speaker
      message: "{{chatgpt.response.speech.plain.speech | trim | replace('\"','')}}"
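For context, the response_variable from conversation.process holds a nested structure roughly like this (values illustrative), which is why the template reads response.speech.plain.speech:

```yaml
response:
  response_type: action_done
  speech:
    plain:
      speech: "Time to feed the dogs!"
conversation_id: null
```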
Question for you, @jjbankert
I’m writing an app for Hubitat to use ChatGPT like you did here. I have it connecting, but I’m only getting gibberish back, like it’s quoting parts of a book. Could you look at this code snippet and see if I’m hitting the wrong endpoint or missing something? Thanks for looking!
def message = "write a random happy reminder to feed the dogs"
def apiUrl = "https://api.openai.com/v1/completions"
def headers = [
    "Authorization": "Bearer ${apiKey}",
    "Content-Type": "application/json"
]
def requestBody = [
    model: "davinci",
    prompt: message,
    max_tokens: 150,
    top_p: 1,
    temperature: 1,
    stop: "\n"
]
httpPostJson(uri: apiUrl, body: requestBody, headers: headers) { response ->
    log.debug response.data
}
A bit late but hopefully useful to someone.
The “davinci” model is GPT 3.0, IIRC, so you might want to use a newer one. Older GPT models generate text that is grammatically correct but nonsensical in content.
Maybe also check the temperature value: I got GPT 3.5 to return random PDF-like text when I cranked the temperature up.
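For reference, a sketch of the same request pointed at the chat completions endpoint with a newer model instead (the model name, response path, and reuse of the earlier headers map are assumptions; I haven’t tested this on Hubitat):

```groovy
def apiUrl = "https://api.openai.com/v1/chat/completions"
def requestBody = [
    model: "gpt-3.5-turbo",
    messages: [
        [role: "user", content: "write a random happy reminder to feed the dogs"]
    ],
    max_tokens: 150,
    temperature: 1
]
// Same headers map as before (Authorization + Content-Type)
httpPostJson(uri: apiUrl, body: requestBody, headers: headers) { response ->
    // Chat-style responses put the text under choices[0].message.content
    log.debug response.data.choices[0].message.content
}
```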