I have not ceded control of my home to LLMs just yet, but I would like to use ChatGPT to deliver notifications with personality and variety.
To make this simple and repeatable, I've created a script that takes tone, subject, and length as fields and inserts them into a ChatGPT prompt. I send the prompt to ChatGPT with the conversation.process
action and then write the response to an input_text
helper, which I use in my notifications.
This way I can call the script from an automation and quickly generate variety.
It works most of the time. But perhaps I'm going about this the wrong way? I'd prefer not to have to push the response into a helper before using it. I believe response variables would help here, but I have not had much success with them (see the sketch after the script below). Is this how the conversation.process
action is meant to be used? Is there a better way to inject personality with LLMs? Thank you!
Script below:
alias: Get ChatGPT response
sequence:
  - data:
      agent_id: ----
      text: >-
        Prepare a {{tone}} notification about "{{subject}}" using no more
        than {{length}} words and not using any emoji.
    response_variable: chatgpt
    alias: ChatGPT Prompt
    enabled: true
    action: conversation.process
  - metadata: {}
    data:
      value: "{{chatgpt.response.speech.plain.speech | trim | replace('\"','')}}"
    target:
      entity_id: input_text.chatgpt_response
    enabled: true
    action: input_text.set_value
fields:
  tone:
    selector:
      text: null
    name: tone
    description: Describes the tone you want for the ChatGPT request.
    required: true
  subject:
    selector:
      text: null
    name: subject
    description: What is the notification about?
    required: true
  length:
    selector:
      text: null
    name: length
    description: How many words do you want to limit the response to?
    default: "15"
    required: false
icon: mdi:robot-excited
mode: single
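
For reference, here is roughly what I have been trying with response variables: the script returns the cleaned-up text through a stop action, and the calling automation captures it with its own response_variable. This is only a sketch, not a working version. The notify service and the field values are placeholders.

alias: Get ChatGPT response
sequence:
  - data:
      agent_id: ----
      text: >-
        Prepare a {{tone}} notification about "{{subject}}" using no more
        than {{length}} words and not using any emoji.
    response_variable: chatgpt
    action: conversation.process
  # Package the cleaned-up text as a mapping so the script can return it
  - variables:
      reply:
        text: "{{chatgpt.response.speech.plain.speech | trim | replace('\"','')}}"
  # End the script and hand the mapping back to the caller
  - stop: Returning ChatGPT response
    response_variable: reply

And the automation would call the script directly (as script.get_chatgpt_response, not script.turn_on) so it can pick up the response and use it in the notification without the helper:

# In the automation's actions (the notify target is a placeholder)
- action: script.get_chatgpt_response
  data:
    tone: cheerful
    subject: the dishwasher has finished
    length: 15
  response_variable: result
- action: notify.mobile_app_my_phone
  data:
    message: "{{ result.text }}"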