HA personality with LLM

I have not ceded control of my home to LLMs just yet but I would like to use ChatGPT to deliver notifications with personality and variety.

To make this simple and repeatable, I’ve created a script whose fields (tone, subject, and length) are injected into a ChatGPT prompt. Using the conversation.process action, I send this to ChatGPT and then write the response to an input_text helper, which I use in the notifications.

This way I call this script in an automation and can quickly generate variety.

It works most of the time. But perhaps I’m going about this the wrong way? I’d prefer not to have to send this to a helper before using it. I believe response variables would help here, but I have not had much success with them. Is this how the conversation.process action is meant to be used? Is there a better way of injecting personality with LLMs? Thank you!

Script below:

alias: Get ChatGPT response
sequence:
  - data:
      agent_id: ----
      text: >-
        Prepare a {{tone}} notification about "{{subject}}" using no more
        than {{length}} words and not using any emoji.
    response_variable: chatgpt
    alias: ChatGPT Prompt
    enabled: true
    action: conversation.process
  - metadata: {}
    data:
      value: "{{chatgpt.response.speech.plain.speech | trim | replace('\\\"','')}}"
    target:
      entity_id: input_text.chatgpt_response
    enabled: true
    action: input_text.set_value
fields:
  tone:
    selector:
      text: null
    name: tone
    description: Describes the tone you want for the ChatGPT request.
    required: true
  subject:
    selector:
      text: null
    name: subject
    description: What is the notification about?
    required: true
  length:
    selector:
      text: null
    name: length
    description: How many words do you want to limit the response to?
    default: "15"
    required: false
icon: mdi:robot-excited
mode: single
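For reference, this is roughly how the script might be called from an automation and the helper value used in a notification. The entity names `script.get_chatgpt_response` and `notify.mobile_app_my_phone` are assumptions for illustration; substitute your own:

```yaml
# Hypothetical automation action block; the script name and notify
# service are assumed, and the input_text entity matches the script above.
actions:
  - action: script.get_chatgpt_response
    data:
      tone: sarcastic
      subject: the dishwasher finished its cycle
      length: "15"
  - action: notify.mobile_app_my_phone
    data:
      message: "{{ states('input_text.chatgpt_response') }}"
```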

I don’t think I’ll be much help, but you have a response variable gpt_response that isn’t used, yet you reference a variable named chatgpt (as in chatgpt.response.speech.plain.speech) in a template, so I’m scratching my head over whether this actually works (does it?).

Anyway, I’ve been playing around with these actions myself, and the following is how I used a response variable with ChatGPT to send the response in a persistent notification:

actions:
  - action: conversation.process
    data:
      text: YOUR TEXT
      language: en
      agent_id: conversation.chatgpt
      conversation_id: "1"
    response_variable: answer
  - action: notify.persistent_notification
    metadata: {}
    data:
      message: >-
        Response: {{ answer['response']['speech']['plain']['speech'] }}
      title: Title of Notification

Ahh, you’re right. I was trying to beat it up and had changed the variable for some testing. I’ve changed it back to chatgpt.

My challenge seems to be persisting the variable from a script to an automation.

Just to close the loop here, the solution was simple, if a bit unintuitive.

I needed a Stop action in the script to pass the response variable to automations.

alias: Get ChatGPT response
sequence:
  - data:
      agent_id: ----
      text: >-
        Prepare a {{tone}} push notification about "{{subject}}" using no more
        than {{length}} words and not using any emoji.
    response_variable: chatgpt
    alias: ChatGPT Prompt
    enabled: true
    action: conversation.process
  - stop: returning variable
    response_variable: chatgpt
fields:
  tone:
    selector:
      text: null
    name: tone
    description: Describes the tone you want for the ChatGPT request.
    required: true
  subject:
    selector:
      text: null
    name: subject
    description: What is the notification about?
    required: true
  length:
    selector:
      text: null
    name: length
    description: How many words do you want to limit the response to?
    default: "15"
    required: false
icon: mdi:robot-excited
mode: single
description: ""