How to create multiple phrases to send at random to TTS?

Yeah, cool, but it’s not really much easier than my option. I mean, you still can’t edit the list via the UI, so a restart is likely required for any changes. I liked your idea of a text helper because it would mean editing from the frontend without any restart/reload required.
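As a rough sketch of that text-helper idea (the helper name `input_text.tts_phrases` and the semicolon separator are my assumptions, not from the thread): keep the phrases in a text helper, edit them from the frontend at any time, and pick one at random in the action:

```yaml
# Assumes a text helper named input_text.tts_phrases containing
# semicolon-separated phrases, e.g. "Welcome home;Look who's back"
# Note: input_text values are capped at 255 characters.
- service: tts.cloud_say
  data:
    entity_id: media_player.google_home_mini
    message: "{{ states('input_text.tts_phrases').split(';') | random }}"
```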

How can I make it not say the same option it said recently? For example, if it says “Halloween 1”, then don’t say it again for the next 10 minutes. Thanks!! :pray:
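One simple way to avoid immediate repeats (a sketch, untested; the helper `input_text.last_tts_phrase` is an assumption, and this only blocks the previous phrase rather than a full 10-minute window, which would additionally need a timestamp check):

```yaml
action:
  # Remember the last phrase in a helper and exclude it from the next pick.
  - variables:
      phrases:
        - "Halloween 1"
        - "Halloween 2"
        - "Halloween 3"
      pick: >-
        {{ phrases | reject('eq', states('input_text.last_tts_phrase'))
           | list | random }}
  - service: input_text.set_value
    target:
      entity_id: input_text.last_tts_phrase
    data:
      value: "{{ pick }}"
  - service: tts.cloud_say
    data:
      entity_id: media_player.google_home_mini
      message: "{{ pick }}"
```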

So I took a slightly different approach to ‘random’: instead of building a list of predefined sentences, I ask OpenAI to generate the message for me. This way I have one automation, but an endless selection of random text. Here is sample code for entering/leaving a specific zone.

  - alias: zone ChatGPT TTS notifications
    id: zone_chatgpt_tts_notifications
    mode: queued
    initial_state: true
    trigger:
      - platform: zone
        entity_id:
          - person.mirek_malinowski
          - person.dorota_malinowska
        zone: zone.home
        event: enter
      - platform: zone
        entity_id:
          - person.mirek_malinowski
          - person.dorota_malinowska
        zone: zone.home
        event: leave

      # More zones go here

    action:
      - service: conversation.process
        data:
          agent_id: id_redacted
          text: |
            {% if trigger.event == "leave" %}
              Prepare humorous notification about {{ trigger.to_state.attributes.friendly_name }} being now outside of {{ trigger.zone.attributes.friendly_name }}, using no more than 15 words and not using emoji.
            {% else %}
              Prepare humorous notification about {{ trigger.to_state.attributes.friendly_name }} being now close to {{ trigger.zone.attributes.friendly_name }}, using no more than 15 words and not using emoji.
            {% endif %}
        response_variable: chatgpt
      - service: tts.cloud_say
        data:
          entity_id: media_player.google_home_mini
          language: pl-PL
          message: |
            {{ chatgpt.response.speech.plain.speech | trim | replace('\"', '') }}

That is great and far more flexible and scalable.

What integration are you using for OpenAI that enables the conversation.process service?

I’m using the native HA OpenAI Conversation integration.


Would you consider pulling this chatgpt random script together into a ‘Community Guideline’ and adding it to the Cookbook we are creating here on the forums?
The Home Assistant Cookbook - Index.
I think it would be a fantastic addition, and since it is your code and idea, I didn’t want to just copy you to get it there.

(It might also make a really nice script blueprint to share as you already have the code debugged…)

@Stiltjack FYI…


Using @mirekmal’s example, I created a script called Get ChatGPT response with field values for parts of the chatgpt prompt so I can seamlessly reuse the script within my automations.

I couldn’t figure out how to pass the script response directly into another action, so I created an Input Text helper first to house the ChatGPT response each time.

alias: Get ChatGPT response
sequence:
  - service: conversation.process
    data:
      agent_id: id_redacted
      text: >-
        Prepare a {{tone}} notification about "{{subject}}" using no more than
        {{length}} words and not using any emoji.
    response_variable: chatgpt
    alias: ChatGPT Prompt
    enabled: true
  - service: input_text.set_value
    metadata: {}
    data:
      value: "{{chatgpt.response.speech.plain.speech | trim | replace('\\\"','')}}"
    target:
      entity_id: input_text.chatgpt_response
fields:
  subject:
    selector:
      text: null
    name: subject
    description: What do you want to do?
    required: true
  tone:
    selector:
      text: null
    name: tone
    description: Describes the tone you want to add to the ChatGPT request.
    required: true
  length:
    selector:
      text: null
    name: length
    description: How many words do you want to limit the response to?
    default: "15"
    required: true
icon: mdi:robot-excited
mode: single

In my automations, I first call the script above, then add the following template wherever I want to include the response:
"{{ states('input_text.chatgpt_response') }}"

Seems to work pretty well so far.


‘tone’ … Thanks for that, I have a new input select:

  • cheeky
  • sarcastic
  • informational
  • bombastic
  • disinterested
  • scathing
  • amorous
  • angry
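If those tones live in an input_select (the entity name `input_select.tts_tone` below is an assumption), a random one can be fed straight into the script’s tone field:

```yaml
- service: script.get_chatgpt_response
  data:
    subject: "the dryer is finished"
    # Pick a random option from the input_select's option list
    tone: "{{ state_attr('input_select.tts_tone', 'options') | random }}"
    length: "15"
```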

I really like your script. Can you give a concrete example of an automation using it?

Thanks and apologies for the delay!

  • I’ve been using it to wake up my kids with a new funny affirmation each morning.
  • I send my wife and I different notifications when we leave the house.
  • I have a laundry nag that gets increasingly dramatic if I haven’t moved the clothes from the washer to the dryer.

Basically, I’m looking to inject personality into the messages coming from my home.
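For anyone wanting a concrete starting point, here is a rough sketch of the wake-up use case (the speaker entity, time, and wording are assumptions; the script and text helper are the ones defined earlier in the thread):

```yaml
- alias: Morning affirmation for the kids
  trigger:
    - platform: time
      at: "07:00:00"
  action:
    # Calling the script as a service waits for it to finish,
    # so the helper is populated before the TTS step runs.
    - service: script.get_chatgpt_response
      data:
        subject: "waking up ready for a great school day"
        tone: "encouraging"
        length: "20"
    - service: tts.cloud_say
      data:
        entity_id: media_player.kids_room_speaker
        message: "{{ states('input_text.chatgpt_response') }}"
```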

Has anyone done one of these to a local Ollama installation?

Do you have specific examples of using "{{ states('input_text.chatgpt_response') }}"?

I took Greg’s code and did it for Ollama and llama3.1 locally.

get_ollama_response:
  alias: Get ollama response
  sequence:
    - action: conversation.process
      alias: ollama Prompt
      enabled: true
      data:
        agent_id: conversation.llama3_1
        text: >-
          Prepare a {{tone}} notification about "{{subject}}" using no more
          than {{length}} words and not using any emoji.
      response_variable: llama3_1
    - action: input_text.set_value
      metadata: {}
      target:
        entity_id: input_text.llama3_1_response
      data:
        value: '{{llama3_1.response.speech.plain.speech | trim | replace(''\"'','''')}}'
  fields:
    subject:
      name: subject
      description: What do you want to do?
      required: true
      selector:
        text:
    tone:
      name: tone
      description: Describes the tone you want to add to the Ollama request.
      required: true
      selector:
        text:
    length:
      name: length
      description: How many words do you want to limit the response to?
      default: '15'
      required: true
      selector:
        text:
  icon: mdi:robot-excited
  mode: single
  description: Ollama response with llama3.1 data file

Sample outputs. These are from the trace so you can see input and output.

subject: What is the averate temperature in melbotne australia in the spring
tone: australian
length: 50
context:
  id: 01JDKJ1ETH2ZQR71QVJTMRP0BE
  parent_id: null
  user_id: c0fba61d419c44a1892abc47bf3065ae
llama3_1:
  response:
    speech:
      plain:
        speech: >-
          "G'day mate! In Melbourne, Australia during Spring (September to
          November), the average temperature ranges from 12 to 22 degrees
          Celsius. So grab your Akubra and enjoy the mild weather, but don't
          forget your jumper for those chilly mornings!"

and

length: '15'
subject: what is the average temperature inside
tone: '{{ ("grumpy", "happy", "pissed off") | random }}'
context:
  id: 01JDKH80J5M18DNF7B82Z3AFGB
  parent_id: null
  user_id: c0fba61d419c44a1892abc47bf3065ae
llama3_1:
  response:
    speech:
      plain:
        speech: >-
          You're looking grumpy today: Current indoor temp is 20 degrees
          Celsius.
        extra_data: null

I’m new to all this but the Ollama script was created to help me get my llama 3.2 set up to do these notifications. (Thanks again, Sir Goodenough!!)

In case you are asking (because I was slightly confused at first), the

"{{ states('input_text.chatgpt_response') }}"

comes from a helper created in the Helpers section of the Devices & services page. You will want to create a text helper from that settings page and call it something close to “chatgpt response” (mine is “Ollama response”, for example). Once you have everything created, you can call the script in your automation and use the line above in your notification action’s message. It will force it into YAML mode and should look something like this:

action: notify.mobile_app_missys_iphone
metadata: {}
data:
  message: "\"{{ states('input_text.ollama_response') }}\""

So my automation actions look like the example above.

Now what I am trying to do is get the notifications to be more relevant to the calendar event instead of super random, but this is all a great base.

Massive thanks to everyone in this thread - y’all are the real MVPs. :laughing:


The text helper was my way of working around a lack of understanding of how to pass the chatgpt conversation response directly to the next action. I’m not sure if this is the right way of doing it or not; there may be a more elegant way?

Well, it’s worked so far really well! Lemme see if I can find a better way by asking some of my colleagues. :grin:

So, putting this to use, what are we doing?
Thinking of creating a weather sensor for now, and maybe a forecast one as well, to feed info for the bot to chew on to create a ‘current’ answer. Send that plus the thing you are being informed about, like ‘someone at the door’ or ‘dryer’s done’ or whatever, and a random “tone”, and we have something fun.
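Something like this could work as a first pass (a sketch; `weather.home` and the tone list are assumptions): stuff the current conditions into the subject and let the model weave them in:

```yaml
- service: script.get_chatgpt_response
  data:
    # Combine the event with live weather state in one subject string
    subject: >-
      someone is at the door; current weather is
      {{ states('weather.home') }} at
      {{ state_attr('weather.home', 'temperature') }} degrees
    tone: "{{ ['cheeky', 'sarcastic', 'bombastic'] | random }}"
    length: "25"
```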

Well, this is a variable you generated:


    data:
      value: "{{llama3_1.response.speech.plain.speech | trim | replace('\\\"','')}}"

The script chooses to add that to the input helper.

It is, however, a live variable at that point, and you can run your TTS or notification action (or whatever) right there, or call out to another script and pass that variable as data, if you want to inline this stuff elegantly.
I kinda like the input helper, but I will probably use it to call another script, like my blueprint script for TTS…
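Sketched inline (the speaker entity is an assumption): the response variable is still live right after conversation.process, so the TTS step can use it directly with no helper at all:

```yaml
  - service: conversation.process
    data:
      agent_id: conversation.llama3_1
      text: 'Prepare a {{tone}} notification about "{{subject}}".'
    response_variable: llama3_1
  # llama3_1 is still in scope here, so no input_text round-trip is needed
  - service: tts.cloud_say
    data:
      entity_id: media_player.google_home_mini
      message: "{{ llama3_1.response.speech.plain.speech | trim }}"
```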

Thanks for sharing. I will integrate into my automation. Have a nice day.


I am hoping to use this as a way to send notification reminders about chores - I want to use it in actionable notifications on our iOS devices but also eventually as voice reminders so it’s harder to ignore.

I talked with JLo about all this and he says that we could use a response variable to pass the response directly to the notification action. I am waiting patiently for him to have time to explain how. :smiley: I’ll share here once I have more details.
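In case it helps in the meantime: script response variables (available since Home Assistant 2023.7) look like the likely mechanism, though this sketch is my guess, not JLo’s confirmed answer. The script ends with a stop action that returns the conversation result, and the calling automation reads it:

```yaml
# In the script, after conversation.process has stored its
# result in the `chatgpt` variable, return it to the caller:
  - stop: "Return the ChatGPT response"
    response_variable: chatgpt

# In the automation, capture and use it directly:
  - service: script.get_chatgpt_response
    data:
      subject: "time to move the laundry to the dryer"
      tone: "dramatic"
      length: "15"
    response_variable: result
  - service: notify.mobile_app_missys_iphone
    data:
      message: "{{ result.response.speech.plain.speech | trim }}"
```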