How to create multiple phrases to send at random to TTS?

I really like your script. Can you give a concrete example of an automation using it?

Thanks and apologies for the delay!

  • I’ve been using it to wake up my kids with a new funny affirmation each morning.
  • I send my wife and me different notifications when we leave the house.
  • I have a laundry nag that gets increasingly dramatic if I haven’t moved the clothes from the washer to the dryer.

Basically, I’m looking to inject personality into the messages coming from my home.
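For the random-phrase part of the original question, a template with the `random` filter is enough on its own, no LLM required. A minimal sketch, assuming a `tts.speak`-capable TTS entity and a speaker (both entity names here are placeholders):

```yaml
# Hypothetical example: speak one of several phrases at random.
# tts.home_assistant_cloud and media_player.bedroom_speaker are placeholders.
action: tts.speak
target:
  entity_id: tts.home_assistant_cloud
data:
  media_player_entity_id: media_player.bedroom_speaker
  message: >-
    {{ ["Rise and shine, superstar!",
        "Good morning, legend in training!",
        "Up and at 'em, champion!"] | random }}
```

Each run of the automation picks a different line from the list.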

Has anyone done one of these to a local Ollama installation?

Do you have specific examples of using "{{ states('input_text.chatgpt_response') }}"?

I took Greg’s code and adapted it for Ollama and llama3.1 locally.

get_ollama_response:
  alias: Get ollama response
  sequence:
  - data:
      agent_id: conversation.llama3_1
      text: Prepare a {{tone}} notification about "{{subject}}" using no more than
        {{length}} words and not using any emoji.
    response_variable: llama3_1
    alias: ollama Prompt
    enabled: true
    action: conversation.process
  - metadata: {}
    data:
      value: '{{llama3_1.response.speech.plain.speech | trim | replace(''\"'','''')}}'
    target:
      entity_id: input_text.llama3_1_response
    action: input_text.set_value
  fields:
    subject:
      selector:
        text:
      name: subject
      description: What do you want to do?
      required: true
    tone:
      selector:
        text:
      name: tone
      description: Describes the tone you want to add to the Ollama request.
      required: true
    length:
      selector:
        text:
      name: length
      description: How many words do you want to limit the response to?
      default: '15'
      required: true
  icon: mdi:robot-excited
  mode: single
  description: 'Ollama response with llama3.1 data file'
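To show how this script might be wired into an automation, here is a hedged sketch: the trigger sensor and notify target are placeholders, and it reads back the `input_text.llama3_1_response` helper the script writes to.

```yaml
# Hypothetical automation calling the script above.
# binary_sensor.washer_done and notify.mobile_app_my_phone are placeholders.
alias: Laundry nag via Ollama
triggers:
  - trigger: state
    entity_id: binary_sensor.washer_done
    to: "on"
actions:
  - action: script.get_ollama_response
    data:
      subject: move the laundry from the washer to the dryer
      tone: "{{ ['grumpy', 'dramatic', 'cheerful'] | random }}"
      length: "20"
  - action: notify.mobile_app_my_phone
    data:
      message: "{{ states('input_text.llama3_1_response') }}"
```

Randomizing `tone` is what gives each nag a different personality.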

Sample outputs. These are from the trace so you can see input and output.

subject: What is the averate temperature in melbotne australia in the spring
tone: australian
length: 50
context:
  id: 01JDKJ1ETH2ZQR71QVJTMRP0BE
  parent_id: null
  user_id: c0fba61d419c44a1892abc47bf3065ae
llama3_1:
  response:
    speech:
      plain:
        speech: >-
          "G'day mate! In Melbourne, Australia during Spring (September to
          November), the average temperature ranges from 12 to 22 degrees
          Celsius. So grab your Akubra and enjoy the mild weather, but don't
          forget your jumper for those chilly mornings!"

and

length: '15'
subject: what is the average temperature inside
tone: '{{ ("grumpy", "happy", "pissed off") | random }}'
context:
  id: 01JDKH80J5M18DNF7B82Z3AFGB
  parent_id: null
  user_id: c0fba61d419c44a1892abc47bf3065ae
llama3_1:
  response:
    speech:
      plain:
        speech: >-
          You're looking grumpy today: Current indoor temp is 20 degrees
          Celsius.
        extra_data: null

I’m new to all this, but the Ollama script was created to help me get my llama 3.2 setup doing these notifications. (Thanks again, Sir Goodenough!!)

In case you are asking (because I was slightly confused at first), the

"{{ states('input_text.chatgpt_response') }}"

comes from a helper created in the Helpers section of the Devices & services page. You will want to create a text helper from that settings page and call it something close to "chatgpt response" (mine is "Ollama response", for example). Once you have everything created, you can call the script in your automation and use the line above in your notification action's message. It will force it into YAML mode and should look something like this:

action: notify.mobile_app_missys_iphone
metadata: {}
data:
  message: "\"{{ states('input_text.ollama_response') }}\""

So my automation actions look like this.

Now what I am trying to do is get the notifications to be more relevant to the calendar event instead of super random, but this is all a great base.

Massive thanks to everyone in this thread - y’all are the real MVPs. :laughing:


The text helper was my way of working around a lack of understanding of how to pass the chatgpt conversation response directly to the next action. I’m not sure if this is the right way of doing it or not. There may be a more elegant way?

Well, it’s worked so far really well! Lemme see if I can find a better way by asking some of my colleagues. :grin:

So, putting this to use, what are we doing?
I'm thinking of creating a weather sensor for now, and maybe a forecast one as well, to feed info for the bot to chew on to create a 'current' answer. Send that along with the thing you are being informed about, like 'someone at the door' or 'dryer's done' or whatever, plus a random "tone", and we have something fun.
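One way to sketch that idea: fold the current weather state into the `subject` field of the script call. The `weather.home` entity and tone list below are assumptions, not part of the original script.

```yaml
# Hypothetical: feed current weather into the prompt as extra context.
# weather.home is a placeholder entity.
- action: script.get_ollama_response
  data:
    subject: >-
      someone is at the door; current weather is
      {{ states('weather.home') }} at
      {{ state_attr('weather.home', 'temperature') }} degrees
    tone: "{{ ['cheery', 'film noir', 'pirate'] | random }}"
    length: "30"
```

The model then has real sensor data to chew on when composing the announcement.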

Well, this is a variable you generated:


    data:
      value: "{{llama3_1.response.speech.plain.speech | trim | replace('\\\"','')}}"

The script chooses to add that to the input helper.

It is, however, a live variable at that point, so you can run your TTS or notification action (or whatever) right there, or call out another script and pass that variable to it as data, if you want to inline this stuff elegantly.
I kinda like the input helper, but will probably use it to call another script, like my blueprint script for TTS…
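A sketch of that inlining, continued inside the same script's sequence (the TTS entity and speaker names are assumptions):

```yaml
# Hypothetical continuation of the script's sequence: speak the response
# directly, skipping the input_text helper. Entity names are placeholders.
- action: tts.speak
  target:
    entity_id: tts.home_assistant_cloud
  data:
    media_player_entity_id: media_player.kitchen_speaker
    message: "{{ llama3_1.response.speech.plain.speech | trim | replace('\"','') }}"
```

Because `llama3_1` is still in scope at this point in the sequence, no helper entity is needed.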

Thanks for sharing. I will integrate into my automation. Have a nice day.


I am hoping to use this as a way to send notification reminders about chores - I want to use it in actionable notifications on our iOS devices but also eventually as voice reminders so it’s harder to ignore.

I talked with JLo about all this and he says that we could use a response variable to pass the response directly to the notification action. I am waiting patiently for him to have time to explain how. :smiley: I’ll share here once I have more details.

I probably described it poorly, but that's what I said above.

"{{llama3_1.response.speech.plain.speech | trim | replace('\\\"','')}}"

is the formatted response variable with the junk removed.

Add the notification action directly after this action (in the same script) and plug that template into the message to send.
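That could look something like this, appended to the script's sequence (the notify service name is a placeholder):

```yaml
# Hypothetical: send the cleaned response straight from the script,
# right after the conversation.process step. Notify target is a placeholder.
- action: notify.mobile_app_my_phone
  data:
    message: "{{ llama3_1.response.speech.plain.speech | trim | replace('\"','') }}"
```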

Anyone got a working version of this for 2024? None of the code in this old post works, and it's a top Google result.

This is great, well done! Could you explain a little more about how you did the above? I'd love to include something like this in my cinema room, as I currently have some lighting and Plex prerolls and want to take it up a notch. I assume that all of the options are local, not through ChatGPT or Gemini, so there's no cost per command?