So I took a slightly different approach to ‘random’: instead of building a list of predefined sentences, I ask OpenAI to generate the message for me. This way I have one automation, but an endless selection of random text. Here is sample code for entering/leaving a specific zone.
- alias: zone ChatGPT TTS notifications
  id: zone_chatgpt_tts_notifications
  mode: queued
  initial_state: true
  trigger:
    - platform: zone
      entity_id: person.mirek_malinowski, person.dorota_malinowska
      zone: zone.home
      event: enter
    - platform: zone
      entity_id: person.mirek_malinowski, person.dorota_malinowska
      zone: zone.home
      event: leave
    # More zones go here
  action:
    - service: conversation.process
      data:
        agent_id: id_redacted
        text: |
          {% if (trigger.event) == "leave" %}
          Prepare humorous notification about {{ trigger.to_state.attributes.friendly_name }} being now outside of {{ trigger.zone.attributes.friendly_name }}, using no more than 15 words and not using emoji.
          {% else %}
          Prepare humorous notification about {{ trigger.to_state.attributes.friendly_name }} being now close to {{ trigger.zone.attributes.friendly_name }}, using no more than 15 words and not using emoji.
          {% endif %}
      response_variable: chatgpt
    - service: tts.cloud_say
      data_template:
        entity_id: media_player.google_home_mini
        language: pl-PL
        message: |
          {{ chatgpt.response.speech.plain.speech | trim | replace('\"','') }}
Would you consider pulling this chatgpt random script together into a ‘Community Guideline’ and adding it to the Cookbook we are creating here on the forums? The Home Assistant Cookbook - Index.
I think it would be a fantastic addition, and since it is your code and idea, I didn’t want to just copy you to get it there.
(It might also make a really nice script blueprint to share as you already have the code debugged…)
Using @mirekmal’s example, I created a script called Get ChatGPT response with field values for parts of the chatgpt prompt so I can seamlessly reuse the script within my automations.
I couldn’t figure out how to pass the script response directly into another action, so I created an Input Text helper first to house the ChatGPT response each time.
alias: Get ChatGPT response
sequence:
  - service: conversation.process
    data:
      agent_id: id_redacted
      text: >-
        Prepare a {{tone}} notification about "{{subject}}" using no more than
        {{length}} words and not using any emoji.
    response_variable: chatgpt
    alias: ChatGPT Prompt
    enabled: true
  - service: input_text.set_value
    metadata: {}
    data:
      value: "{{chatgpt.response.speech.plain.speech | trim | replace('\\\"','')}}"
    target:
      entity_id: input_text.chatgpt_response
fields:
  subject:
    selector:
      text: null
    name: subject
    description: What do you want to do?
    required: true
  tone:
    selector:
      text: null
    name: tone
    description: Describes the tone you want to add to the ChatGPT request.
    required: true
  length:
    selector:
      text: null
    name: length
    description: How many words do you want to limit the response to?
    default: "15"
    required: true
icon: mdi:robot-excited
mode: single
In my automations, I first call the script above, then add the following template wherever I want to include the response: "{{ states('input_text.chatgpt_response') }}"
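As a concrete illustration, the calling side could look like the sketch below. The service and entity names (`script.get_chatgpt_response`, `notify.mobile_app_my_phone`) are examples I've made up to match the script above, not names from the original post:

```yaml
# Hypothetical automation actions: call the script, then read the helper.
# The script call waits for completion by default, so the helper is
# already populated when the notify step runs.
- service: script.get_chatgpt_response
  data:
    subject: "the dryer is done"   # example values
    tone: "sarcastic"
    length: "15"
- service: notify.mobile_app_my_phone   # example notify target
  data:
    message: "{{ states('input_text.chatgpt_response') }}"
```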
I took Greg’s code and did it for Ollama and llama3.1 locally.
get_ollama_response:
  alias: Get ollama response
  sequence:
    - data:
        agent_id: conversation.llama3_1
        text: >-
          Prepare a {{tone}} notification about "{{subject}}" using no more than
          {{length}} words and not using any emoji.
      response_variable: llama3_1
      alias: ollama Prompt
      enabled: true
      action: conversation.process
    - metadata: {}
      data:
        value: '{{llama3_1.response.speech.plain.speech | trim | replace(''\"'','''')}}'
      target:
        entity_id: input_text.llama3_1_response
      action: input_text.set_value
  fields:
    subject:
      selector:
        text:
      name: subject
      description: What do you want to do?
      required: true
    tone:
      selector:
        text:
      name: tone
      description: Describes the tone you want to add to the Ollama request.
      required: true
    length:
      selector:
        text:
      name: length
      description: How many words do you want to limit the response to?
      default: '15'
      required: true
  icon: mdi:robot-excited
  mode: single
  description: 'Ollama response with llama3.1 data file'
Sample outputs. These are from the trace so you can see input and output.
subject: What is the averate temperature in melbotne australia in the spring
tone: australian
length: 50
context:
  id: 01JDKJ1ETH2ZQR71QVJTMRP0BE
  parent_id: null
  user_id: c0fba61d419c44a1892abc47bf3065ae
llama3_1:
  response:
    speech:
      plain:
        speech: >-
          "G'day mate! In Melbourne, Australia during Spring (September to
          November), the average temperature ranges from 12 to 22 degrees
          Celsius. So grab your Akubra and enjoy the mild weather, but don't
          forget your jumper for those chilly mornings!"
and
length: '15'
subject: what is the average temperature inside
tone: '{{ ("grumpy", "happy", "pissed off") | random }}'
context:
  id: 01JDKH80J5M18DNF7B82Z3AFGB
  parent_id: null
  user_id: c0fba61d419c44a1892abc47bf3065ae
llama3_1:
  response:
    speech:
      plain:
        speech: >-
          You're looking grumpy today: Current indoor temp is 20 degrees
          Celsius.
        extra_data: null
I’m new to all this but the Ollama script was created to help me get my llama 3.2 set up to do these notifications. (Thanks again, Sir Goodenough!!)
In case you are asking (because I was slightly confused at first), the
"{{ states('input_text.chatgpt_response') }}"
comes from a helper created in the Helpers section of the Devices & services page. You will want to create a text helper from that settings page and call it something close to “chatgpt response” (mine is “Ollama response”, for example). Once everything is created, you can call the script in your automation and use the line above in your Notification action’s message. It will force it into YAML mode and should look something like this:
The text helper was my way of working around a lack of understanding how to pass the chatgpt conversation response directly to the next action. I’m not sure if this is the right way of doing it or not. There may be a more elegant way?
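For anyone who prefers YAML over the UI, here is a sketch of an equivalent text helper defined in `configuration.yaml` (the entity name assumes `input_text.chatgpt_response` as used above):

```yaml
# configuration.yaml sketch: equivalent of creating the text helper in the UI
input_text:
  chatgpt_response:
    name: chatgpt response
    max: 255  # input_text caps at 255 chars, so long replies get truncated
```

One caveat of the helper approach: `input_text` entities store at most 255 characters, so a long model reply will be cut off.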
So, putting this to use, what are we doing?
I’m thinking of creating a weather sensor for now, maybe a forecast one as well, to feed the bot info to chew on so it can produce a ‘current’ answer. Send that along with the thing you are being informed about, like ‘someone at the door’ or ‘dryer’s done’, plus a random “tone”, and we have something fun.
data:
value: "{{llama3_1.response.speech.plain.speech | trim | replace('\\\"','')}}"
The script happens to write that to the input helper.
It is, however, a live variable at that point, so you can run your TTS or notification action right there, or call another script and pass that variable to it as data, if you want to inline this stuff elegantly.
I kinda like the input helper, but will probably use it to call another script like my blueprint script for TTS…
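To show what the inline approach looks like, here is a sketch that uses the response variable directly in the next action of the same script, skipping the helper entirely (the TTS target is an example borrowed from the first post):

```yaml
# Sketch: act on the response_variable in the very next step,
# no input_text helper involved.
sequence:
  - service: conversation.process
    data:
      agent_id: conversation.llama3_1
      text: 'Prepare a {{tone}} notification about "{{subject}}".'
    response_variable: llama3_1
  - service: tts.cloud_say   # example TTS target
    data:
      entity_id: media_player.google_home_mini
      message: "{{ llama3_1.response.speech.plain.speech | trim }}"
```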
I am hoping to use this as a way to send notification reminders about chores - I want to use it in actionable notifications on our iOS devices but also eventually as voice reminders so it’s harder to ignore.
I talked with JLo about all this and he says that we could use a response variable to pass the response directly to the notification action. I am waiting patiently for him to have time to explain how. I’ll share here once I have more details.
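In the meantime, here is a sketch of how I understand that would work, based on the script-response feature (`stop` with `response_variable`, and `response_variable` on the script call) that Home Assistant added in 2023.7; treat the details as an assumption until confirmed:

```yaml
# In the script: return the conversation result instead of writing a helper.
sequence:
  - service: conversation.process
    data:
      agent_id: id_redacted
      text: 'Prepare a {{tone}} notification about "{{subject}}".'
    response_variable: chatgpt
  - stop: "Returning ChatGPT response"
    response_variable: chatgpt
```

```yaml
# In the automation: capture the script's response and use it directly.
- service: script.get_chatgpt_response
  data:
    subject: "someone at the door"   # example values
    tone: "humorous"
  response_variable: reply
- service: notify.mobile_app_my_phone   # example notify target
  data:
    message: "{{ reply.response.speech.plain.speech }}"
```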