Since integrating Extended OpenAI Conversation I have been looking for a way to have OpenAI ask me a question and then act on my answer; as I haven't found another solution in the forums, I am sharing mine.
I managed to do it using AlexxIT's awesome Stream Assist integration, which lets you use any camera to activate Assist.
With this integration you can choose at which stage of the pipeline to start.
I created a script that starts the pipeline from the intent stage and passes the intent directly as: Ask me: "Do you want to turn on the mirror light?"
Then I run the pipeline a second time starting from STT (speech to text), skipping the wake word.
Both calls use the same conversation_id so that OpenAI remembers what it asked.
Example of a working script:
alias: OpenAI ask questions
sequence:
  - service: stream_assist.run
    data:
      camera_entity_id: camera.reolink_sala_1_sub
      player_entity_id: media_player.tablet
      stt_start_media: media-source://media_source/local/beep.mp3
      assist:
        start_stage: intent # Start the voice pipeline directly from the intent stage
        end_stage: tts
        # The intent provided is the question that OpenAI has to ask me
        intent_input: >-
          Ask me "Do you want to turn on the mirror's light?"
        # conversation_id is fundamental so that OpenAI remembers the question when I answer
        conversation_id: 01HV8SNMZXEHKJYYR38GD2Y27G
  - delay: # The delay gives OpenAI time to finish the question before the assistant starts again
      hours: 0
      minutes: 0
      seconds: 5
      milliseconds: 0
    enabled: true
  - service: stream_assist.run
    data:
      camera_entity_id: camera.reolink_sala_1_sub
      player_entity_id: media_player.tablet
      stt_start_media: media-source://media_source/local/beep.mp3
      assist:
        start_stage: stt # Now start the pipeline from speech to text so I can answer
        end_stage: tts
        conversation_id: 01HV8SNMZXEHKJYYR38GD2Y27G
    enabled: true
mode: single
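For reference, the script can then be triggered from any automation. A minimal sketch, assuming the script above is saved as script.openai_ask_questions and using a placeholder motion sensor as the trigger:

alias: Ask about the mirror light on motion
trigger:
  - platform: state
    entity_id: binary_sensor.living_room_motion # placeholder, use your own trigger entity
    to: "on"
action:
  - service: script.openai_ask_questions # placeholder entity_id derived from the script alias
mode: single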
There is probably a way to do roughly the same thing without Stream Assist.
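For example, on recent Home Assistant versions (2024.12 or later) the built-in assist_satellite.start_conversation action does something similar on an Assist satellite (e.g. Voice PE or an ESPHome satellite): it announces a message and then listens for the answer on the same device. A minimal sketch, assuming a satellite entity called assist_satellite.living_room (the entity name and the exact fields supported depend on your version):

alias: Ask about the mirror light (satellite)
sequence:
  - service: assist_satellite.start_conversation
    target:
      entity_id: assist_satellite.living_room # assumption: your own satellite entity
    data:
      start_message: Do you want to turn on the mirror light?
      # extra_system_prompt gives the conversation agent context for interpreting the answer
      extra_system_prompt: The user was just asked whether to turn on the mirror light.
mode: single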
This is excellent. I searched the whole net and only this post sheds light on continuous conversation. I wrote a continuous conversation script based on your example:
alias: AI ask questions
sequence:
  - alias: Ask me a question
    service: stream_assist.run
    data:
      player_entity_id: media_player.kodi
      assist:
        start_stage: intent
        end_stage: tts
        intent_input: ask me "Yes?"
        conversation_id: 01HV8SNMZXEHKJYYR38GD2Y27G
    response_variable: cv_response
  - alias: Loop listening & execution until continue_conversation is off
    repeat:
      sequence:
        - alias: Sequence that flags whether to continue the conversation or not
          sequence:
            - variables:
                resp_type: >-
                  {{ cv_response['intent-end'].data.intent_output.response.response_type }}
                resp_last_input_speech: "{{ cv_response['intent-start'].data.intent_input }}"
                resp_last_output_speech: >-
                  {{ cv_response['intent-end'].data.intent_output.response.speech.plain.speech }}
                resp_last_char: "{{ resp_last_output_speech[-4:] }}"
              alias: >-
                Get resp_type, initialize resp_last_speeches (input & output),
                resp_last_char and the continue flag
            - service: system_log.write
              metadata: {}
              data:
                level: error
                message: >-
                  {{ resp_last_input_speech }} || {{ resp_last_output_speech }}
                  || {{ resp_last_char }} || {{ resp_type }} || {{ "?" in resp_last_char }}
            - if:
                - condition: template
                  value_template: "{{ '?' in resp_last_char }}"
                  alias: Test whether the last speech ended with a "?"
              then:
                - service: input_boolean.turn_on
                  metadata: {}
                  data: {}
                  target:
                    entity_id: input_boolean.continue_conversation
              else:
                - service: input_boolean.turn_off
                  metadata: {}
                  data: {}
                  target:
                    entity_id: input_boolean.continue_conversation
              enabled: true
              alias: Decide whether to set the continue flag "on" or "off"
        - alias: Listen to the user if the continue flag is "on"
          if:
            - condition: state
              entity_id: input_boolean.continue_conversation
              state: "on"
          then:
            - wait_for_trigger:
                - platform: state
                  entity_id:
                    - media_player.kodi
                  to: idle
                  enabled: true
              timeout:
                hours: 0
                minutes: 1
                seconds: 0
                milliseconds: 0
              enabled: true
              continue_on_timeout: true
            - alias: Listen to and execute the user's speech
              service: stream_assist.run
              data:
                stream_source: rtsp://192.168.2.34/unicast
                player_entity_id: media_player.kodi
                assist:
                  start_stage: stt
                  end_stage: tts
                  conversation_id: 01HV8SNMZXEHKJYYR38GD2Y27G
                  pipeline_id: 01gzv2xe278epc0svn0cgm3p3p
              enabled: true
              response_variable: cv_response
      until:
        - condition: state
          entity_id: input_boolean.continue_conversation
          state: "off"
          enabled: true
mode: single
description: ""
fields: {}
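Note that the loop relies on an input_boolean.continue_conversation helper, which is not part of the script itself. If you don't have it yet, it can be created as a toggle helper in the UI or in configuration.yaml, for example:

input_boolean:
  continue_conversation:
    name: Continue conversation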