triggers:
- trigger: conversation
command:
- Was siehst Du im Hof
- Wer steht vor der HaustĂĽr
- Was ist vor der HaustĂĽr
- Wer ist im Hof
- Siehst Du was im Hof
- Wer steht im Hof
called with this action:
action: conversation.process
data:
text: Wer ist im Hof
language: de
agent_id: conversation.azure_openai_conversation
Response over Chat (app, browser, etc.) → working
init_options:
start_stage: intent
end_stage: intent
input:
text: Wer steht im Hof
pipeline: ***
conversation_id: null
stage: done
run:
pipeline: ***
language: de
conversation_id: ***
runner_data:
stt_binary_handler_id: null
timeout: 300
events:
- type: run-start
data:
pipeline: ***
language: de
conversation_id: ***
runner_data:
stt_binary_handler_id: null
timeout: 300
timestamp: "2025-06-26T13:42:50.467398+00:00"
- type: intent-start
data:
engine: conversation.azure_openai_conversation
language: de-DE
intent_input: Wer steht im Hof
conversation_id: ***
device_id: null
prefer_local_intents: true
timestamp: "2025-06-26T13:42:50.467427+00:00"
- type: intent-end
data:
processed_locally: false
intent_output:
response:
speech:
plain:
speech: Ich kann keine Person oder ein Tier in diesem Bild erkennen.
extra_data: null
card: {}
language: "*"
response_type: action_done
data:
targets: []
success: []
failed: []
conversation_id: ***
continue_conversation: false
timestamp: "2025-06-26T13:42:52.500402+00:00"
- type: run-end
data: null
timestamp: "2025-06-26T13:42:52.500469+00:00"
intent:
engine: conversation.azure_openai_conversation
language: de-DE
intent_input: Wer steht im Hof
conversation_id: ***
device_id: null
prefer_local_intents: true
done: true
processed_locally: false
intent_output:
response:
speech:
plain:
speech: Ich kann keine Person oder ein Tier in diesem Bild erkennen.
extra_data: null
card: {}
language: "*"
response_type: action_done
data:
targets: []
success: []
failed: []
conversation_id: ***
continue_conversation: false
Response over conversation.process (Developer Tools → Actions, or the HomeAssistantAssist Alexa skill) → not working
response:
speech:
plain:
speech: >-
Im Hof gibt es aktuell keine Hinweise auf Bewegung oder Anwesenheit. Die
Lichter im Hof sind aus. Es sind keine Sensoren aktiv, die eine Person
im Hof erkennen wĂĽrden.
extra_data: null
card: {}
language: de
response_type: action_done
data:
targets: []
success: []
failed: []
conversation_id: ***
continue_conversation: false
Does anybody have an idea why chat (text/voice) and conversation.process behave differently?
alias: "LLM: Was siehst Du im Hof"
description: ""
triggers:
- trigger: conversation
command:
- Was siehst Du im Hof
- Wer steht vor der HaustĂĽr
- Was ist vor der HaustĂĽr
- Wer ist im Hof
- Siehst Du was im Hof
- Wer steht im Hof
conditions: []
actions:
- action: llmvision.image_analyzer
*** Shortened because irrelevant ***
response_variable: response
- set_conversation_response: "{{response.response_text}}"
mode: single
Call via Assist Web:
stage: done
run:
pipeline: ***
language: de
conversation_id: ***
runner_data:
stt_binary_handler_id: null
timeout: 300
events:
- type: run-start
data:
pipeline: ***
language: de
conversation_id: ***
runner_data:
stt_binary_handler_id: null
timeout: 300
timestamp: "2025-06-27T06:30:14.715723+00:00"
- type: intent-start
data:
engine: conversation.azure_openai_conversation
language: de-DE
intent_input: Wer ist im Hof
conversation_id: ***
device_id: null
prefer_local_intents: true
timestamp: "2025-06-27T06:30:14.715758+00:00"
- type: intent-end
data:
processed_locally: false
intent_output:
response:
speech:
plain:
speech: Ich kann keine Person oder ein Tier in dem Bild erkennen.
extra_data: null
card: {}
language: "*"
response_type: action_done
data:
targets: []
success: []
failed: []
conversation_id: ***
continue_conversation: false
timestamp: "2025-06-27T06:30:17.457137+00:00"
- type: run-end
data: null
timestamp: "2025-06-27T06:30:17.457198+00:00"
intent:
engine: conversation.azure_openai_conversation
language: de-DE
intent_input: Wer ist im Hof
conversation_id: ***
device_id: null
prefer_local_intents: true
done: true
processed_locally: false
intent_output:
response:
speech:
plain:
speech: Ich kann keine Person oder ein Tier in dem Bild erkennen.
extra_data: null
card: {}
language: "*"
response_type: action_done
data:
targets: []
success: []
failed: []
conversation_id: ***
continue_conversation: false
= Working
Call via Action (Dev Console or via the HomeAssistantAssist Alexa skill)
action: conversation.process
data:
text: Wer ist im Hof
language: de
agent_id: conversation.azure_openai_conversation
Response:
response:
speech:
plain:
speech: >-
Es gibt keine Sensoren oder Hinweise darauf, dass aktuell jemand im Hof
ist. Bewegungsmelder im Hofbereich melden nichts.
extra_data: null
card: {}
language: de
response_type: action_done
data:
targets: []
success: []
failed: []
conversation_id: ***
continue_conversation: false
= Wrong/different answer.
It doesn't use the automation; instead, the LLM answers through its Assist tool access.
So again:
Why does the sentence trigger the automation when I say it in the web or iOS app, but not when I call it through the conversation.process action?
Consider the conversation.process action a single request to the LLM. It does not interact with other objects unless you explicitly pass information through a variable or attach files, and it does not run intents/automations.
You may be confused by the naming, but these are completely different things. You specified phrases for the trigger, and when you say them, the automation is triggered.
The conversation.process action has nothing to do with this.
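To illustrate the difference as I understand it (a hedged sketch, since the exact matching behaviour depends on the HA version): sentence triggers are matched by the default Home Assistant conversation agent, while a conversation.process call with an explicit agent_id goes straight to that agent. The default agent's entity id conversation.home_assistant is an assumption here; check it on your own instance:

# As observed above: goes straight to the LLM agent, sentence triggers are skipped
action: conversation.process
data:
  text: Wer ist im Hof
  language: de
  agent_id: conversation.azure_openai_conversation

# Sketch: omit agent_id (or set it to the default agent, assumed to be
# conversation.home_assistant) so the text is matched against local
# intents / sentence triggers first. This is an assumption; verify locally.
action: conversation.process
data:
  text: Wer ist im Hof
  language: de

If the second variant does trigger the automation, pointing the Alexa skill at the default agent instead of the Azure agent might be worth a try.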
If it doesn't work, it would be great if someone had an idea of how I could still run the automation via:
Home Assistant Voice Assist → works well on every call
Home Assistant Voice Assist text chat → works well on every call
and
Home Assistant Assist (the Alexa skill that calls conversation.process) → currently does not work
Everything else (lights, shades, asking for room temperature, …) works with this skill over conversation.process in exactly the same way as, for example, the text chat in the Home Assistant app (Assist).
But automations do not work through it (while they do work through, for example, the text chat in the Home Assistant app Assist).
So I am looking for a tip on how to trigger the same automation via the skill as via Assist.
Please excuse me, but this is not a problem with the author of the skill; it is a friendly question to the forum: why can an automation be started via LLM voice or text chat in HA, but not via conversation.process?
Maybe someone else in the forum has an idea of how this could be solved, i.e. how to make an API call similar to conversation.process that starts the automation with a sentence.
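A hedged sketch of one such workaround, bypassing sentence matching entirely by triggering the automation directly. automation.trigger is a standard HA action; the entity id below is hypothetical (derived from the alias), and note that set_conversation_response would not return anything to the caller on this path:

action: automation.trigger
target:
  entity_id: automation.llm_was_siehst_du_im_hof  # hypothetical entity id
data:
  skip_condition: true  # run even when the automation's conditions are not met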
Since the call also fails via HA → Dev → Actions → conversation.process, this is not a problem of the Alexa skill or of the HomeAssistantAssist author.
See the manual: Conversation - Home Assistant. Quote:
"The Conversation integration allows you to converse with Home Assistant. You can either converse by pressing the microphone in the frontend (supported browsers only (no iOS)) or by calling the conversation/process action with the transcribed text."
The instructions state that there is NO difference between the frontend and the action (contrary to what you say).
Again: automations are addressed correctly via the frontend or the microphone, but not via the conversation.process action.
That is not a list of issues, but a log of updates to the documentation page, which apparently has not been updated since 2023.
You do not want to accept the information at all; I have already explained everything to you in my previous answers.