Assist ignores automations with conversation.process

I use an automation to ask Assist (linked to an Azure OpenAI ChatGPT conversation agent) what it can see on a camera.

Calling the automation works great via Assist in the Home Assistant app or via the Assist chat on the PC.

When I use conversation.process (because I also use GitHub - fabianosan/HomeAssistantAssist, an Alexa skill that integrates Home Assistant Assist or your preferred generative AI via the conversation API and also lets you open your favorite dashboard on an Echo Show), it ignores the automation.

So how can I use this trigger:

triggers:
  - trigger: conversation
    command:
      - Was siehst Du im Hof
      - Wer steht vor der Haustür
      - Was ist vor der Haustür
      - Wer ist im Hof
      - Siehst Du was im Hof
      - Wer steht im Hof

with this action:

action: conversation.process
data:
  text: Wer ist im Hof
  language: de
  agent_id: conversation.azure_openai_conversation

Response via chat (app, browser, etc.) → working

init_options:
  start_stage: intent
  end_stage: intent
  input:
    text: Wer steht im Hof
  pipeline: ***
  conversation_id: null
stage: done
run:
  pipeline: ***
  language: de
  conversation_id: ***
  runner_data:
    stt_binary_handler_id: null
    timeout: 300
events:
  - type: run-start
    data:
      pipeline: ***
      language: de
      conversation_id: ***
      runner_data:
        stt_binary_handler_id: null
        timeout: 300
    timestamp: "2025-06-26T13:42:50.467398+00:00"
  - type: intent-start
    data:
      engine: conversation.azure_openai_conversation
      language: de-DE
      intent_input: Wer steht im Hof
      conversation_id: ***
      device_id: null
      prefer_local_intents: true
    timestamp: "2025-06-26T13:42:50.467427+00:00"
  - type: intent-end
    data:
      processed_locally: false
      intent_output:
        response:
          speech:
            plain:
              speech: Ich kann keine Person oder ein Tier in diesem Bild erkennen.
              extra_data: null
          card: {}
          language: "*"
          response_type: action_done
          data:
            targets: []
            success: []
            failed: []
        conversation_id: ***
        continue_conversation: false
    timestamp: "2025-06-26T13:42:52.500402+00:00"
  - type: run-end
    data: null
    timestamp: "2025-06-26T13:42:52.500469+00:00"
intent:
  engine: conversation.azure_openai_conversation
  language: de-DE
  intent_input: Wer steht im Hof
  conversation_id: ***
  device_id: null
  prefer_local_intents: true
  done: true
  processed_locally: false
  intent_output:
    response:
      speech:
        plain:
          speech: Ich kann keine Person oder ein Tier in diesem Bild erkennen.
          extra_data: null
      card: {}
      language: "*"
      response_type: action_done
      data:
        targets: []
        success: []
        failed: []
    conversation_id: ***
    continue_conversation: false

Response via conversation.process (Developer Tools → Actions, or the HomeAssistantAssist Alexa skill) → not working

response:
  speech:
    plain:
      speech: >-
        Im Hof gibt es aktuell keine Hinweise auf Bewegung oder Anwesenheit. Die
        Lichter im Hof sind aus. Es sind keine Sensoren aktiv, die eine Person
        im Hof erkennen würden.
      extra_data: null
  card: {}
  language: de
  response_type: action_done
  data:
    targets: []
    success: []
    failed: []
conversation_id: ***
continue_conversation: false

Does anybody have an idea why chat (text/voice) and conversation.process behave differently?

Thanks!

Assist works correctly with conversation.process and any other text data.

Show us the full code for your automation.

No problem.

Automation:

alias: "LLM: Was siehst Du im Hof"
description: ""
triggers:
  - trigger: conversation
    command:
      - Was siehst Du im Hof
      - Wer steht vor der Haustür
      - Was ist vor der Haustür
      - Wer ist im Hof
      - Siehst Du was im Hof
      - Wer steht im Hof
conditions: []
actions:
  - action: llmvision.image_analyzer
    *** Shortened because irrelevant ***
    response_variable: response
  - set_conversation_response: "{{response.response_text}}"
mode: single

Call via Assist Web:

stage: done
run:
  pipeline: ***
  language: de
  conversation_id: ***
  runner_data:
    stt_binary_handler_id: null
    timeout: 300
events:
  - type: run-start
    data:
      pipeline: ***
      language: de
      conversation_id: ***
      runner_data:
        stt_binary_handler_id: null
        timeout: 300
    timestamp: "2025-06-27T06:30:14.715723+00:00"
  - type: intent-start
    data:
      engine: conversation.azure_openai_conversation
      language: de-DE
      intent_input: Wer ist im Hof
      conversation_id: ***
      device_id: null
      prefer_local_intents: true
    timestamp: "2025-06-27T06:30:14.715758+00:00"
  - type: intent-end
    data:
      processed_locally: false
      intent_output:
        response:
          speech:
            plain:
              speech: Ich kann keine Person oder ein Tier in dem Bild erkennen.
              extra_data: null
          card: {}
          language: "*"
          response_type: action_done
          data:
            targets: []
            success: []
            failed: []
        conversation_id: ***
        continue_conversation: false
    timestamp: "2025-06-27T06:30:17.457137+00:00"
  - type: run-end
    data: null
    timestamp: "2025-06-27T06:30:17.457198+00:00"
intent:
  engine: conversation.azure_openai_conversation
  language: de-DE
  intent_input: Wer ist im Hof
  conversation_id: ***
  device_id: null
  prefer_local_intents: true
  done: true
  processed_locally: false
  intent_output:
    response:
      speech:
        plain:
          speech: Ich kann keine Person oder ein Tier in dem Bild erkennen.
          extra_data: null
      card: {}
      language: "*"
      response_type: action_done
      data:
        targets: []
        success: []
        failed: []
    conversation_id: ***
    continue_conversation: false

= Working

Call via action (Developer Tools or via the HomeAssistantAssist Alexa skill)

action: conversation.process
data:
  text: Wer ist im Hof
  language: de
  agent_id: conversation.azure_openai_conversation

Response:

response:
  speech:
    plain:
      speech: >-
        Es gibt keine Sensoren oder Hinweise darauf, dass aktuell jemand im Hof
        ist. Bewegungsmelder im Hofbereich melden nichts.
      extra_data: null
  card: {}
  language: de
  response_type: action_done
  data:
    targets: []
    success: []
    failed: []
conversation_id: ***
continue_conversation: false

= Wrong/different answer.

It doesn’t use the automation; it answers from the LLM’s Assist tool access instead.

So again:

Why does the sentence trigger the automation when I say it in the web or iOS app, but not when I use the conversation.process action?

Thanks again!

Consider the conversation.process action as a single request to the LLM. It does not interact with other objects unless you explicitly pass information through a variable or attach files, and it does not run intents/automations.
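
By contrast, the Assist chat in the frontend goes through the Assist pipeline — that is where the init_options block (start_stage: intent, end_stage: intent) and the prefer_local_intents: true flag in your traces come from — rather than being a direct call to one agent. As a rough, untested sketch (assuming the Python websockets package, a long-lived access token, and placeholder host/sentence values), the same pipeline can be driven from outside the frontend with the assist_pipeline/run WebSocket command:

# Sketch only: run the Assist pipeline (intent stage, text in / text out)
# over the WebSocket API instead of calling conversation.process.
import asyncio
import json

import websockets  # pip install websockets


async def ask_assist(host: str, token: str, text: str) -> str:
    async with websockets.connect(f"ws://{host}:8123/api/websocket") as ws:
        await ws.recv()  # "auth_required"
        await ws.send(json.dumps({"type": "auth", "access_token": token}))
        await ws.recv()  # "auth_ok"

        # Same kind of call the Assist chat makes: intent stage only, text input.
        await ws.send(json.dumps({
            "id": 1,
            "type": "assist_pipeline/run",
            "start_stage": "intent",
            "end_stage": "intent",
            "input": {"text": text},
        }))

        # Read pipeline events; the answer arrives in the intent-end event.
        answer = ""
        while True:
            msg = json.loads(await ws.recv())
            event = msg.get("event") or {}
            if event.get("type") == "intent-end":
                answer = (event["data"]["intent_output"]["response"]
                          ["speech"]["plain"]["speech"])
            if event.get("type") == "run-end":
                return answer


# Example (placeholders):
# print(asyncio.run(ask_assist("homeassistant.local", "LONG_LIVED_TOKEN", "Wer ist im Hof")))

Whether that also makes your sentence triggers fire is something you would have to verify yourself; the point is only that the chat path and conversation.process are different entry points.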


Sorry again.

But I don’t understand why it works when I use the chat/voice assistant directly, but not via conversation.process, which to my mind should be the same thing.

If I talk to the LLM directly via the web chat, it works. That is also a single request to the LLM.

Why is this treated differently? Does anyone know?

Any idea for a workaround?

You may be confused by the naming, but these are completely different things. You specified phrases for the trigger, and when you say them, the automation is triggered.
The conversation.process action has nothing to do with this.


The same question has already been asked here:

Unfortunately, without an answer.

If it can’t work that way, it would be great if someone had an idea how I could still trigger the automation via:

Home Assistant Voice Assist :green_circle: - works well on every call
Home Assistant Voice Assist text chat :green_circle: - works well on every call
and
HomeAssistantAssist (the Alexa skill that uses conversation.process) :red_circle: - doesn’t work

Maybe someone has an idea?

Could you provide a link to the instructions for this method that you used to configure it?

Sure:

Everything (lights, shades, asking for the room temperature, …) works with this skill over conversation.process in the same way as when I use, for example, the text chat in the Home Assistant app (Assist).

But automations don’t work with it (they do work with, for example, the text chat in the Home Assistant app’s Assist).

So I’m looking for a tip on how to trigger the same automation from the skill as from Assist.

The skill’s call is here:
HomeAssistantAssist/lambda/lambda_function.py at main · fabianosan/HomeAssistantAssist · GitHub

ha_api_url = "{}/api/conversation/process".format(home_assistant_url)
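
For reference, what the skill sends is essentially a plain REST call like the one below (rough illustration only; the skill’s exact payload may differ, and host/token are placeholders). It targets a conversation agent directly, i.e. the same entry point as the conversation.process action, not a pipeline run:

# Rough illustration of the REST call -- not the skill's actual code.
import requests  # pip install requests

home_assistant_url = "http://homeassistant.local:8123"  # placeholder
token = "LONG_LIVED_ACCESS_TOKEN"                       # placeholder

ha_api_url = "{}/api/conversation/process".format(home_assistant_url)

resp = requests.post(
    ha_api_url,
    headers={"Authorization": "Bearer {}".format(token)},
    json={
        "text": "Wer ist im Hof",
        "language": "de",
        "agent_id": "conversation.azure_openai_conversation",
    },
    timeout=30,
)

# The JSON mirrors what conversation.process returns in Developer Tools.
print(resp.json()["response"]["speech"]["plain"]["speech"])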

It is best to check with the authors of these integrations for information about custom integrations.

Please excuse me, but this is not a problem with the author of the skill; it is a friendly question to the forum: why can an automation be started via LLM voice or text chat in HA, but not via conversation.process?

Maybe someone else in the forum has an idea how to solve this, i.e. how to make an API call similar to conversation.process that starts the automation with a sentence.

Since the problem also occurs via HA → Developer Tools → Actions → conversation.process, it is not a problem with the Alexa skill or the HomeAssistantAssist author.

See manual:
Conversation - Home Assistant
Quote:
"The Conversation integration allows you to converse with Home Assistant. You can either converse by pressing the microphone in the frontend (supported browsers only (no iOS)) or by calling the conversation/process action with the transcribed text."

The documentation states that there is NO difference between the frontend and the action (unlike what you say).
Again:
Automations are triggered correctly via the frontend or microphone, but not via the conversation.process action.

Thank you.

The documentation states that there is NO difference between the frontend and the action

You are dealing with open source :wink: The information is not always up to date.

However, undocumented methods of interacting with the pipeline are best discussed with the author of the integration.


Just forget about the custom skill.

Just try it out:

Pass a sentence via Developer Tools → Actions:

action: conversation.process
data:
  text: Execute my automation
  language: de
  agent_id: conversation.azure_openai_conversation

= It will not execute the automation.

Do the same with the Assist chat:

It works.

I can’t see in your issue listing why it shouldn’t work, or why the Home Assistant documentation wouldn’t be up to date.

This is not a list of issues, but a log of updates to the documentation page. It can be said that it has not been updated since 2023.
You do not want to accept the information at all; I have already explained everything to you in my previous answers.
