Assist Pipeline executes without error but action does not happen

Testing my new Assist pipeline, I can successfully get “ok nabu” to turn on specific lights. But if I try, for example, “ok nabu, turn on all lights”, it says “done” and nothing actually happens. The ESP32 Box is in the dining room, it’s assigned to the dining room area, and the lights (four of them) are all assigned to the dining room.

Looking at the raw debug log, I see the execution of the verbal command, but the data / targets / success / failed lines are all empty.

conversation_id: null
device_id: null
prefer_local_intents: false
processed_locally: false
intent_output:
  response:
    speech:
      plain:
        speech: Okay.  Dining room lights on.
        extra_data: null
    card: {}
    language: en
    response_type: action_done
    data:
      targets: []
      success: []
      failed: []
  conversation_id: 01JFMASCFPP1KDDGY3HGXQ62A4

I cannot figure out where the problem lies. Any suggestions?
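
For anyone trying to reproduce this without the voice hardware, the same sentence can be replayed from a script and the full intent response captured, so the targets / success / failed lists can be inspected directly. This is only a rough sketch: conversation.process and persistent_notification.create are core Home Assistant actions, but the script name is a placeholder, the agent/pipeline selection is whatever your default is, and older installs use service: instead of action:.

script:
  debug_assist_command:
    sequence:
      # Send the same sentence as text, bypassing wake word and STT.
      - action: conversation.process
        data:
          text: "turn on all lights"
          language: en
        # Capture the intent response, including data.targets / success / failed.
        response_variable: assist_result
      # Dump the captured response somewhere readable.
      - action: persistent_notification.create
        data:
          title: Assist debug
          message: "{{ assist_result }}"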

I have the same issue (honestly, I don’t know if I’ve ever actually seen a target in the logs). This is for successful as well as unsuccessful commands.

I have the same problem. I talk to Assist in Swedish on my Apple Watch, have a local Whisper instance transcribing it, then send it to the Google LLM for interpretation, and lastly use Google Translate in Swedish for text-to-speech.

What happens is that what I speak is interpreted correctly, Assist identifies the entity to control, and for some reason answers in English, but with the correct entity, like “Turning off Arbetsrum ljusslinga” (an LED strip in the study). However, nothing actually happens. The targets list is empty.

intent:
  engine: conversation.google_generative_ai
  language: sv
  intent_input: " Stäng av arbetsrum Jusling"
  conversation_id: 01JN8WQVZKM8Q16JXMJK09Q2CG
  device_id: 12aaec246a3eb79a153c51c87b4365d2
  prefer_local_intents: false
  done: true
  processed_locally: false
  intent_output:
    response:
      speech:
        plain:
          speech: Turning off Arbetsrum ljusslinga.
          extra_data: null
      card: {}
      language: sv
      response_type: action_done
      data:
        targets: []
        success: []
        failed: []
    conversation_id: 01JN8WQVZKM8Q16JXMJK09Q2CG

Don’t use an LLM; practice with the built-in agent first. That way you will see input errors.
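
To spell that out, a quick test against the built-in agent is a single action call from Developer Tools. A minimal sketch only: the agent_id shown is the usual default conversation entity, but check the conversation entities on your own install (and older installs use service: instead of action:).

# Run the same command through the default (non-LLM) agent only.
action: conversation.process
data:
  text: "turn on all lights"
  agent_id: conversation.home_assistant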

I’ve got the same problem: the built-in agent works fine, so it seems to be a problem isolated to the LLM.

I quite often get the response

The hallway lights are currently off. I will turn them on now. HassTurnOn('Hallway Lights', 'switch')

…in the conversation

Posting my intent result for the thread:

intent:
  engine: conversation.azure_openai_conversation
  language: en
  intent_input: turn on the hallway lights
  conversation_id: null
  device_id: null
  prefer_local_intents: false
  done: true
  processed_locally: false
  intent_output:
    response:
      speech:
        plain:
          speech: |-
            The hallway lights are currently off. I will turn them on now. 

            HassTurnOn('Hallway Lights', 'switch')
          extra_data: null
      card: {}
      language: en
      response_type: action_done
      data:
        targets: []
        success: []
        failed: []
    conversation_id: 01JNEXVX675MJ7ZPV7NR6SG1WC
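
One thing worth ruling out when the model echoes HassTurnOn(...) as text instead of executing it: confirm the target entity responds when called directly, so the empty targets list can be pinned on the conversation agent rather than on the device. A trivial sketch; the entity_id here is only a guess at how the hallway lights are named.

# Bypass Assist entirely and call the entity directly.
action: switch.turn_on
target:
  entity_id: switch.hallway_lights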