Voice PE hijacking AI-generated TTS from a script

I have been running this script with ChatGPT for months, since before I bought a Voice PE. It worked properly up until HA 2025.2 (I just updated this past weekend to 2025.2.1). Now, all of a sudden, I cannot use the script because the PE (in the living room) hijacks the TTS output meant for the bedroom Sonos speaker. The only speech I get from the bedroom speaker is “Done”, which probably relates to part of my ChatGPT assistant prompt (“if you complete a task successfully, just reply with ‘done’”).

Here is the relevant part of the script that gets hijacked:

  - action: conversation.process
    metadata: {}
    data:
      agent_id: conversation.chatgpt
      text: >-
        "You are waking Jim up, greet him in a funny way. The time and date is
        {{ now().strftime('%A %B %d') }}, tell me
        the date and time. What is today's forecast? Also tell me that today is
        {{ state_attr( 'calendar.dishes', 'message') }}, Tell me a funny fact
        about this day in history. Finish all this by telling me that I need to
        get out of bed in a funny way."
    response_variable: reply
  - action: tts.cloud_say
    metadata: {}
    data:
      cache: false
      entity_id: media_player.bedroom
      language: en-GB
      message: "{{ reply.response.speech.plain.speech }}"

After a bit of testing I am sure this is indeed a bug in HA 2025.2. It is easy to reproduce if you are using ChatGPT as your AI assistant with an HA Voice PE: paste the first part of the script above (the conversation.process action) into Developer Tools → Actions, click “Perform Action”, and the response plays on the PE automatically. This happens even without the second part of the script that targets the proper speaker.
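
For reference, this is roughly what I paste into Developer Tools → Actions in YAML mode to trigger it. The agent_id is whatever your OpenAI/ChatGPT conversation entity is called (conversation.chatgpt in my setup), and any short prompt will do:

  action: conversation.process
  data:
    agent_id: conversation.chatgpt
    # any short prompt reproduces it; the spoken response still plays on the PE
    text: "Say good morning in one sentence."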

The only way I could get this to work normally again was to add a second conversation agent (Google Generative AI in my case) and change the script to use that one instead. I don’t like the responses from Google; I’d like to be able to run the script the way it has worked for at least three months, but it seems HA will not let me use ChatGPT as both the AI assistant AND within a script/automation.

What if you have two identical ChatGPT pipelines? Set it up a second time and call the second one…

Yup, this works. Simple solution that my brain could not even contemplate.
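
For anyone else hitting this, the only change needed in the script is the agent_id on the conversation.process step. conversation.chatgpt_2 below is just a placeholder for whatever entity ID the second OpenAI Conversation entry gets; the prompt is abbreviated here, and the tts.cloud_say step stays exactly as posted above:

  - action: conversation.process
    metadata: {}
    data:
      # the second OpenAI Conversation entry, not the one assigned to the Voice PE
      agent_id: conversation.chatgpt_2
      text: >-
        "You are waking Jim up, greet him in a funny way. ..."
    response_variable: reply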


Cool! I’ve been in the multi-agent space for the last few days; I have three copies of my default LLM set up with slightly different prompts.

In the future I expect we will have a ‘default’ LLM for triage and customer response that calls off to tools and agents to work on its behalf, so I’ve been building towards that.

FWIW …
I have two Assist assistants/instances set up: one is the default “Home Assistant” assistant that I use just for local-only testing, and the other is an assistant I also named ChatGPT that uses OpenAI/ChatGPT as the conversation agent.

Running the action above with the Voice PE attached to either assistant instance didn’t cause any output on the Voice PE.