Using OpenAI, it takes about 20 seconds for VAPE to run a script. What am I doing wrong?

I have an HA script called “Let’s go to bed.” When I say, “Hey Jarvis, let’s go to bed,” it takes 20-30 seconds for it to figure out what I mean. How can I make this run faster?

We need to see the script. :grin:


The script itself is fine, as I’ve been using it for years. My issue is that OpenAI takes so long to find and execute it. Am I missing something?

How can we tell?


Sorry, Jack is correct.

We need to see everything you’re doing.

Maybe the script is generating a crap ton of context data.

First, we need to understand that you’re dealing with cloud round trips, and with an LLM, probably multiple of them.

So first: what LLM? What model?
What are you using for text-to-speech and speech-to-text? (Both also add to the chain.)
And of course, what is the script ACTUALLY doing?

Need ALL of it to help you.


I still fail to see how the actual script matters, as the LLM isn’t actually processing it, right? Either way, here’s the working script.

alias: Voice - Let's Go To Bed - Full LLM Script
sequence:
  - target:
      entity_id: input_button.lets_go_to_bed
    data: {}
    action: input_button.press
mode: single
icon: mdi:bed
description: Script to run at night, to set the house up to sleep.

And I’m using all the default/recommended settings for the OpenAI Conversation integration. As mentioned above, the command is, “Hey Jarvis, let’s go to bed.”

And one final thing: the debug output.

(It matters because it’s now entirely possible to

  1. send an output that instructs the LLM to do another thing, or
  2. send a crapton of return data to process; a sketch of the second case follows below.)
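A minimal sketch of that second case, assuming a hypothetical script that dumps a large entity list back to the conversation agent through the script stop / response_variable mechanism (the name and the template here are made up for illustration):

alias: Voice - Status Report - Hypothetical Example
sequence:
  - variables:
      report:
        # Dumping every entity_id in the house into the response is exactly
        # the kind of return data the LLM then has to chew through.
        all_entities: "{{ states | map(attribute='entity_id') | list }}"
  - stop: Report generated
    response_variable: report
mode: single

A script that returns a payload like that forces the model to reason over all of it before it can reply, which adds noticeable delay on top of the round trips.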

Neither of those should be happening (based on looking at the script, thanks). I suspect cloud round-trip lag, so we need the debug logs to find out what the relative times are.


Well, it seems tonight that it can’t even figure out what I want it to do. =/

Ok, that was the local handler. It failed processing locally but then didn’t appear to hand off for some reason…

While you’re troubleshooting the LLM, you can turn off local processing to bypass that.

Any idea what I might change to start troubleshooting?


Yeah, I’d probably turn off local processing first for now, to reduce the possible options (prevent it from trying to satisfy the request locally and isolate the issue to the LLM). Fire it again and see what the debug log says.

We’re looking at the timestamps. You should be able to see which step takes longest.