This is it!
Now it works, but not as well as Google… I was expecting it to continue the conversation all the time, not only after a question.
With the newer update, LLMs can detect questions and do this. All I did was change the AI prompt to end everything it says with a question mark. This triggers the VPE to listen for another response.
Then I realized it kept running in a loop after I was done. So I told it not to reply with a question mark at the end if I say “no,” “that’s it,” or “that’s all.” Game changer!
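For reference, the wording I added to the system prompt was roughly like this (paraphrased, adjust to taste):

```
When you still need information from the user, end your reply with a question mark so the conversation stays open.
If the user says "no", "that's it", or "that's all", do not end your reply with a question mark; just acknowledge and stop.
```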
I am new to HA. I just received my HA Voice PE and configured STT/TTS/LLM with “Extended OpenAI Conversation”. I don’t have the functionality discussed in this thread out of the box, and I don’t know what I need to configure. The “set timer” example doesn’t work: the assistant does ask me for how long, but the conversation stops at that point. When I restart it with the wake word and say “10 seconds”, it seems to be lacking context.
If you’re just starting, use the built-in OpenAI integration… There’s nothing Extended does that the built-in can’t anymore, except change the endpoint. It also uses a different tools interface, which may be complicating your issue.
So I strongly suggest trying the built-in first.
Then read Friday’s Party (there’s a catch-up post near the end, after you read the first post).
Does the built-in OpenAI integration support LM Studio (OpenAI-compatible API)? When I exchange messages in the text chat everything works fine: it keeps asking follow-up questions. However, with Nabu it stops after one iteration.
Ahhh, change endpoint… Sorry. But my point stands: it handles tools and things differently than the built-in. You’ll probably have to take your question to an Extended OpenAI thread, because most of us are having problems with our agents continuing the conversation every. Freaking. Time.
Edit: openllm conversation also apparently works with LM Studio…
Thanks, I guess I am missing something basic. Is the conversation continuation achieved by tool calling, or is it a property of the HA stack? Again, everything works fine in the Assist chat window; Nabu (voice conversation) is where it fails. Any pointers to what I can check in the logs to see why Nabu shuts off after it plays back the response instead of continuing to listen for the next user input? I went over the suggested posts; unfortunately, none of them talks about how to debug a voice conversation.
It’s part of Assist, but it also needs to be supported by the entire stack, thus my question. Peel it back to as few variables as possible.
The voice pipeline debug tool is under Settings > Voice assistants > your assistant > [three-dot menu] > Debug.
continue_conversation wasn’t set.
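For anyone else hitting this: continue_conversation is the flag the Assist pipeline checks to decide whether the satellite keeps listening after the reply, and it comes from the conversation agent’s result. Below is a minimal sketch of a custom agent setting it, assuming a recent HA core where ConversationResult accepts a continue_conversation argument (check the developer docs for your version); the class name and reply text are just placeholders:

```python
from homeassistant.components import conversation
from homeassistant.helpers import intent


class FollowUpAgent(conversation.AbstractConversationAgent):
    """Toy agent that always asks a follow-up question."""

    @property
    def supported_languages(self) -> list[str]:
        return ["en"]

    async def async_process(
        self, user_input: conversation.ConversationInput
    ) -> conversation.ConversationResult:
        # Build the spoken response that asks for the missing detail.
        response = intent.IntentResponse(language=user_input.language)
        response.async_set_speech("For how long should I set the timer?")

        # continue_conversation=True is what tells the voice pipeline
        # (and the Voice PE) to keep listening instead of closing the session.
        return conversation.ConversationResult(
            response=response,
            conversation_id=user_input.conversation_id,
            continue_conversation=True,  # assumed available in recent HA releases
        )
```

With the built-in OpenAI integration you don’t write this yourself; the agent sets the flag when the model’s reply looks like a question, which, as far as I can tell, is why the question-mark trick above works.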
Even after switching to the built-in OpenAI integration it didn’t work out of the box. The conversation context persisted, but I had to wake Nabu each time. ChatGPT suggested adding this to the prompt:
If required information is missing (e.g., timer duration), do not call any tools.
Reply with only a short question ending with “?” (no extra sentences).
Indicate that a follow-up from the user is expected.
Now it works: the continue_conversation flag is being set to true. I have a working reference to build on. Thank you.
