This is it!
Now it works, but not as well as Google's: I was expecting it to continue the conversation every time, not only after the assistant asks a question.
With the newer update, LLMs can detect questions and handle this. All I did was change the AI prompt so it ends everything it says with a question mark, which triggers the VPE to listen for another response.
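A rough sketch of that prompt addition (my wording here is approximate, not the exact text):

```
At the end of every reply, ask a short follow-up question so the
conversation keeps going, e.g. "Anything else I can help with?"
```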
Then I realized it kept running in a loop after I was done, so I told it not to end its reply with a question mark if I say "no," "that's it," or "that's all." Game changer!
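For anyone curious, the behavior boils down to something like this (a hypothetical sketch of the logic, the function names are made up, not the VPE's actual code):

```python
# Phrases that signal the conversation is over, per the prompt rule above.
STOP_PHRASES = {"no", "that's it", "that's all"}

def is_stop_phrase(user_reply: str) -> bool:
    """True when the user signals they're done (ignoring case and punctuation)."""
    return user_reply.strip().lower().rstrip(".!?") in STOP_PHRASES

def should_keep_listening(assistant_reply: str) -> bool:
    """The device reopens the mic only when the reply ends with a question mark."""
    return assistant_reply.strip().endswith("?")
```

So as long as the LLM keeps asking a follow-up question, the mic reopens; once a stop phrase makes it drop the question mark, the loop ends.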