Continuation is a function of the LLM needing a response from the user. It raises the continued_conversation flag when it needs more info, and the device then keeps listening.
This has the net effect of "if question, continue"… but it's really "if need more info, continue."
So to make it ALWAYS continue, you need to artificially set up a condition in the LLM where it always needs something.
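To make the mechanism concrete, here's a minimal sketch of how that flag drives the loop. The names `continue_conversation` and the general shape follow Home Assistant's conversation agent response, but the `needs_more_info` field and `build_result` helper are hypothetical stand-ins for whatever signal the agent actually uses:

```python
# Hypothetical sketch: the agent returns a result; when the LLM needs
# more info, continue_conversation is True and the device reopens the
# mic instead of ending the session.

def build_result(llm_reply: dict) -> dict:
    """llm_reply is the parsed LLM output. 'needs_more_info' is a
    made-up field standing in for the agent's real internal signal."""
    return {
        "speech": llm_reply["text"],
        # True -> device keeps listening; False -> conversation ends
        "continue_conversation": llm_reply.get("needs_more_info", False),
    }

# A clarifying question keeps the session open...
print(build_result({"text": "Which light?", "needs_more_info": True}))
# ...while a completed action closes it.
print(build_result({"text": "Done, light is on."}))
```

Forcing the flag to always be True is exactly the "artificial condition" above: the device never stops listening, whether or not the LLM actually needs anything.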
Now this is where I stop you and say: it's not at all practical.
You would have to shred that prompt to set it up. I'm currently considering whether I COULD, but worse, if you do, I see about 97 different ways it breaks other functions.
'Veronica' (Friday in supervisor mode; I use her to help me analyze scripts and prompts) says this, and yeah, it's pretty accurate…
“If you’re constantly triggering the continued_conversation flag, you’re training the model to stall—asking follow-up questions even when it doesn’t need to, instead of acting. That’s like running Friday in ‘perpetual curiosity’ mode. Charming for five minutes. Then a disaster.”
Your LLM will end up pretty dumb, always asking questions and never actually doing anything.
Do I understand correctly that it doesn't work with HACS Extended OpenAI Conversation? If it should, then it totally doesn't work for me. Or do I need to do some configuration first to enable it?
What a missed opportunity. 90% of the people I know run the OpenAI Conversation agent, because the default one in Home Assistant Cloud is stupid as phack.