I’ve been scratching my head on this and I’m sort of falling on the side of “no”, at least with the built-in conversation agents, but I figured I’d ask in case I’m missing something.
The issue I have is one of efficiency when a script exposed to the LLM is doing multiple things, including providing responses back to the caller. I have a dozen or so of these scripts, and the way those responses get produced is clunkier than I’d like.
To give a concrete example, one of the scripts is a “good night” routine, triggered through the voice assistant’s LLM when I go to bed. That script does a bunch of things: turning off lights, adjusting climate control, etc. It also gives a weather forecast for the next day and any reminders for the next day. Those responses use templates to build the actual text read back to the user, which isn’t the same behavior as asking the LLM for a weather forecast directly, where a script returns the forecast JSON and the LLM interprets it in a more naturalistic way than a deterministic template can. The end result is that saying “good night” often returns a different weather forecast than asking “what’s the weather tomorrow?”.
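For reference, here’s a trimmed sketch of just the forecast piece of that script (entity IDs are placeholders and the real script does a lot more, but this is the pattern):

```yaml
# Trimmed sketch of the forecast portion of the "good night" script.
# weather.home is a placeholder entity; the real script also handles
# lights, climate, reminders, etc.
sequence:
  - service: weather.get_forecasts
    target:
      entity_id: weather.home
    data:
      type: daily
    response_variable: forecast_data
  - variables:
      summary:
        forecast: >-
          {% set f = forecast_data['weather.home'].forecast[0] %}
          Tomorrow will be {{ f.condition }} with a high of {{ f.temperature }}.
  - stop: "Good night routine finished"
    response_variable: summary
```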
I can do a loopback call via conversation.process to ask the LLM for tomorrow’s weather, but that adds an extra round-trip, and the loopback conversation isn’t in the context of the current one, so the response tends not to “fit” as smoothly.
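Roughly what that loopback looks like today (the agent_id is a placeholder for whatever LLM-backed conversation agent you’re using):

```yaml
# Loopback: ask the LLM agent for tomorrow's weather from inside the script.
# conversation.my_llm_agent is a placeholder for the actual LLM agent entity.
- service: conversation.process
  data:
    agent_id: conversation.my_llm_agent
    text: "What's the weather going to be like tomorrow?"
    # conversation_id: ???   # this is the part I can't get at from inside the script
  response_variable: llm_reply
- variables:
    summary:
      forecast: "{{ llm_reply.response.speech.plain.speech }}"
```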
So, I’m trying to figure out if there’s a mechanism in the response sent back to the LLM to trigger another tool call: basically having a tool instruct the LLM to call another tool as part of its response, like a function call in the programming sense. Or, alternatively, a way to at least access the conversation ID that triggered the script execution, so I can pass it as a parameter to conversation.process and keep the loopback call in-context.
The same issue crops up any time I have an LLM script that aggregates functions I sometimes want to trigger on their own. Given how common that must be, I can’t be the only one who has tried to figure out a more efficient way to do this.