Can an LLM Tool/Script trigger a call to another tool/script?

I’ve been scratching my head on this and I’m sort of falling on the side of “no”, at least with the built-in conversation agents, but I figured I’d ask in case I’m missing something.

The issue I have is one of efficiency when a script exposed to the LLM does multiple things, including providing responses back to the caller. I have a dozen or so of these, and the overall process is clunkier than I’d like.

To give a concrete example, one of the scripts is a “good night” routine, triggered via the voice assistant as an LLM script execution when going to bed. That script does a bunch of things – turning off lights, adjusting climate control, etc. It also gives a weather forecast for the next day and any reminders for tomorrow. Those parts use templates to build the actual text read to the user, which isn’t the same behavior as asking the LLM for a forecast directly – in that case a script returns the forecast JSON and the LLM interprets it in a more naturalistic way than a deterministic template does. The end result is that saying “good night” often returns a different weather forecast than asking “what’s the weather tomorrow?”.
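For reference, the standalone forecast tool is just a script that hands the raw daily forecast back for the LLM to phrase itself – a minimal sketch, assuming a weather.home entity (the script and entity names are placeholders):

```yaml
script:
  get_tomorrow_forecast:
    alias: "Get tomorrow's forecast"
    description: "Returns the raw daily forecast so the assistant can phrase it naturally"
    sequence:
      # Pull the daily forecast from the (placeholder) weather.home entity
      - action: weather.get_forecasts
        target:
          entity_id: weather.home
        data:
          type: daily
        response_variable: forecast_data
      # Hand the raw forecast data back to the LLM as the tool response
      - stop: "Forecast retrieved"
        response_variable: forecast_data
```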

I can do a loopback call via conversation.process to ask the LLM for tomorrow’s weather, but that then adds an extra round-trip and that conversation isn’t in the context of the current one, so the response tends to not “fit” as smoothly.
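For what it’s worth, the loopback looks roughly like this inside the good-night script – a sketch, with a placeholder agent entity id:

```yaml
# Inside the good-night script's sequence (agent entity id is a placeholder)
- action: conversation.process
  data:
    text: "What's the weather tomorrow?"
    agent_id: conversation.chatgpt
  response_variable: weather_reply
# The spoken text is normally under response.speech.plain.speech
- variables:
    forecast_text: "{{ weather_reply.response.speech.plain.speech }}"
```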

So, I’m trying to figure out if there’s a mechanism in the response sent back to the LLM to trigger another tool call – basically having a tool instruct the LLM to call another tool as part of its response, like a function call in the programming sense. Or, alternately, a way to at least access the conversation id that triggered a script execution so I can provide it as a parameter to conversation.process, so at least it’ll be in-context.
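For illustration, conversation.process does accept a conversation_id field, so if the triggering id were obtainable it could be wired straight through as a script parameter – a sketch with made-up script and field names:

```yaml
script:
  ask_in_context:
    alias: "Ask the agent within an existing conversation"
    fields:
      question:
        description: "Question to forward to the conversation agent"
        required: true
      conversation_id:
        description: "Conversation to continue, if known"
        required: true
    sequence:
      - action: conversation.process
        data:
          text: "{{ question }}"
          conversation_id: "{{ conversation_id }}"
        response_variable: reply
      # Return the agent's reply as this script's response
      - stop: "Done"
        response_variable: reply
```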

The same issue crops up any time I have an LLM script that aggregates functions I sometimes also want to trigger by themselves. Given how common that must be, I can’t be the only one who has tried to figure out a more efficient way to do this.

You’re thinking of it incorrectly.

You have to give the LLM a story to tell with tools.

Wake > (grounding condition) > ask > decision > action.

In action it needs a clear path of things to do.

A one-shot model (pre GPT-4.1-mini era, etc.) gets one pass through.

So after the ask it plans the work and can sequence tools to do it, but it cannot stop and it cannot go back. Once committed, it’s done. It can handle:

Turn on these lights and those lights.

It may not be able to handle:

Turn on these and those, but don’t turn on the others if (complex results).

Enter a reasoner: it can loop back, make decisions based on the results, and do better pre-planning.

But that fundamentally does not change the workflow above; it just gets more passes.

So then we need to add a tool to refresh status and reground for the next ask (this is probably your missing piece). It has to refresh state intentionally and then act. (My install has an index tool and an entity reader for these purposes.)
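In Home Assistant terms that can be as simple as a script tool that snapshots the relevant entities and returns them as its response – a rough sketch, with placeholder entity ids:

```yaml
script:
  refresh_house_state:
    alias: "Refresh house state"
    description: "Re-grounds the assistant by returning the current state of key entities"
    sequence:
      # Build a snapshot of whatever you want the model re-grounded on
      - variables:
          snapshot:
            lights_on: "{{ states.light | selectattr('state', 'eq', 'on') | map(attribute='entity_id') | list }}"
            thermostat_mode: "{{ states('climate.home') }}"                 # placeholder entity
            outside_temp: "{{ state_attr('weather.home', 'temperature') }}" # placeholder entity
      # Return the snapshot to the LLM as the tool response
      - stop: "State refreshed"
        response_variable: snapshot
```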

You CAN write tools that reference and call other tools (in fact I do, and I’m building an entire event-bus comms system between tools).
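At the script level that just means one tool-script calling another directly and capturing its response – a minimal sketch (script and area names are placeholders):

```yaml
script:
  good_night:
    alias: "Good night"
    sequence:
      - action: light.turn_off
        target:
          area_id: living_room   # placeholder area
      # Call another exposed tool/script directly; a direct call waits
      # for it to finish and lets you capture its response
      - action: script.get_tomorrow_forecast
        response_variable: forecast
      # Pass that response back to the LLM from this tool
      - stop: "Good night routine finished"
        response_variable: forecast
```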

But you need a well-defined way for the LLM to interpret the results and operate.

So the short answer is kinda?

It can do it, but probably not the way you’re thinking about it. (BTW, this problem is the fundamental problem I’m solving with Friday, and the next six or so posts I’ve planned will probably be very interesting to you because of this ^)

As soon as I finish editing I’m describing her storage and why it was designed to solve THIS problem.

So my first question is: what model is driving the bus? It VERY MUCH matters.