About making inexpensive models smarter by providing tools and context. (local models, gpt-5-mini, gpt-4.1-mini, gpt-4o-mini ...)

A little tip from my journey so far on how to live-debug which tools your AI executes:

In my scripts I add a `logbook.log` action as the first entry under `sequence`.
It writes a tool identifier and the parameters the AI used to the Logbook.

sequence:
  - action: logbook.log
    data:
      name: "LLM ENTITY INDEX: "
      message: "{{ operation, location, tags, details, state }}"
      entity_id: "{{ this.entity_id }}"
  - choose:
      - conditions:
      ...

A full script with the log section included can be seen here in my Music Search Script:
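For readers who can't follow the link, here is a minimal sketch of what such a script could look like with the log section in place. The script name, fields, and service call below are illustrative assumptions, not the author's actual Music Search Script:

```yaml
music_search:
  alias: "Music Search"
  description: "Searches and plays music. (Hypothetical example script.)"
  fields:
    operation:
      description: "What to do with the result, e.g. 'play' or 'queue'."
    location:
      description: "Area name of the media player to use."
  sequence:
    # Log the call first, so it shows up live in the Logbook
    # with the parameters the AI chose.
    - action: logbook.log
      data:
        name: "LLM ENTITY INDEX: "
        message: "{{ operation, location }}"
        entity_id: "{{ this.entity_id }}"
    - choose:
        - conditions: "{{ operation == 'play' }}"
          sequence:
            - action: media_player.media_play
              target:
                area_id: "{{ location }}"
```

The `message` template renders the field values as a tuple, which is compact enough to scan in the Logbook at a glance.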

Then add all the scripts to an area called “Scripts”.
Now open the Logbook and select the Scripts area to filter the listed log entries.

When you now start asking questions, the tool calls show up live with the parameters used.
This is much easier to follow than the Assist debug view.

This way it’s also easy to see if the AI calls a tool two or three times before it gets the parameters right.
(That is a sign you should further optimize the tool description or the parameter names to help the AI.)
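For example, a vague field description can often be tightened so the model picks the right value on the first try. The wording below is just an illustration of the idea:

```yaml
fields:
  location:
    # Too vague, invites retries: description: "The location."
    # Clearer, with the expected values spelled out:
    description: >-
      Area name of the media player to use, e.g. 'living_room'
      or 'kitchen'. Use 'all' to target every player.
```

Watching the Logbook tells you quickly whether a change like this actually reduced the retries.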
