About making inexpensive models smarter by providing tools and context. (local models, gpt-5-mini, gpt-4.1-mini, gpt-4o-mini ...)

Ok, something else I noticed that fits perfectly in this thread, but that I haven't fixed so far:

The LLM fails surprisingly often at listing the state of multiple entities.
For example: "Which lights in the living room are turned on?"
If I tell it (after a wrong answer) that this isn't true and that it should take a close look at all devices and their states, it can provide the correct answer most of the time.
But on the first try it fails far too often.

This is really something I didn't expect, as the entity data, their states, and the rooms are shared with the LLM by Home Assistant.
I have A LOT of entities in my Home Assistant installation, and we have a lot of ambient lights in our rooms that are automatically activated when one of the main lights in the room is turned on.

But still, this seems like a simple and common task for the LLM.

No idea how the data is provided to the assistant, but is it in such a bad shape that we really need a tool to fetch and filter entities (by type, room, status, maybe tags, …) for easier access? :face_with_monocle:

The same happens with open windows and other cases where it has to find the correct entities in a large list and then filter them by status / room.
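To make the idea concrete, here is a minimal sketch of what such a filter tool could look like. This is purely hypothetical: the entity list is hard-coded, the field names (`entity_id`, `area`, `state`) just mirror the usual Home Assistant conventions, and a real version would pull live data from Home Assistant's `GET /api/states` REST endpoint instead.

```python
def filter_entities(entities, domain=None, area=None, state=None):
    """Return entities matching every given criterion (None = don't filter)."""
    result = []
    for e in entities:
        # Domain is the prefix of the entity_id, e.g. "light" in "light.kitchen_main".
        if domain and not e["entity_id"].startswith(domain + "."):
            continue
        if area and e.get("area") != area:
            continue
        if state and e["state"] != state:
            continue
        result.append(e)
    return result

# Hypothetical snapshot of a few entities (a real tool would fetch these live).
entities = [
    {"entity_id": "light.living_room_main", "area": "living_room", "state": "on"},
    {"entity_id": "light.living_room_ambient", "area": "living_room", "state": "on"},
    {"entity_id": "light.kitchen_main", "area": "kitchen", "state": "on"},
    {"entity_id": "binary_sensor.bedroom_window", "area": "bedroom", "state": "off"},
]

# "Which lights in the living room are turned on?"
on_lights = filter_entities(entities, domain="light", area="living_room", state="on")
print([e["entity_id"] for e in on_lights])
```

The point of exposing this as a tool is that the LLM no longer has to scan the whole entity dump itself; it just passes the filter criteria and gets back a short, already-correct list to phrase an answer from.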

It often doesn't just miss some of the devices, it even mixes in devices from other rooms.
So my feeling is really that it gets confused and could use some help…

edit:
I also tested it with better/more expensive models than my default gpt-4o-mini.
They sometimes answer more accurately, but they still fail from time to time.