OK. Moving on.
Changed the OpenAI model to gpt-4o-mini, which has a higher TPM (tokens-per-minute) limit. (Apparently a token is about four characters of English text.) Still using the default prompt.
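As a quick sanity check on whether a request is likely to blow the TPM budget, the four-characters-per-token rule of thumb can be turned into a rough estimator. This is just an approximation; for exact counts OpenAI's tiktoken library would be the thing to use.

```python
def estimate_tokens(text: str) -> int:
    """Approximate token count for English text: roughly one token
    per four characters. A rule-of-thumb estimate, not an exact count."""
    return max(1, round(len(text) / 4))

# Example: estimate the cost of one of the questions from above.
prompt = "Identify redundant input_boolean entities"
print(estimate_tokens(prompt))  # 41 characters -> ~10 tokens
```

Summing this over the system prompt plus the exposed entity list gives a rough idea of how close a request is to the per-minute limit.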
This is much better, but there are still issues.
- “Identify redundant input_boolean entities” produces an incomplete list of entities (none redundant) and some general comments on consolidation. It offers to review usage and propose changes.
- “Review usage of input_boolean entities” produces a TPM error.
- “Identify input_boolean entities not used in automations” also produces a TPM error.
I’m assuming the errors are partly down to the lack of a prompt, meaning I’m getting a large amount of LLM waffle.
Observations about usage:
The fact that the conversation is cleared when you navigate away from the web UI is becoming increasingly annoying. I often want to cut and paste things like entity IDs from elsewhere. You can work around it in a browser, of course, by keeping two windows open.
When there is an error, it would be useful to be able to edit or resend the question. At the moment the input field is cleared after each send.
And a question…
I have a number of tools set up for a voice AI along the lines described here:
If I include them in the prompt, will they work? They usually involve running a script, and the description fields in the script may contain extensions to the prompt.
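For context, a tool of this kind might look something like the following. This is a hypothetical sketch (the script name, entities, and wording are invented); the point is that the script's `description` field is what gets surfaced to the LLM as the tool description, so prompt-like instructions can end up living there:

```yaml
# Hypothetical Home Assistant script exposed to the voice assistant.
# The description field doubles as the tool description the LLM sees.
script:
  goodnight:
    alias: Goodnight routine
    description: >
      Turn off all the lights and lock the front door. Use this when
      the user says they are going to bed or asks for "goodnight".
    sequence:
      - service: light.turn_off
        target:
          entity_id: all
      - service: lock.lock
        target:
          entity_id: lock.front_door
```

Whether the custom conversation agent picks these descriptions up the way the built-in Assist pipeline does is exactly the question.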