First of all: great work adding function calling to the Ollama integration in the 2024.8 release! I can see a lot of effort went into evaluating the different models, that's really great!
I am currently testing it by asking it to turn lights on and off, and for me only the first command I send succeeds; subsequent messages fail.
This seems to be independent of the prompt and of the max. number of history messages parameter (I set it to 0).
I have yet to inspect the Ollama logs in detail, but one entry caught my attention. This only appears on subsequent messages, not on the first command I send to the LLM.
```
level=DEBUG source=prompt.go:51 msg="truncating input messages which exceed context length" truncated=2
```
Could this have anything to do with the function calls not being triggered by the LLM?
Does anyone have a similar experience?
Aside from this message, there is no error in the logs regarding entities not found or similar. The LLM just responds that it cannot fulfill the request.
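For reference, this is roughly how I plan to check whether the context window is the culprit: sending a chat request with an explicitly enlarged `num_ctx` straight to the Ollama API, bypassing Home Assistant. This is just a sketch; it assumes the default endpoint on `localhost:11434` and a `llama3.1` model, so adjust both for your setup:

```python
import json

# Hypothetical test payload: roughly the chat request Home Assistant
# would send, but with the context window raised via the num_ctx option.
payload = {
    "model": "llama3.1",  # assumption: substitute the model you configured
    "messages": [
        {"role": "user", "content": "Turn off the kitchen lights."},
    ],
    "options": {"num_ctx": 8192},  # enlarge the context window for the test
    "stream": False,
}

body = json.dumps(payload)
print(body)

# To actually send it (requires a running Ollama server):
#   curl http://localhost:11434/api/chat -d "$(cat request.json)"
```

If the truncation message disappears from the logs with the larger `num_ctx` and subsequent tool calls start working, that would point at the context length rather than the function-calling logic itself.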