Overview of the Issue:
When we include a function call (such as one defined in the extended_openai_conversation integration) in the natural language processing (NLP) workflow, we observe a processing delay of around 5 seconds. Without the function call, the response time is roughly 0.9 seconds.
Is the function calling async or sync? How can we optimise this?
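For context, one plausible contributor to the extra ~4 seconds is that OpenAI-style function calling involves two model round trips: the first completion returns a tool call, the integration executes it locally, and a second completion turns the tool result into the final answer. Below is a minimal timing sketch, not the integration's actual code, that measures each leg using the async OpenAI client (an async client keeps the request from blocking Home Assistant's event loop). The model name, the `get_state` tool schema, and the stubbed tool result are assumptions for illustration.

```python
# Minimal sketch (not extended_openai_conversation's actual code):
# time both legs of an OpenAI-style function-calling round trip.
import asyncio
import json
import time

from openai import AsyncOpenAI  # pip install openai

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tool schema, standing in for a function the
# integration exposes to the model.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_state",
        "description": "Return the state of a Home Assistant entity.",
        "parameters": {
            "type": "object",
            "properties": {"entity_id": {"type": "string"}},
            "required": ["entity_id"],
        },
    },
}]

async def main() -> None:
    messages = [{"role": "user", "content": "Is the kitchen light on?"}]

    # Leg 1: the model decides whether (and how) to call the tool.
    t0 = time.perf_counter()
    first = await client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute your own
        messages=messages,
        tools=TOOLS,
    )
    t1 = time.perf_counter()
    print(f"leg 1 (tool selection): {t1 - t0:.2f}s")

    tool_calls = first.choices[0].message.tool_calls
    if not tool_calls:
        print("no tool call requested; done.")
        return

    # Execute the tool locally (stubbed here) and feed the result back.
    messages.append(first.choices[0].message)
    for call in tool_calls:
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps({"entity_id": "light.kitchen",
                                   "state": "on"}),
        })

    # Leg 2: the model turns the tool result into the final answer.
    second = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    t2 = time.perf_counter()
    print(f"leg 2 (final answer):   {t2 - t1:.2f}s")
    print(second.choices[0].message.content)

asyncio.run(main())
```

If the two legs each take a couple of seconds, the delay is inherent to the second round trip rather than to sync-vs-async handling, and the usual levers are a faster model, a shorter prompt/tool schema, or skipping the second completion when the tool result alone is enough to answer.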