Why is the response from LLM Vision shortened?

You probably need to increase the max_tokens parameter in LLM Vision; it limits the number of tokens generated, so responses get cut off when it is set too low.
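If you call the analyzer from a script or automation, the token limit is set in the action's data. Here is a minimal sketch; the provider ID, camera entity, and value of 300 are placeholders for illustration, so check the integration's documentation for the exact schema of your installed version:

```yaml
# Hedged example of an LLM Vision image analysis call with a higher token limit.
# Field values below are assumptions for illustration only.
action: llmvision.image_analyzer
data:
  provider: your_provider_config_entry   # assumption: ID of your configured provider
  message: "Describe what is happening in this image."
  image_entity:
    - camera.front_door                  # assumption: example camera entity
  max_tokens: 300                        # raise this if responses are being cut short
```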
There is also a dedicated community for the integration if you need more help: