Response by LLM Vision is shortened, why?

I am using the LLM Vision integration with OpenAI GPT-4o-mini to analyze a camera picture. This works fine, but the response message in the answer variable seems to be shortened; the sentence is not complete. I am sending the result to my iPhone and via mail, and it is the same in both. Here is an example:

(Translated from German:) The image shows a person wearing black clothing with yellow accents and a cap. However, it is not clearly recognizable whether it is a parcel courier, since no specific features or

Does anyone know why this is always shortened? There is no option to set the length.

Hi Simon Fuß,

There may be a character limit. Does it seem like you get only 255 characters?

Notepad++ says 228 characters. Maybe there's a limit for app notifications, but there shouldn't be one for mail.

You probably need to increase the max_tokens parameter in LLM Vision, which limits the number of tokens the model is allowed to generate.
There is also a dedicated community for the integration if you need more help.
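For anyone wondering why the cut-off lands mid-sentence: the model simply stops emitting tokens once the max_tokens budget is spent, with no regard for sentence boundaries. A minimal sketch of that behavior (using whitespace-split words as a stand-in for real BPE tokens, which is a simplification):

```python
def generate_with_cap(full_response: str, max_tokens: int) -> str:
    """Simulate a token-capped completion: keep only the first
    max_tokens whitespace-delimited tokens of the full response."""
    tokens = full_response.split()
    return " ".join(tokens[:max_tokens])

full = ("The image shows a person wearing black clothing with yellow accents "
        "and a cap. However, it is not clearly recognizable whether it is a "
        "parcel courier, since no specific features or logos are visible.")

# With a small budget the text ends abruptly, just like the notification
# in the question; raising the cap lets the whole answer through.
print(generate_with_cap(full, max_tokens=20))
```

Raising max_tokens in the integration's model settings gives the model a larger budget, so complete answers come through.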

Yes, that's it, that easy. Thank you!
