Conversation truncated :: conversation.process

Hi everyone,

I’ve recently started experimenting with the OpenAI/Google Generative AI integration to generate TTS announcements for my morning routine—things like the weather, calendar events, and similar updates.

While it generally works well, I’ve noticed that the responses can sometimes be too long and end up getting truncated. Is there a way to adjust the token limit to avoid this?

For now, I’ve prompted it to limit the replies to 400 characters, which works most of the time, but I’d prefer to have longer responses if possible.
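For anyone using the same 400-character workaround: a prompt-based limit can still cut the model off mid-word, so it can help to trim the reply at a sentence boundary before handing it to TTS. Here's a minimal sketch of that idea (the helper name `trim_announcement` is hypothetical, not part of any integration):

```python
def trim_announcement(text: str, limit: int = 400) -> str:
    """Trim a TTS announcement to at most `limit` characters,
    cutting at a sentence boundary so speech doesn't stop mid-word."""
    if len(text) <= limit:
        return text
    clipped = text[:limit]
    # Prefer to end on a sentence terminator; fall back to the last space.
    for sep in (". ", "! ", "? "):
        idx = clipped.rfind(sep)
        if idx != -1:
            return clipped[: idx + 1]
    idx = clipped.rfind(" ")
    return clipped[:idx] if idx != -1 else clipped
```

This could run in a template or a small script step between the conversation response and the TTS call.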

Thanks in advance for any help!

P.S.: I hope I posted this in the right category. It’s the best match I could find.

Increase the number of response tokens in the integration's configuration.
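As a rough aid for picking a token limit: a common rule of thumb (an approximation, not an exact conversion) is that English text averages around 4 characters per token, so a 400-character reply is on the order of 100 tokens. A quick sketch of that estimate:

```python
# Rough heuristic (assumption): English averages ~4 characters per token.
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Estimate how many tokens a piece of text costs."""
    return max(1, round(len(text) / chars_per_token))

print(estimate_tokens("x" * 400))  # ~100 tokens for a 400-character reply
```

So if you want replies noticeably longer than the 400-character cap, set the response-token limit comfortably above 100.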


Thanks a lot! It works perfectly. I had missed that the customizable options only appear once you untick the ‘Recommended settings’ box and submit (my interface is in French, so the exact wording may differ slightly).

But just in case there are others like me who are wondering, I thought I’d leave this comment here.

Thanks again, it’s really appreciated!