ChatGPT in Assist

Hello, I’ve added ChatGPT to Assist, and it’s working well for the most part! I’ve given her a personality through the Assist configuration, and that works nicely. However, I’m facing a challenge: her responses often get truncated at around 704 characters. Since her personality is a bit chatty (which I like), we frequently hit that truncation point.

With her help, I sent a direct API call to OpenAI, and I found that the truncation isn’t happening on their end; it seems to be an issue within Home Assistant. In the chatgpt_context settings, there’s a pre-set maximum length of 2000 with no specified units (it could be characters, tokens, or pages; who knows?). However, we aren’t coming anywhere near that limit. When I attempt to change this value, it tells me the maximum is 256, which is rather unhelpful. Regardless, it doesn’t seem to be using that value anyway.
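For what it’s worth, a cutoff around 704 characters is much more consistent with a token limit than a character limit. A rough back-of-the-envelope sketch (both numbers below are assumptions for illustration, not documented Home Assistant values; English text averages somewhere around 4–5 characters per GPT token):

```python
# Rough estimate: why a response-token cap could show up as a ~700-character cutoff.
# Both figures are assumptions for illustration, not documented values.
AVG_CHARS_PER_TOKEN = 4.7   # ballpark for English text with GPT tokenizers
ASSUMED_TOKEN_CAP = 150     # a plausible default response-token limit

estimated_char_cutoff = ASSUMED_TOKEN_CAP * AVG_CHARS_PER_TOKEN
print(f"~{estimated_char_cutoff:.0f} characters")
```

If a figure in that neighborhood falls out, it suggests the truncation is a max-tokens setting somewhere, not a character-length one.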

Does anyone have an idea on how to stop the truncation altogether, or at least make it significantly larger? Thanks!

I hope you don’t mind if I ask, where is this setting?

If you are using the OpenAI Integration, have you tried changing the “Maximum tokens to receive in response”?

Don’t mind at all … in the UI, under “Settings”, then “Devices and Services”, and “Helpers”, there’s a “chatgpt_context” helper. I’m not at all sure what it does, but if you click the three dots to the right, then the gear icon for settings, you’ll find the setting I’m referring to. It doesn’t seem to do anything, though.

Yes, I’m using the OpenAI integration, and what you suggest could be exactly what I’m looking for, but I can’t find that setting on the HA side. There has been virtually no truncation coming from OpenAI since the 4.5 upgrade a couple of days ago; I tested it thoroughly.

Thanks for the response.

Umm … that “Helpers” page lists the helpers that the user has configured, right?
I have the openAI integration set up on a couple of systems, and I don’t have a chatgpt_context helper.

In your case, what is the “Type” of the chatgpt_context helper?

If I configured it, it was inadvertent. The type is “Input text”; I hadn’t even noticed that before, and now I realize it probably has nothing to do with the received texts.

Can you tell me how to find the “Maximum tokens” setting that you mentioned?

Thanks

Yes, go to the OpenAI Integration and open it, hit “Configure”, untick “Recommended model settings”, then hit “Submit” (to see additional configs). It will then show various configuration options, including the max tokens setting.
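For anyone who wants to double-check outside Home Assistant (as the original poster did), a minimal sketch of a direct Chat Completions request. The endpoint and field names are the standard OpenAI API ones; the model name and token value are placeholders, and whether this is exactly the field the integration sets is my assumption:

```python
import json

# Illustrative payload for OpenAI's Chat Completions API. "max_tokens" is the
# request field that caps the response length; the integration's
# "Maximum tokens to receive in response" setting presumably maps onto it.
# Model name and token count below are placeholders, not recommendations.
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Tell me a long story."}],
    "max_tokens": 1024,  # raise this and the truncation point should move with it
}

# Actually sending it requires an API key; with the requests library it would be:
# import requests
# resp = requests.post(
#     "https://api.openai.com/v1/chat/completions",
#     headers={"Authorization": "Bearer YOUR_API_KEY"},
#     json=payload,
# )
print(json.dumps(payload, indent=2))
```

If responses truncate at the same point regardless of `max_tokens` here, the limit is on the server side; if they grow with it, the limit is in your client configuration.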


OMG, that did it … thank you so much. I was hesitant to ‘untick’ that setting because I thought it would force me into the “Templating” mode. I plan to use templating in the future, but I wasn’t ready to jump in.

Again, thank you … that was very helpful.