Different Assistant answers with Voice vs text

I just got my Voice PE unit. Really impressive little box, and the setup process was slick.

I’m struggling with Assist results though, hoping someone can shed some light on this.

I have Assist working with an OpenAI conversation agent via the Extended OpenAI Conversation integration. I have this agent successfully using Google searches to answer questions (weather forecast, sports scores, etc…). That works well via the Assist window in the HA interface, using text queries.

I have the Voice PE set up to use the same assistant (I only have one configured) with the OpenAI conversation agent. But if I type a question into the web interface and then speak the same question to the Voice PE, I get different results. The text interface answers correctly; the Voice PE says the search results didn’t include the information.

I can’t figure out why the results would be different to the same question just based on how it’s submitted. Does anyone have any suggestions for me?

Setup:
1 Assistant configured in HA – Conversation agent: OpenAI via the Extended OpenAI Conversation integration

  • Speech to text: Faster-whisper
  • Text to speech: Piper

Extended Open AI Conversation:

  • OpenAI - gpt-4o-mini model
  • Configured for Google search via the example on the integration’s GitHub page
  • Use tools enabled

That is due to a parameter that is set on the LLM model. I can’t recall the name of the parameter because I don’t have access to my system right now. What it does is add some randomness to the answers, to make the model feel more like a real person. If you set this parameter close to 0 it will give the same answer each time. It’s kind of like a seed number for a randomizer.

I know for sure that if you run the LLM model locally you have control over these parameters (I do this when running them in LM Studio).
Not sure if OpenAI allows this.
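For anyone curious why that parameter (temperature) changes the answers: it rescales the model’s token probabilities before sampling. Here is a minimal conceptual sketch in plain Python — not the integration’s actual code — showing why a temperature near 0 makes the output repeatable while a higher temperature lets the sampler wander:

```python
import math
import random


def sample_with_temperature(logits, temperature, rng=None):
    """Pick a token index from raw scores (logits), scaled by temperature.

    As temperature approaches 0, the softmax distribution collapses
    onto the highest-scoring token, so the choice becomes deterministic.
    Higher temperatures flatten the distribution and add randomness.
    """
    if rng is None:
        rng = random.Random()
    if temperature <= 1e-6:
        # Effectively greedy decoding: always the most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])

    # Softmax over temperature-scaled logits (max-subtracted for stability).
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Draw one index according to the resulting distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1


logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens

# Temperature ~0: the same token wins every single time.
print({sample_with_temperature(logits, 0.0) for _ in range(100)})
```

This is only an illustration of the sampling math; whether two requests actually land on identical tokens also depends on what the API provider exposes and how the conversation context differs between the voice and text pipelines.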

Thank you! I set Top P to 1 and Temperature to 0.1 for the OpenAI agent. This seems to have fixed the issue.