Conversation.process service call: multiple Generative AI prompts

I would like to specify a different LLM prompt and temperature for each conversation.process call made to the PaLM API conversation agent provided by the Google Generative AI Conversation integration.

Right now it appears that only a single prompt and temperature can be configured, and they are set in the integration's options, so they apply to every conversation.process call. I would like to be able to set them per conversation.process service call instead.
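For reference, this is roughly what a call looks like today; the prompt and temperature come from the integration's configuration and cannot be passed in the call (the agent_id value below is only a placeholder):

```yaml
# Current behavior (sketch): only fields like text, language, and agent_id
# can be passed; the prompt and temperature are fixed in the integration's options.
service: conversation.process
data:
  agent_id: abc123  # placeholder for the Google Generative AI agent's id
  text: "Announce that the front door has been left open."
```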

This would allow the prompt to match the context, with different prompts for different scripts and automations. For example, I would like to use one prompt to stylize routine TTS smart home announcements in the voice of Jarvis, and another prompt to deliver game day scores, summaries, and recaps at the conclusion of a sporting event.
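A minimal sketch of what this might look like, assuming hypothetical prompt and temperature fields on the service call (neither field exists today, and the prompt text and agent_id are purely illustrative):

```yaml
# Hypothetical: prompt and temperature are NOT currently supported fields of
# conversation.process; this only illustrates the requested per-call override.
service: conversation.process
data:
  agent_id: abc123  # placeholder for the Google Generative AI agent's id
  text: "The game just ended 112-104."
  prompt: >-
    You are an enthusiastic sports announcer. Recap the game with the final
    score and the key moments, in two or three sentences.
  temperature: 0.9
```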