Add a base URL option to the OpenAI Conversation integration for custom LLM support

I have Ollama running on my server, and it exposes OpenAI-compatible API endpoints. By simply changing the client to AsyncOpenAI(base_url='http://localhost:11434/v1', api_key='whatever'), the integration could use the Ollama models running on my local server. Two benefits come with this approach: a) my sensitive home data never leaves my local network; b) there is no API cost. Could you add an option so that when the OpenAI Conversation integration initializes, I can define base_url via an ENV variable?
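A minimal sketch of the requested behavior, assuming an environment variable named `OPENAI_BASE_URL` (the variable name and the `resolve_base_url` helper are illustrative, not part of any existing integration): read the endpoint from the environment and fall back to the official OpenAI API when it is unset.

```python
import os

def resolve_base_url(env_var: str = "OPENAI_BASE_URL",
                     default: str = "https://api.openai.com/v1") -> str:
    """Return the OpenAI-compatible endpoint to use.

    Reads the given environment variable; falls back to the
    official OpenAI API URL when it is not set.
    """
    return os.environ.get(env_var, default)

# The integration could then pass the result straight to the client, e.g.:
#   client = AsyncOpenAI(base_url=resolve_base_url(), api_key=api_key)
# For a local Ollama server you would set:
#   OPENAI_BASE_URL=http://localhost:11434/v1
```

With the variable unset the client would behave exactly as today; setting it redirects all requests to the local server.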

There is a built-in Ollama integration: Ollama - Home Assistant

Yes, but that integration can't control my devices; the OpenAI integration can.

It will happen!