It would be great to be able to specify the URL for the OpenAI integration, or at least to choose the model that's used. I've integrated a voice assistant with OpenAI, and after only an hour or two of testing my bill is already a couple of dollars. I run models locally and would much prefer to use a local model. Tools like LM Studio let you run a bunch of different models and expose an API that is compatible with OpenAI's, so all that would be needed is the ability to specify the URL and model name (and omit the API key). It would also be nice to be able to see the prompts being sent, to help with debugging.
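To illustrate what "OpenAI-compatible" means here: a local server like LM Studio accepts the same chat-completions request shape as OpenAI, just at a different base URL and without a real API key. Below is a minimal sketch (stdlib only) of building such a request. The base URL assumes LM Studio's default port of 1234, and the model name is a placeholder that depends on what you have loaded locally.

```python
import json
import urllib.request

# Assumptions: LM Studio's default local server port (1234) and a
# placeholder model name -- both would become config options in HA.
BASE_URL = "http://localhost:1234/v1"
MODEL = "local-model"  # placeholder; depends on the model loaded locally

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a local server."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        # No Authorization header: a local server doesn't need an API key.
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Turn on the kitchen lights")
print(req.full_url)
```

The only things that differ from a request to OpenAI proper are the host and the missing API key, which is why exposing just those two settings (plus the model name) would be enough.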
I dug through the integration code a bit, and it looks like it might be fairly easy to add some customization options: core/homeassistant/components/openai_conversation at 2024.6.2 · home-assistant/core · GitHub
I would be happy to help with this, though I've never contributed to HA before.