I think it would be super useful if we could add an option to set a custom API_BASE_URL for the OpenAI integration in Home Assistant. This way, we could easily use different endpoints like Groq, Azure, and OpenRouter that follow the OpenAI API format without having to set up separate integrations for each one. This would essentially turn the OpenAI integration into a single integration for anything that speaks OpenAI's API format, not just OpenAI's own implementation of it.
Flexibility and Convenience: Many of us use multiple services that are compatible with the OpenAI API. Being able to set up new ones just by changing a URL would save a lot of hassle. For example, Azure OpenAI might have specific benefits that make it preferable in some cases vs. Groq or OpenRouter, etc.
Advanced Settings Already Exist: We already have advanced settings for things like Temperature in the OpenAI integration. Adding a base URL option would fit right in, giving us more control without making things too complicated. Changing it might break some things regarding Assist control, since not all models support the required function calling, but that's to be expected with advanced settings?
By having this ability we are not limited to just OpenAI, but can use any service that uses their API format.
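To illustrate the point with a minimal sketch (not the integration's actual code): every OpenAI-compatible provider exposes the same `/chat/completions` route under its own base URL, so the only thing that needs to change per provider is that URL. The endpoint URLs and model name below are examples I believe to be current, not an official or exhaustive list.

```python
import json
import urllib.request

# Example base URLs for OpenAI-compatible services (illustrative, not official).
ENDPOINTS = {
    "openai": "https://api.openai.com/v1",
    "groq": "https://api.groq.com/openai/v1",
    "openrouter": "https://openrouter.ai/api/v1",
}

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but don't send) an OpenAI-format chat completion request.

    The payload and route are identical for every provider; only base_url differs.
    """
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Same code path, different provider: only the base URL changes.
req = chat_request(ENDPOINTS["groq"], "sk-...", "llama3-70b-8192", "Hi")
```

This is why a single configurable BASE_URL covers so many providers at once; Azure OpenAI is the main outlier, since it also uses different auth headers and URL parameters.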
It seems a PR with this change was already rejected, but I disagree and think it would be useful, as many different services support the OpenAI API format. I think the change would be even more useful now.
@vcdx71 Yes, I've been using that as well! I was thinking that, to avoid two integrations that do essentially the same thing, the official one could add support for BASE_URL, simplifying the process for advanced users who would like that option!
I think the original intent of that Extended integration was to add support for controlling devices via cloud LLMs, but now that is part of the core integration? It would be nice if they incorporated other features from that extension as well!
Agree. If that was the case, with LiteLLM you could proxy in any LLM (local or cloud) and decouple the LLM integration from which model you choose to use.
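That decoupling idea can be sketched like this (assuming a LiteLLM proxy running locally; the URL and model aliases are illustrative, not a tested config): the integration always targets one OpenAI-format base URL, and the backing LLM is selected purely by the model name in the payload.

```python
# Assumed local LiteLLM proxy exposing an OpenAI-compatible endpoint;
# the URL and model aliases below are hypothetical examples.
PROXY_BASE_URL = "http://localhost:4000/v1"

def completion_payload(model: str, prompt: str) -> dict:
    """OpenAI-format chat payload; only `model` picks the actual backend."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

# Same integration code, different backends behind the proxy:
local_req = completion_payload("ollama/llama3", "Turn on the lights")
cloud_req = completion_payload("gpt-4o", "Turn on the lights")
```

The integration never needs to know whether the model is local or cloud-hosted; that routing lives entirely in the proxy's own configuration.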
On one hand I want this functionality. On the other hand there is nothing in advanced config at this stage that would entirely break the integration.
Changing the URL would.
But, I think that someone (me) should put a PR in on the extended version with a list of drop-in replacements preconfigured along with their available models.
Correct me if I'm wrong, but function calling is a requirement, right?
However, I feel like a list of specific models and endpoints would be difficult to maintain, and I also think it's fine that messing with advanced settings can break things? Adding the BASE_URL seems the simplest option to maintain and maximizes potential compatibility.
I would really love native support for other models - especially Groq. With the release of their new models, using ChatGPT just feels… wrong, lol. But seriously, the speed and privacy factors make Groq a better choice for me and since it’s a relatively simple fix I’d love to have this added.
Oh wow, also, their reasoning is incredibly flawed and dismissive. “It would make things more confusing” isn’t a good reason to not implement such a small change. It could even be hidden by an easy “advanced options” filter. Locking HA users into OpenAI unless they’re willing to mod using HACS is really frustrating.
I am all for open-source projects being UX-friendly (most aren't, and that's a huge limiter). But this isn't asking for a major redesign, and it isn't asking for something that would have to be complex for the user.
Sigh. I purchased HA Green but, honestly, I’m starting to regret it.
The HA devs need to rethink this one. It's easy enough to work around, but they shouldn't be making decisions that make things this difficult for their customers. I'm really curious what the motivation is here.
This is very disappointing. And the reasoning is bizarre - balloob calls it “an edge case” right after complaining that they’ve had this PR multiple times already… So, maybe it’s not an edge case after all?
And if the problem is (apparently) generalizing this integration after the fact, maybe someone should do a PR adding a new integration that supports custom endpoints, and just call it differently…
I'm also looking for this. I don't understand why adding a non-required override would be an issue. But if they don't want it in HA Core, couldn't one of the creators of those PRs just create a custom HACS integration for this instead?