OpenAI Integration Update to Support gpt-3.5-turbo (ChatGPT API)

This was announced just recently. The pricing is much more affordable than davinci, and the model seems geared directly at the Assist/Conversation use case.

Chat completion - OpenAI API

The feature request is to update our existing OpenAI Conversation integration to support these new models.

The API interface for ChatCompletions differs from regular Completions, so it may take some rethinking of the current configuration UI for the model selection and the available parameters.
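To make the difference concrete, here is a sketch of the two request shapes (payload structure only, no network call). The model names and parameters reflect the API at the time of writing and may change:

```python
def completion_request(prompt: str) -> dict:
    """Legacy Completions: a single free-form prompt string."""
    return {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": 150,
        "temperature": 0.5,
    }

def chat_completion_request(system_prompt: str, user_message: str) -> dict:
    """ChatCompletions: a list of role-tagged messages instead of one prompt."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 150,
        "temperature": 0.5,
    }

legacy = completion_request("You are a smart home assistant.\nUser: Turn on the lights.")
chat = chat_completion_request("You are a smart home assistant.", "Turn on the lights.")
print(sorted(legacy), sorted(chat))
```

The prompt template in the current integration would need to be split into a system message plus per-turn user messages, which is the UI rethink mentioned above.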

Would love to see this integrated, as the cost per token is a 10x improvement over davinci.

(would also like to see support for a self-hosted GPT-NeoX-20B model from EleutherAI but that is a different conversation :wink:)

Just an observation here: the 10x lower cost of gpt-3.5-turbo is a game changer. I would guess that the majority of users would prefer to have their HA chatbot powered via this API rather than use the internally developed HA assistant. Of course, some will want to keep their data private and offline.
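A back-of-envelope calculation shows why the 10x matters for an always-available assistant. The prices below are the per-1K-token figures as announced at launch (text-davinci-003 at $0.02, gpt-3.5-turbo at $0.002); treat them as illustrative, since they change over time:

```python
DAVINCI_PER_1K = 0.02   # text-davinci-003, USD per 1K tokens (launch-era price)
TURBO_PER_1K = 0.002    # gpt-3.5-turbo, USD per 1K tokens (launch-era price)

def monthly_cost(price_per_1k: float, tokens_per_query: int, queries_per_day: int) -> float:
    """Rough monthly spend for a fixed query pattern (30-day month)."""
    return price_per_1k * (tokens_per_query / 1000) * queries_per_day * 30

# e.g. 2,000 tokens per exchange (big prompt with entity states), 50 queries/day
print(round(monthly_cost(DAVINCI_PER_1K, 2000, 50), 2))  # davinci
print(round(monthly_cost(TURBO_PER_1K, 2000, 50), 2))    # turbo
```

At that usage the difference is tens of dollars per month versus single digits, which is the line between a toy and a daily driver.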

Personally, once this is working I would want to have an Alexa wrapper to use it by voice. Ultimately one would then depart from Alexa/Google by combining Whisper and ChatGPT to create an unbelievably good “Home Assistant Assistant”.

13 Likes

Absolutely.

The use of Plugins with Home Assistant may even give the regular ChatGPT web interface the ability to take actions across your HA instance, let alone native integration with Whisper across a home. Google Assistant v2 removed the text dialogue options (which killed my personal project to integrate HA and OpenAI with Google Assistant), but this might reopen a feature route purely with HA and OpenAI.

1 Like

Great idea with the Plugins. (Man, things are moving quickly.) In particular, I found myself easily hitting the max number of tokens when feeding the prompt template with my entire home’s entities and states. To feed GPT only the relevant data from HA, I would imagine a ChatGPT plugin paired with a dedicated HA integration that returns relevant data to GPT. ChatGPT would formulate a query for any entities, services, etc. relevant to your question; your local HA instance would then do a fuzzy search and return the matches to GPT to finish answering your question. It could also return your current preference on whether unsupervised AI actions are allowed; if they are not, GPT would double-check with you before performing the real-world action.
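The fuzzy-search half of that idea can be sketched with nothing but the standard library. This is a toy illustration, not the actual integration: the entity IDs are made up, and a real version would search friendly names and aliases too.

```python
import difflib

# Fake snapshot of entity states; a real version would query Home Assistant.
STATES = {
    "light.living_room": "on",
    "light.bedroom": "off",
    "climate.thermostat": "heat",
    "sensor.outdoor_temperature": "4.5",
}

def relevant_entities(query: str, cutoff: float = 0.8) -> dict:
    """Fuzzy-match entity IDs against the query and return only those states."""
    words = query.lower().replace("?", "").split()
    hits = {}
    for entity_id, state in STATES.items():
        parts = entity_id.replace(".", "_").split("_")
        for word in words:
            if difflib.get_close_matches(word, parts, n=1, cutoff=cutoff):
                hits[entity_id] = state
                break
    return hits

print(relevant_entities("Is the living room light on?"))
```

Instead of thousands of tokens of state dump, the model would only receive the handful of matching entities, which directly attacks the token-limit problem described above.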

1 Like

This was apparently just added in the last couple of days, I’m glad to see! I was poking around the GitHub repo and noticed the change, so I reinstalled and can see that yes, it is now using 3.5.

I was wondering how difficult it would be to switch to the GPT-4 beta API? I have access to test with and would love to do so.

My use case is perhaps unusual for the ChatGPT-HA integration. I am not so much interested in getting it to control my house (yet) but more interested in interacting with it outside of the ChatGPT website. Pulling up the HA GUI on my phone and being able to talk to my prompted version of ChatGPT is very useful.

Is there a forum other than this one where you folks are coordinating development of this project? I would love to join. I’m in devops and I work in AI, so this is very interesting to me and part of my daily life. :slight_smile:

And of course, I want my smarthome to be the SMARTEST it can be. :slight_smile:
Thanks for any replies.
-Jim

1 Like

I am also curious why, after a few questions, I start getting errors because I’m out of tokens. It happens when I have asked a complex question or posted several follow-ups. At that point, even saying “hi” to the app gives the same error, and all I can do is wait or reload the integration to sometimes get back under the 4k token limit.

Why is my prompt/query getting longer and longer each time I interact with the app until it fails and I have to restart it?
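That growth is inherent to the chat API: every turn is appended to the message list and the whole list is re-sent on each request, so the prompt can only grow until it hits the context limit. A crude mitigation is to trim the oldest turns before each call. The sketch below uses a rough 4-characters-per-token estimate purely for illustration; a real implementation would count tokens with a proper tokenizer such as tiktoken.

```python
def estimate_tokens(messages: list[dict]) -> int:
    """Very rough token estimate: ~4 characters per token plus per-message overhead."""
    return sum(len(m["content"]) // 4 + 4 for m in messages)

def trim_history(messages: list[dict], budget: int = 3000) -> list[dict]:
    """Keep the system prompt, drop the oldest turns until the estimate fits the budget."""
    system, rest = messages[:1], messages[1:]
    while rest and estimate_tokens(system + rest) > budget:
        rest = rest[1:]  # drop the oldest user/assistant turn
    return system + rest

# Simulate a long conversation that would blow past a 4k context.
history = [{"role": "system", "content": "You are a home assistant."}]
for i in range(100):
    history.append({"role": "user", "content": f"question {i} " * 50})

trimmed = trim_history(history, budget=3000)
print(len(history), len(trimmed), estimate_tokens(trimmed))
```

Whether the HA integration does any such trimming is worth checking in its source; the symptom described above suggests it simply accumulates history until the request fails.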

Any response appreciated!
-Jim

Yesterday, as an experiment, I created an HA tool for LangChain. The bot was able to find devices and entities with their states, and it could call services, but it was consuming my money like a black hole.

MORNING UPDATE:
I’ve spent the entire night having fun, and I’m slowly discovering that questions about the state of entities and devices, or invoking services, would only make sense with a continuous stream of data. We would need something like Google STT or Whisper working in real-time (and preferably locally and able to distinguish different voices), then ChatGPT could pick up the context from our statements. It works - I asked ChatGPT to be a smart home assistant, and it was great at distinguishing what in the conversation pertained to devices and what was in another context, and used commands correctly. It could literally pick out from a sample conversation that it was too cold and should turn on the heating, even though that sentence was just thrown into a dinner conversation.

We could also use geolocation to create automations, for example, “remind me to buy potatoes when I’m at the store,” and by having data on locations within a 30-meter radius of our position, for example, using Everpass, ChatGPT could send such a reminder, and it works quite well.
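The distance check behind that reminder trigger is just a haversine calculation between the phone’s position and a saved place. A minimal sketch, with arbitrary example coordinates:

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near(place: tuple, position: tuple, radius_m: float = 30.0) -> bool:
    """True if the current position is within radius_m of the saved place."""
    return haversine_m(place[0], place[1], position[0], position[1]) <= radius_m

store = (52.2297, 21.0122)          # saved "store" location (example values)
print(near(store, (52.2297, 21.0123)))  # a few metres away
```

The model’s job would only be to extract the place and the reminder text from conversation; the geofence itself stays cheap, local, and deterministic.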

However, all this fun has cost me quite a bit. The development itself consumed a lot of money, and sending all that data and checking it in real-time eats tokens like the Cookie Monster. Add to that the cost of STT and other services, unless we want to play with Alexa and use some hotword detection mechanism, but then it becomes annoying. Additionally, there’s the language overhead - I used Polish, and the bot often confuses entities: either it will be a more or less lucky hit, or it will devour tokens. Here, a difference arises between GPT-4 and GPT-3.5. GPT-4 is almost perfect but monstrously expensive and too slow to be a home assistant, while a GPT-3.5-turbo assistant must be heavily optimized and manually updated with each change, and it is still too slow. Or we would need to create a tool that can handle it well.

For your information, I didn’t use aliases or any HA chat mechanisms.

IMO, the technology offers incredible possibilities, but its price and availability are still blockers for such applications - this will likely change by the end of the year.

I used LangChain and created tools based on langchain/tools, similar to plugins, but I only scratched the surface of the possibilities.
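Stripped of the framework, the tool pattern boils down to a dispatch table: the model emits a tool name plus arguments, your code runs it against Home Assistant, and the result is fed back into the conversation. A framework-free sketch with made-up tool names and fake state data (the real LangChain wiring and the actual HA API calls are omitted):

```python
# Fake snapshot standing in for real Home Assistant state.
FAKE_STATES = {"light.kitchen": "off", "sensor.kitchen_temperature": "19.5"}

def get_state(entity_id: str) -> str:
    """Tool: look up one entity's current state."""
    return FAKE_STATES.get(entity_id, "unknown")

def call_service(domain: str, service: str, entity_id: str) -> str:
    """Tool: invoke a service; here it just toggles the fake state."""
    FAKE_STATES[entity_id] = "on" if service == "turn_on" else "off"
    return f"called {domain}.{service} on {entity_id}"

TOOLS = {"get_state": get_state, "call_service": call_service}

def dispatch(tool_name: str, *args: str) -> str:
    """What the agent loop does with a model-chosen tool call."""
    return TOOLS[tool_name](*args)

print(dispatch("get_state", "light.kitchen"))
print(dispatch("call_service", "light", "turn_on", "light.kitchen"))
print(dispatch("get_state", "light.kitchen"))
```

Every round trip through this loop is another model call carrying the full context, which is exactly why the experiment above ate tokens so quickly.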

5 Likes

Maybe LLaMA or Alpaca, which run locally AFAIK.

1 Like

Any model is suitable, but one trained on Home Assistant data will be more effective.

@wojciech6789 It’s very cool that you got something working with langchain. Could you share more details? Did GPT actually figure out its own calls to the home assistant API? Did you try to use Pinecone with it? Regarding the cost, although you say that using a wake word is annoying, this is what people are used to for voice assistants, so I think it is at least good for now.
I think this whole thing needs to be vigorously discussed and different ideas tried out, because Home Assistant now has a unique opportunity to give us an open-source JARVIS with more awareness and integration than anything else. And you know it’s happening soon: just look at Andrej Karpathy’s bio line on Twitter.

I was able to set up an app that sends audio over websockets to Home Assistant, using Porcupine to detect a wake word and forwarding the audio to a voice assistant pipeline.

Currently I am testing with a Raspberry Pi, a USB microphone, and the OpenAI conversation integration. You can set up the prompt for OpenAI with information on the people in your household and have the voice pipeline speak something clever.
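For anyone wanting to replicate this, the first websocket message that kicks off a pipeline run looks roughly like the sketch below. The command name and fields follow the assist pipeline websocket API as I understand it, so verify against the current Home Assistant developer docs before relying on them; after this message is acknowledged, the raw audio frames are streamed as binary websocket messages.

```python
import json

def pipeline_run_message(msg_id: int, sample_rate: int = 16000) -> str:
    """Build the JSON command that starts a speech-to-speech pipeline run."""
    return json.dumps({
        "id": msg_id,                      # websocket message id (client-chosen)
        "type": "assist_pipeline/run",     # assist pipeline run command
        "start_stage": "stt",              # we supply raw audio for STT
        "end_stage": "tts",                # and want spoken output back
        "input": {"sample_rate": sample_rate},
    })

print(pipeline_run_message(1))
```

The wake-word detection (Porcupine in my case) happens entirely on the client; only after the hotword fires does the app open the stream and send this command.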

2 Likes