OpenAssist - ChatGPT Can Now Control Entities!

Hi, just wanted to ask whether this should work with the latest Home Assistant beta, 2023.10.0b6? I wasn’t able to get it running. I followed the guide, but as soon as I add these lines to my configuration.yaml:

openassist:
  openai_key: "sk-8skkvE...cBCP"
  pinecone_key: "e28...f27" 
  pinecone_env: "gcp-starter" 

the configuration validation fails with the following message:

Integration error: openassist - Requirements for openassist not found: ['PyYAML==6.0'].

Is there anything I could do to solve that issue?

Thanks!

Follow this link to the issue on GitHub. You can change the PyYAML version as described there; that got it working quickly for me.

Thanks a lot, that fixed it for me!

Hi!
I’m trying to set up OpenAssist. I managed to get my Pinecone index built (it shows up correctly in the Pinecone GUI with my exposed entities and all), but when I fire a query I get this in my logs:

Unexpected response structure: {'context': {'db': 'mindsdb'}, 'error_code': 0, 'error_message': "Can't find view 'gpt4hass' in project 'mindsdb'", 'type': 'error'}

And also this :

No message to send
No response to execute

My MindsDB seems to be set up correctly (the model has been created and all), but I don’t understand why the integration doesn’t seem to reach it.

What did you name your MindsDB model? You will need to change it if you did not also use “gpt4hass” like the OP did. For example, when I set up my MindsDB I used “gpt4hassio”, so in my config YAML I have:

mindsdb_model: "gpt4hassio"

instead of

mindsdb_model: "gpt4hass"

@El_Pollo_Loco thanks for your reply!
That’s the first thing I checked. I have gpt4hass in the MindsDB editor and also in my sensor YAML file.
I really don’t know what’s wrong; I wish I could output the exact curl command (or similar) to check how it’s trying to connect. Maybe there’s an issue with the password? Should it be URL-encoded if it has special characters?
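
Something like this should let me check the connection by hand; the /cloud/login and /api/sql/query endpoints and payloads are my guesses from the MindsDB HTTP API docs, so treat it as a sketch and adjust as needed:

import requests

MINDSDB = "https://cloud.mindsdb.com"
session = requests.Session()

# Log in first; the response should say whether the account is active.
# No URL-encoding should be needed when the password goes in a JSON body.
login = session.post(
    f"{MINDSDB}/cloud/login",
    json={"email": "me@example.com", "password": "my_password"},  # placeholders
)
print(login.status_code, login.text)

# Then run a plain SQL statement to see whether the model is visible at all.
models = session.post(
    f"{MINDSDB}/api/sql/query",
    json={"query": "SHOW MODELS;"},
)
print(models.json())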

Never mind, I think I found the issue. I’m feeling a bit stupid on this one :smiley:
I seem to have forgotten to validate my email… I didn’t see the link in my mailbox (I realized it when I tried the “login” endpoint with Postman and got the response "active: false").
Now I’m stuck with error messages when I try to resend the activation email, haha, so I’ll have to wait it out (I’ve contacted support).

Is there a way to hijack the /api/conversation call and use this to reply instead of Assist? I have tried via wildcards, but it doesn’t seem to work.

It’s always the little things; glad you found the issue. Hope you can get it running soon, as it really is fun to play with. Thanks again to OP Hassassistant for putting this together. It would be great to integrate this into Assist somehow. You can really get it to do a lot once you modify the prompt text in the config.

Hey again! It turns out the email confirmation was absolutely not the issue!
MindsDB has stopped free support for OpenAI, and you now have to create the model with your own OpenAI API key.
It looks like this:

CREATE MODEL mindsdb.gpt4hass
PREDICT response
USING
  engine = 'openai',
  max_tokens = 2000,
  model_name = 'gpt-4',
  api_key = 'your_openAI_api_key_here',
  prompt_template = '{{text}}';

Posting it here in case anyone encounters the same issue :slight_smile:
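
If it helps, you can sanity-check the new model with a single SELECT before going back to Home Assistant; a rough sketch over HTTP (same caveat as earlier in the thread: the /cloud/login and /api/sql/query endpoints are assumptions):

import requests

session = requests.Session()
# Log in to MindsDB Cloud first (placeholder credentials).
session.post("https://cloud.mindsdb.com/cloud/login",
             json={"email": "me@example.com", "password": "my_password"})

# One SELECT against the model: if this comes back with text, MindsDB and the
# OpenAI key are fine, and any remaining problem is on the Home Assistant side.
resp = session.post(
    "https://cloud.mindsdb.com/api/sql/query",
    json={"query": "SELECT response FROM mindsdb.gpt4hass WHERE text = 'Say hello';"},
)
print(resp.json())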

Hello! I’m curious, wouldn’t it also work to use the Home Assistant agent as a sort of add-on for the OpenAI conversation agent?

In effect, you would prompt the OpenAI agent with something like this: “You must use the Home Assistant service for all requests/inquiries related to home control. To use the service, start your response with the ‘at’ symbol (@). Your entire response will be redirected to the Home Assistant agent and you will receive its response, after which you will have an opportunity to respond to the user.”

For example, you might query your OpenAI agent with “Tell me a joke and turn on the living room lights”. The OpenAI agent would respond with “@Turn on the living room lights” → the Home Assistant agent would receive the query “Turn on the living room lights” → the OpenAI agent would receive the response (and be able to see if it was successful or not) and would respond with a hilarious joke.

I can see various issues with this setup. For example, the OpenAI agent (like anybody) might use unsupported syntax and get an error. But this could be resolved with a longer prompt or more careful user-provided syntax. Also, I’m not sure how to “intercept”* the OpenAI conversation agent’s response and redirect it to Home Assistant, nor how to get the OpenAI agent’s second response to come through the same channel. But I imagine it can be done by matching the event context and inserting an event with the same context ID, or something like that.
*And maybe you want to be able to see what OpenAI is asking Home Assistant to do.

However, the advantages are that it’s a much simpler implementation and will only get better as the Home Assistant agent is updated and improved. It should be cheaper as well because it doesn’t rely on sending a ton of context to OpenAI - at most, it needs to know the quirks of the Home Assistant agent.

In addition, you could have the request go first through the Home Assistant agent (so it can resolve simple requests quickly) and only forward it to OpenAI if you get an error response.

Have you considered an approach like this?
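
For what it’s worth, the round trip itself is easy to prototype outside Home Assistant with the REST API. A rough sketch of what I mean (the long-lived token, the prompt wording, and the exact shape of the /api/conversation/process response are assumptions, and it uses the pre-1.0 openai package):

import requests
import openai  # assumes the pre-1.0 openai package

openai.api_key = "sk-..."                   # your OpenAI key
HA_URL = "http://homeassistant.local:8123"
HA_TOKEN = "your_long_lived_access_token"   # created on your HA profile page

SYSTEM_PROMPT = (
    "You must use the Home Assistant service for all requests related to home "
    "control. To use it, start your response with the '@' symbol; everything "
    "after it will be sent to the Home Assistant agent and you will receive its reply."
)

def ask_home_assistant(text: str) -> str:
    """Forward a command to Home Assistant's built-in conversation agent."""
    r = requests.post(
        f"{HA_URL}/api/conversation/process",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"text": text, "language": "en"},
    )
    # The response shape may differ between Home Assistant versions.
    return r.json()["response"]["speech"]["plain"]["speech"]

def chat(user_text: str) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    content = reply.choices[0].message.content
    if content.startswith("@"):
        ha_reply = ask_home_assistant(content[1:].strip())
        # Feed the outcome back so OpenAI can finish answering the user.
        messages += [
            {"role": "assistant", "content": content},
            {"role": "user", "content": f"Home Assistant replied: {ha_reply}"},
        ]
        reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
        content = reply.choices[0].message.content
    return content

print(chat("Tell me a joke and turn on the living room lights"))

Doing it inside Home Assistant itself (intercepting the conversation agent’s response) is the part I haven’t worked out, as mentioned above.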

Hi, I’m struggling to set up this solution, as MindsDB is showing this error:

Can't find integration_record for handler 'openai'

I tried to add the OpenAI API key as well, but I’m still getting the error…

I’m doing that, but I’m getting "Can't find integration_record for handler 'openai'" with or without the api_key…

I have the same problem :sob:

I think the days of free MindsDB access have come to an end.

Is Supabase.com an option here…?

Is there an alternative? Or can we cut out MindsDB as a middleman?

Would it be possible to use this extension with a local vector database instead of a paid service? I’ve created something you can host locally: GitHub - ssuukk/MaxVector: A locally running vector database with GraphQL and REST API, with built-in methods to obtain embeddings from OpenAI and other services.

(Of course, a local Postgres instance would probably also be enough.)

Would it work to just run the self-hosted version?

Edit: Well, I guess the point of using MindsDB at all was the free GPT-4. As @Agentken suggested, I guess it would be possible to cut out MindsDB completely and just use Pinecone + OpenAI?
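
Roughly, I imagine it would boil down to something like this (a minimal sketch, assuming the pre-1.0 openai package and the classic pinecone-client with environments; the index name "openassist" and the metadata layout are guesses):

import openai    # assumes openai < 1.0
import pinecone  # assumes the classic pinecone-client with environments

openai.api_key = "sk-..."
pinecone.init(api_key="your_pinecone_key", environment="gcp-starter")
index = pinecone.Index("openassist")  # whatever name the index was created with

def ask(question: str) -> str:
    # 1. Embed the question with the same model that was used to build the index.
    emb = openai.Embedding.create(model="text-embedding-ada-002", input=question)
    vector = emb["data"][0]["embedding"]

    # 2. Pull the most relevant exposed entities from Pinecone.
    results = index.query(vector=vector, top_k=5, include_metadata=True)
    context = "\n".join(str(match.metadata) for match in results.matches)

    # 3. Ask GPT-4 directly, with no MindsDB in between.
    reply = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Relevant Home Assistant entities:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(ask("Is the living room light on?"))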