I’ve added OpenAI as an integration in Home Assistant and set up an agent to work with it. I’ve got an automation that passes a prompt and a bunch of data to the conversation.process action, which then generates a response and sometimes sends a notification to my phone depending on the result.
I’d love to be able to “continue” the conversation each time the automation runs, so I can simply pass the latest update on the data to conversation.process rather than re-hashing the whole explanation of what I want the agent to do. In theory this would let it make comparisons to past input rather than starting fresh each time.
It looks like I get a conversation_id back from the conversation.process action, but I don’t seem to be able to pass it in to conversation.process the next time the automation runs. The call throws an error saying it doesn’t accept the argument, and it’s also not listed in the parameters in the documentation.
Is what I’m trying to do even possible? I feel like if I could get it to work, not only would it save me money on token usage for the size of the request but it could turn into quite a smart assistant for me.
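For context, my automation looks roughly like this (the agent_id, sensor, and notify service below are placeholders, not my real entity IDs, and the response_variable field names are just what the call returns on my install):

```yaml
# Sketch only – replace the placeholder entity IDs with your own.
alias: Send latest data to OpenAI agent
trigger:
  - platform: time_pattern
    minutes: "/30"
action:
  - service: conversation.process
    data:
      agent_id: conversation.openai_conversation  # placeholder agent
      text: >-
        You are monitoring my sensor data. Here is the latest reading:
        {{ states('sensor.example_data') }}
    response_variable: agent_reply
  - condition: template
    value_template: >-
      {{ 'alert' in (agent_reply.response.speech.plain.speech | lower) }}
  - service: notify.mobile_app_my_phone  # placeholder notify service
    data:
      message: "{{ agent_reply.response.speech.plain.speech }}"
```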
I’m not that far yet with continuing the conversation, though I do want to get there eventually. What I do have is a script that lets me use an Assist sentence to ask OpenAI a question:
“What is {question}”
The question wildcard is passed to OpenAI.
What I did is store the response in a sensor, since the response data can’t be accessed directly later.
You could try that and start the follow-up conversation with the text stored there?
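The script is something like this, as a sketch (the agent and helper entity IDs are placeholders, and it assumes an input_text helper has been created to hold the answer):

```yaml
# Sketch – assumes an input_text.last_ai_response helper exists.
alias: Ask OpenAI
fields:
  question:
    description: Text captured by the {question} wildcard
sequence:
  - service: conversation.process
    data:
      agent_id: conversation.openai_conversation  # placeholder agent
      text: "What is {{ question }}"
    response_variable: result
  - service: input_text.set_value
    target:
      entity_id: input_text.last_ai_response
    data:
      # input_text values are limited to 255 characters
      value: "{{ result.response.speech.plain.speech | truncate(255) }}"
```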
I do hope (and think) something easier would be available in the future.
Yeah, I currently store the last response in a helper and then, on the next run, include it as part of the request. But ideally I’d have a continuous conversation so it can learn patterns/habits over time and reference data older than 30 minutes (which is how often the script runs).
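Concretely, the request on the next run looks something like this (entity IDs are placeholders):

```yaml
# Sketch – feed the previous answer back in as part of the new prompt.
- service: conversation.process
  data:
    agent_id: conversation.openai_conversation  # placeholder agent
    text: >-
      Your previous response was: {{ states('input_text.last_ai_response') }}
      Here is the latest data: {{ states('sensor.example_data') }}
```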
It does seem like it’s not possible for now, but hopefully in the future!
Exactly. This is what this custom component is doing.
One of the options is how long to keep the conversation (the context threshold), so you can, for example, switch windows and come back 10 minutes later to continue exactly where the conversation left off.
Yeah, that is an integration setting, not an Assist feature.
All this LLM stuff is more of a gimmick than a real use case. I won’t let a text generator rule my home until I’m 100% sure it won’t hallucinate and unlock my front door by itself.
I added this ability to my Assist, alongside Who is, Where is, When is, and a couple of other general questions. What I noticed is that external conversation agents all use a non-null conversation ID, so it can be used for a continuous conversation. It would be cool, though, to have Assist start the next session without an explicit user prompt (e.g. without the wake word) if the previous response required additional info.