Hi all,
TLDR: AI conversation agent instructions are the linchpin of our successful HA integrations. A hacker could quickly modify AI instructions to change system behavior without ever writing code, or even knowing how to code. Should we be considering how these instructions are stored, accessed, modified, and transmitted?
Good AI conversation instructions are like having GPS turn-by-turn navigation when driving somewhere new. A bad set of instructions, or worse, no instructions at all, is a bit like telling five different people to drive from point A to point B, 50 miles away: each person may interpret the route, timing, and destination in a completely different way.
The ABSOLUTE KEY to a good AI integration is creating a solid set of AI instructions in your conversation agent. Period. When an agent has not been given specific instructions, I have been amazed at the depths to which AI will dig into the system to accomplish the task set in front of it. The agent will absolutely scour every bit of code and every entity to find a working solution.
Likewise, the agent’s workflow is a bit like water: it follows the path of least resistance. If the agent is not given a good framework for accomplishing a task, meaning a specific flow chart and error-handling instructions, it is trained to make educated guesses about how to proceed. Therein lies the importance of limiting the agent’s access to entities and scripts it does not need, but that is a different conversation.
It is for this reason that I chose such a lengthy background before stating my biggest concern. Again, good AI instructions are key. Those instructions are an easy target for immense, nefarious damage: they can be manipulated with little effort and with little awareness from the owner.
For this reason, is it time we considered how security is handled for these instructions?
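To make the concern concrete, here is a minimal sketch of one possible mitigation: fingerprinting the stored instruction text and flagging any change against a trusted digest. This is purely illustrative and assumes the instructions live in a plain text file; the file path, function names, and workflow are all hypothetical, not anything Home Assistant currently provides.

```python
import hashlib
from pathlib import Path


def fingerprint(text: str) -> str:
    """Return the SHA-256 hex digest of the instruction text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def instructions_changed(path: Path, known_good_digest: str) -> bool:
    """True if the stored instructions no longer match the trusted digest.

    The caller records `known_good_digest` once, at a moment the
    instructions are known to be correct, and re-checks on a schedule
    or at agent startup.
    """
    current = fingerprint(path.read_text(encoding="utf-8"))
    return current != known_good_digest
```

A check like this would not prevent tampering, but it would at least surface a silent modification to the owner, which is the scenario described above. Anything stronger (access control on the file, signing, an audit log of edits) would need support from the platform itself.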