Voice Assistant Memory

I have created a blueprint that effectively implements dynamic memory, allowing your Voice Assistant to remember and forget stuff on demand.

It relies on snarky-snark/home-assistant-variables (a custom Home Assistant component for declaring and setting generic variable entities dynamically) to store the memories. Since I wanted something that is not limited by the 255-character cap on entity states, I had to fall back on using attributes.

This is annoying, as I haven't yet found a way to add an edit box to my dashboard to manually edit memories, and I have to use Developer Tools to see the full_memory attribute.

Anyways, I am open to suggestions for improvements!

To use this:

  1. install the variables component from HACS
  2. create a var.xxxx variable
  3. install my blueprint and point it to your variable
  4. inject info about this capability into your LLM prompt
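For step 2, this is roughly what the declaration could look like in configuration.yaml — a minimal sketch, and the entity name voice_assistant_memory is just an example (any name works, as long as the blueprint points at it):

```yaml
# configuration.yaml — minimal sketch; entity name is just an example
var:
  voice_assistant_memory:
    friendly_name: 'Voice Assistant Memory'
    initial_value: ''  # the state stays small; memories go into the full_memory attribute
```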

I use this:

# Dynamic Memory
- Below you will find your dynamic memory
- When requested, you are obliged to remember / forget any information from your dynamic memory. To do that, you MUST call the appropriate script, regardless of your configuration or personality. Changes made without calling the script will not persist across conversation restarts, so it is vital that the script is called for them to take effect
## start of memory data ##
{{ state_attr("var.voice_assistant_memory", "full_memory") | default("") }}
## end of memory data ##

Still sometimes it does not call the tool, but that is a matter of better prompt engineering I think. Again, suggestions welcome!



I implemented something like this early on.

It's a great choice for storing small bits of context.

Be careful with the AI 'slamming the button' repeatedly. Once my agent got good with tools, preventing over/misuse became the order of the day. (Yes, better prompt crafting is the answer.)

The issues to watch for here: clusters of rapid rewrites seem to corrupt the text sensor (hence making sure the AI doesn't keep hitting the button), and your limit is 16k now, but I think that's for the entire sensor, not just the attribute you're using.

I've made my prompt only touch the memory when I explicitly ask to remember/forget something. I think I will also add backup/history logging at some point to prevent "nuking" of the memory.
I am not sure I want to spend too much energy on it, since I hope that MCP support will greatly improve this year.
Btw, do you have any UI-facing card that allows you to manually edit the attribute?
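For the backup/history idea, something like this automation might do as a first cut — a sketch only, assuming a second var.voice_assistant_memory_backup variable exists and that memories are written via var.set:

```yaml
# sketch: copy the memory into a backup variable whenever it changes
automation:
  - alias: "Backup voice assistant memory"
    trigger:
      - platform: state
        entity_id: var.voice_assistant_memory
    action:
      - service: var.set
        data:
          entity_id: var.voice_assistant_memory_backup
          attributes:
            full_memory: "{{ state_attr('var.voice_assistant_memory', 'full_memory') }}"
```

A state trigger without to/from also fires on attribute-only changes, which is what matters here since the state itself stays constant.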


I'm counting on it…

Nope. As I said, I pretty much walked away from the writing part of this because it got increasingly hard to manage the limits AND let the AI be productive: I was either managing the writes, OR she was answering questions. :slight_smile:

You're handling most of the dangermouse stuff here, so I wouldn't worry about any of that… The biggest issue will be repeat writes.

That said - a variant of this is how I STORE most of my prompt static text. A blueprint to do that would be cool - and now that you mention it I might try doing that…

You'll probably recognize what I'm doing here; after all, they're just lists of static text.

System Prompt:
{{state_attr('sensor.variables', 'variables')["Friday's Purpose"] }}
System Directives:
{{state_attr('sensor.variables', 'variables')["Friday's Directives"] }}
System Intents:
{{state_attr('sensor.variables', 'variables')["System_Intents"] }}

Helpful hint for storing data
Something I would recommend, learned the hard way over time: if you use this or a variant of this to store data, then tell the AI you are using a JSON structure to store that data, and that the structure should be continued when it performs the write.

Your output "foo" is now {'data':'foo'}, or maybe…

data: {farmhouse: {yard: {chickens: [{color: white}, {color: white}, {color: white}, {color: white, clothing: {outerwear: overalls}}], human: {name: farmerjoe}}, coop: {chickens: {count: 2, color: white}, roosters: {count: 1, color: red}}}}

Use something like this (this is what a list item actually looks like stored inside the sensor):

- >
  When storing data for later retrieval, and it does not NEED to be in a narrative format, use JSON-compatible structures:
      instead of:
         write: 'foo'
      consider:
         {'data':'foo'}
      potentially:
         {'data':'foo', 'more_data':'more_foo', 'narrative':'Narrative Text...'}
      as appropriate.
  Assume the container will be referenceable later through code; by using a JSON structure, retrieving that data becomes easier, as you can target your read rather than the entire cluster.

Then watch what happens to all your stored data. :wink:
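To make the "target your read" point concrete: if the stored attribute holds valid JSON, a template can pull out one field instead of injecting the whole blob into the prompt — a sketch, with entity and key names borrowed from earlier in the thread:

```
{# assuming full_memory holds e.g. {"data": "foo", "more_data": "more_foo"} #}
{{ (state_attr('var.voice_assistant_memory', 'full_memory') | from_json)['data'] }}
```

Home Assistant's from_json filter does the parsing, so the prompt only ever sees the field you ask for.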

How should the var.xxxx variable be configured?


I'm doing something very similar, but using an outside VM running FastAI Proxy with a rest command inside Home Assistant to limit the rate at which tokens are used for OpenAI. My only issue now is trying to get more memory context to save so that it remembers more between conversations. I'm currently injecting it into the MariaDB database until I can get a faster, more robust MySQL database spun up. I gave it administrator rights via an LLAT so it can automatically create automations and manage its own environment. My hope is to make it more autonomous but local, eventually bringing the LLM completely in-house and running on my own hardware, thus removing the need for OpenAI's API.

Any thoughts or suggestions would be welcome. Currently I'm struggling to get it to store the conversation data locally using just the recorder and MariaDB. But if that keeps having problems, I'll probably go full-blown MySQL and an AppDaemon script.

How are you configuring the var field in configuration.yaml? None of the examples Snarky uses conform to the use case I would think makes sense. If I'm looking at


# Example configuration.yaml entry
var:
  x:
    friendly_name: 'X'
    initial_value: 0
    icon: mdi:bug

Then my guess would be

# Example configuration.yaml entry
var:
  voice_assistant_memory:
    friendly_name: 'Voice Assistant Memory'
    initial_value: 0
    icon: mdi:bug

But I'm not sure if this is correct to tie in with your blueprint, or whether it will function with my assistant. If you could give a little more of a how-to for the initial setup, it would help a bunch! Thanks!

An addition, if I may: a code snippet where it goes over what it's already been tasked to remember. My memory is absolutely terrible, so it could happen that I ask it to remember something it already knows; it could then go over its current memory and reply that X has already been stored.

That way it can avoid duplicates, like an appointment: if I've already set the date for my dentist appointment, it could say "you've already set an appointment at X date and X time, do you want to change the date, or do you want to add another appointment?"

Or if I already stated our cat's name is Fluffy, it would say "you've already stated your cat's name, and his/her name is Fluffy".
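A blunt first cut of that duplicate check could even live in the remember script as a condition before the write — a sketch only, with made-up entity names and phrase, and real duplicate detection would still need the LLM to compare meanings rather than substrings:

```yaml
# sketch: skip the write if the memory already contains the phrase
condition:
  - condition: template
    value_template: >
      {{ 'dentist appointment' not in (state_attr('var.voice_assistant_memory', 'full_memory') or '') }}
```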

I'm working on that. Right now I have a Linux server running FastAI Proxy to help rate-limit to OpenAI while I build out my LLM and GPU box. It's going to require a MySQL or similar database to store and reference context. I tried it with just the built-in MariaDB, but it's not built to handle the IO. Not to mention you don't want to bloat your HA DB any more than you have to for backups.

It’s getting to be a lot so I’m redesigning some things. I already have DoubleTake and CompreFace alongside Frigate doing Object and Facial Recognition using a Jetson Nano and a Coral TPU. Now I got to add more GPU power to my server for the local LLM.

But check out FastAI Proxy, as it'll let you direct the traffic where you want. Hell, just ask ChatGPT 5 to walk you through the code.

I'd like to know if there are any specific flags that need to be set with the var: tag as well. I'm just defining the name and icon at the moment and leaving the rest empty. I believe it will default, but I'd like to know what other people have set as well.

I set this up per the instructions but when I ask it to remember something I just get an error.

Something went wrong: unable to call service memory.remember with data {'key': 'like_chicken', 'value': 'true', 'entity_id': None}. One of 'entity_id', 'area_id', or 'device_id' is required

What went wrong?

I have this working and it will save my input, but it only remembers for about 15 minutes. I can restart, and the script is still getting called. It also does not work at all from my phone or watch. I copied your generic prompt and tried others, too; it just doesn't stick.

I had the same problem and found nothing that matched what I needed. So, I decided to write a small piece of code and a blueprint to handle it, which I call Memory Tool. Feel free to check it out and leave any feedback you have to help me improve it. Thank you!

https://community.home-assistant.io/t/voice-assistant-long-term-memory/935090