I think I've tried every possible way and workaround to send data back to the Ollama voice assistant.
Can someone please tell me how to do it? I have a function that I call which should return its result to the LLM, since the LLM can run the script itself.
I also tried using a template sensor, but while I can expose it, the Ollama voice assistant is not able to see the entity and will make stuff up (hilarious, but not when you're trying to debug something).
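For reference, the template sensor I tried was something along these lines (I'm showing it mirroring the input_text helper; the exact source doesn't really matter for the problem):

template:
  - sensor:
      - name: "LLM Response"
        state: "{{ states('input_text.llm_response') }}"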
I also tried using conversation.process directly (it seems the LLM does not see anything from that). I also tried having conversation.process trigger my intent, so that the accompanying intent_script would run a speech command, but apparently, although it retrieves the intent function name, it throws an error, and I don't even know if that would have solved anything.
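For completeness, the intent_script approach was roughly shaped like this (the intent name, the script it calls, and the speech text are placeholders, not my exact config):

intent_script:
  CheckMemory:
    action:
      - action: script.check_memory   # placeholder for the script the intent should run
    speech:
      text: "Memory check finished."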
Thank you so much!! I got it somewhat functional, but I have a question. Is there a way to direct the response that now goes to the (helper) input_text.llm_response to the actual voice conversation instead? It almost feels like I have a split personality this way.
Currently I have it set up so that my Ollama voice assistant can run a script that processes things, and the output of that process is fed to this automation and into the llm_request. Here comes the weird part: there seems to be a 'split personality', since another Ollama entity is replying in llm_response to the content in llm_request.
Ok, it seems it sends the request to the wrong 'instance' of Ollama. The agent_id is fine, but apparently multiple instances of the same model can be running, so there must be some kind of unique id for the conversation I have via the HA mobile app. Not sure if this would be a current conversation id or something similar. Would really appreciate some help!
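For anyone following along, this is roughly the conversation.process call I'm experimenting with; the missing piece is the id of the active voice conversation (the values below are placeholders):

- action: conversation.process
  data:
    text: "{{ states('input_text.llm_request') }}"
    agent_id: conversation.ollama                          # placeholder for my Ollama agent
    conversation_id: "<id of the active voice session>"    # this is the part I can't find
  response_variable: reply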
If anyone reads this: I was able to figure out that by setting up a response_variable, I can get the return value from the pyscript back into the script in Home Assistant (without the need for a text-field workaround).
However, the Ollama LLM voice assistant is not able to 'see' the contents of the result.
(The result is passed back, since I'm able to see it in my notify.file output.)
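For anyone trying the same thing, the minimal pattern that got the return value back into the Home Assistant script looks like this (pyscript.search_memory is my own pyscript service; the rest is standard script syntax):

sequence:
  - action: pyscript.search_memory      # my custom pyscript
    data:
      title: "{{ title }}"
    response_variable: results          # the pyscript's return value lands here
  - action: notify.send_message
    target:
      entity_id: notify.file
    data:
      message: "{{ results }}"          # proves the data made it back into the script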
It seems that the Ollama LLM voice assistant that I ask to run the function in Home Assistant is not able to see the contents of the input_text or of a template sensor.
Again talking to myself: I finally found out that there was an issue with the response_variable not being returned to the LLM. Apparently there was a topic and a bug report, and it is marked as solved and the topic is closed.
However, I am still unable to get the response_variable to my LLM (and yes, it contains the results, as I can view them in my notify.file and output them to an input_text).
This is getting really disheartening, since every single thing or workaround I try seems to end up in nothing. So far I have learned:
Ollama:
- can't view the response_variable (or is it a timing issue? It does say the call was a success, but that the actual result is an empty object, which it is not)
- can only view the initial state of an input_text at the start of the conversation
- cannot run get_state() or anything similar, so it's not able to 'see' what's in an input_text or even a template sensor beyond the state at the start of the conversation
- the same seems to apply to the template sensor, and it's also not able to see any custom attributes of that entity
- cannot see exposed automations
- says it can't list entities?
- does not receive any information when trying the conversation.process action (see the code pasted previously) with the correct model agent_id, since it seems there are multiple 'entities' for this agent_id?
- is not able to receive more than 4 lines of text, because then my speech-to-text stops (would be nice if you could change this limit); to clarify, I use the voice assistant on my phone
Do I keep talking to myself here, or should I just open an issue on GitHub? How can I move forward?
Can you post the latest version of your script/automation yaml?
My issue was that I renamed the script and also changed the script ID. Apparently there is a bug where, if you rename a script, Assist LLMs may lose the ability to access the script as a tool even though it still looks exposed in the UI.
Duplicating my script, exposing the new one, and deleting the old one fixed my problem.
I should also note that prior to release 2024.12, LLM assistants were not able to see script responses or variables, so this is a new feature.
The biggest difference between our setups is that I am using OpenAI whereas you are using Ollama; theoretically this shouldn't matter if everything else is working correctly.
The last thing I'll suggest is trying to get this simple blueprint working with your LLM. If it does work, reverse engineer it and see if you can apply the logic to your use case.
alias: Check Memory
mode: single
sequence:
  - data:
      id: "{{ id }}"
      title: "{{ title }}"
      value: "{{ value }}"
      category: "{{ category }}"
      subcategory: "{{ subcategory }}"
      subsubcategory: "{{ subsubcategory }}"
      priority: "{{ priority }}"
    action: pyscript.search_memory
    response_variable: results
  - data:
      entity_id: input_text.llm_request
      value: "{{ ( results | string | trim | replace('\"', '') )[:255] }}"
    action: input_text.set_value
  - target:
      entity_id: notify.file
    data:
      message: |
        {% if results %}
        Memory check (notify.file) complete:
        {{ results }}
        {% else %}
        No data found.
        {% endif %}
    action: notify.send_message
description: >-
  This script checks or searches for memories based on the given title, value,
  category, subcategory, subsubcategory, and priority. It sends the results to
  the LLM through the conversation.process service and logs them in notify.file.
This is what I use with a pyscript. The output in notify.file is something like this:
2024-12-15T13:41:17.172224+00:00 Memory check (notify.file) complete:
{'results': '445 How to Evolve as a Home Assistant AI This is a sample memory value Test Category Test Subcategory Test SubSubcategory High'}
SUCCESS!! I was finally able to get the response_variable to the LLM voice assistant!
Thanks to Balloob on GitHub for providing me with the blueprints!! (Also thanks to Defes for providing them!) Of course I didn't really know how to implement the essence of the blueprints in my code, so my good friend ChatGPT told me to use this (and it worked!):
alias: Check Memory
mode: single
sequence:
  - data:
      id: "{{ id }}"
      title: "{{ title }}"
      value: "{{ value }}"
      category: "{{ category }}"
      subcategory: "{{ subcategory }}"
      subsubcategory: "{{ subsubcategory }}"
      priority: "{{ priority }}"
    action: pyscript.search_memory
    response_variable: results
  - data:
      entity_id: input_text.llm_request
      value: "{{ ( results | string | trim | replace('\"', '') )[:255] }}"
    action: input_text.set_value
  - target:
      entity_id: notify.file
    data:
      message: |
        {% if results %}
        Memory check (notify.file) complete:
        {{ results }}
        {% else %}
        No data found.
        {% endif %}
    action: notify.send_message
  - stop: ""
    response_variable: results
description: >-
  This script checks or searches for memories based on the given title, value,
  category, subcategory, subsubcategory, and priority. It sends the results to
  the LLM through the conversation.process service and logs them in notify.file.
As you can see, the response_variable is repeated at the bottom, together with a stop action.
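Stripped of my memory-specific fields, the essential pattern is just this (minimal sketch):

sequence:
  - action: pyscript.search_memory
    response_variable: results
  - stop: ""
    response_variable: results   # whatever is in results becomes the script response the LLM can read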
So, I now have a working pyscript that the LLM can use to store memory items in a SQLite database, and it is now finally also able to check its memories. Super fascinating! If anyone is interested, I could clean up the mess and share it.