Ollama integration (official one): How to send data to the LLM?

SUCCESS!! I was finally able to get the response_variable back to the LLM voice assistant!

Thanks to Balloob on GitHub for providing me with the blueprints!! (Also thanks to Defes for providing them!) Of course I didn’t really know how to implement the essence of the blueprints into my code, so my good friend ChatGPT told me to use this (and it worked!):

alias: Check Memory
mode: single
sequence:
  # Query the pyscript-backed SQLite memory store
  - action: pyscript.search_memory
    data:
      id: "{{ id }}"
      title: "{{ title }}"
      value: "{{ value }}"
      category: "{{ category }}"
      subcategory: "{{ subcategory }}"
      subsubcategory: "{{ subsubcategory }}"
      priority: "{{ priority }}"
    response_variable: results
  # Mirror the (truncated) result into the input_text helper
  - action: input_text.set_value
    data:
      entity_id: input_text.llm_request
      value: "{{ ( results | string | trim | replace('\"', '') )[:255] }}"
  # Log the result to a file for debugging
  - action: notify.send_message
    target:
      entity_id: notify.file
    data:
      message: |
        {% if results %}
          Memory check (notify.file) complete:
          {{ results }}
        {% else %}
          No data found.
        {% endif %}
  # A stop step with a response_variable returns the results to the caller
  - stop: ""
    response_variable: results
description: >-
  This script checks or searches for memories based on the given title, value,
  category, subcategory, subsubcategory, and priority. It returns the results
  to the LLM via the script's response variable and logs them to notify.file.
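
A side note for anyone copying this: the input_text.llm_request helper has to exist first, and the [:255] slice in the script is there because input_text values are capped at 255 characters. Something like this in configuration.yaml should do it (the helper can also be created from the UI; the display name is a placeholder):

input_text:
  llm_request:
    name: LLM request
    max: 255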

As you can see, the response_variable appears a second time at the bottom, paired with a stop action. That combination is not a mistake: a stop step with a response_variable is how a Home Assistant script returns data to whatever called it, and that is what finally lets the LLM read the results.
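
The same mechanism works outside the voice assistant, by the way. Calling the script from an automation and reading the response should look something like this (the search terms are placeholders, and the empty fields are only there because the script's templates expect all seven variables):

- action: script.check_memory
  data:
    id: ""
    title: "wifi password"
    value: ""
    category: "home"
    subcategory: ""
    subsubcategory: ""
    priority: ""
  response_variable: memory_results
- action: notify.send_message
  target:
    entity_id: notify.file
  data:
    message: "Check Memory returned: {{ memory_results }}"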

So, I now have a working pyscript that the LLM can use to store memory items in a SQLite database, and it is now finally also able to look those memories up again. Super fascinating! If anyone is interested, I could clean up the mess and share it?
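
One more thing that matters for the official Ollama integration, as far as I understand it: the script only shows up as a tool once it is exposed to Assist (Settings > Voice assistants > Expose), and the LLM relies on the script's description and per-field descriptions to figure out how to call it. So it is worth filling in the fields section; a sketch of what that could look like (descriptions and examples are mine, adjust to taste):

check_memory:
  alias: Check Memory
  description: >-
    Search stored memories by title, value, category, subcategory,
    subsubcategory, and priority.
  fields:
    title:
      description: Title text to search for
      example: wifi password
    category:
      description: Top-level category of the memory
      example: home
    priority:
      description: Optional priority filter
      example: high
  # sequence: ... (the steps shown above)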
