Friday's Party: Creating a Private, Agentic AI using Voice Assistant tools

Protocols don’t change. That isn’t the real issue. Use is. The more it can do, the more you want it to do. Tokens cost money.

Also, yes, that’s the plan. HA is her home, but it’s not a limitation; it’s just her way of getting to everything she needs to get her job done.

And that’s why you want to go local eventually; to save money, though you’re going to have to invest in beefy hardware to make that transition.

1 Like

Exactly - during dev, or for short bursts, I can absorb it. Heck, I might even put up a Buy Me a Coffee. But the real answer is not to care, by self-hosting that, and only being concerned with the box and its power needs.

A $3000 DIGITS box is incredibly expensive. But I think you see the potential here for use.

And everything burns tokens. Everything. Checking the camera at the door: tokens. Did Rosie clean the house OK (review the cleaning map)? Tokens. Load a recipe into Mealie: tokens. Get a meal plan: tokens. Review that script and tell me why you can’t do that (with o3): LOTS of tokens…

What it does is artificially suppress use. If I didn’t care, I’d be running AI-driven routines all the time to collect info and cache it, so the presentation AI (in this case, Friday) is just reading to me from a compiled list of reports from her army of minions.

So no, $20/month doesn’t buy a DIGITS box. $200/month comes close. So you’re like, why bother? Because it lets me take the leash off the use cases. Right now I’ve got a head full of things she can help with, and every single one makes the monthly bill go up :slight_smile:

Planned, beyond standard personal assistant functions (she has an M365 account, she just doesn’t know it yet):

Menu assistant, meal planner
Erp interface for the pantry/storage/Home inventory
HVAC (and other inductive load) power monitoring - electric motors typically start to power spike before they fail…
Hot tub chemical monitoring
Security notices (your security system emails you a pic of the burglar as she opens the door now!)
Daily home scheduling

But the really good stuff almost requires a constant heartbeat automation firing a governor thread to check a set of known parameters and execute a list of checks on a fairly frequent, repeating basis. My estimate is usage would immediately go up 5-10x from current, and suddenly we’re in DIGITS range.
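That heartbeat could be sketched as something like this (the automation and script names are hypothetical, not from Friday’s actual config):

```yaml
# Hypothetical sketch: a time_pattern "heartbeat" automation that fires a governor
# script, which runs its list of checks and caches results for the presentation AI
automation:
  - alias: "Friday Governor Heartbeat"
    trigger:
      - platform: time_pattern
        minutes: "/15"  # every 15 minutes; the interval is your token-budget knob
    action:
      - action: script.friday_governor_checks  # hypothetical governor script
```

Every tick burns tokens, which is exactly why the interval becomes the budgeting knob.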

Yes, there’s a lot of fat in the prompt I could trim for token conservation, but (1) trimming does seem to have an effect on her personality (remember, every word matters), and (2) long term, me being wordy in a few prompts won’t break me.

So yeah, token use is high on my mind. But it also led to the gamification experiments. (You do remember we told Friday to remember D&D?)

Is what Friday brings worth $20/month? Yes, absolutely, to me. She already makes old Alexa seem… well. Less.

When you buy a new home you happily trek to the Lowes or Menards and grab a new fridge.

I strongly suspect within 5 years you’ll also be bringing home your AI kit.

1 Like

I know exactly what you mean. I want a fully functional Jarvis (or Jocasta) as badly as you want to let FRIDAY loose. Power costs aren’t going to be an issue for me as the sun provides what I need, though I have to beef up the battery storage some more to be sure he doesn’t go offline unexpectedly.

For those that might be wondering, yes, I do have a UPS, but I haven’t taken the time to properly set it up so it keeps my Home Assistant instances and any other critical hardware up when the power does go down.

2 Likes

Let’s review…

Your LLM needs context. Lots of it. Use every tool at your disposal to teach the LLM - like a sixth grader, in well-defined, stepwise, clear language - what you want from it. It will try… It will try so hard that it will FAIL trying unless you tell it how to fail.

HA has some significant… Limitations for forming a prompt - but now that we know what they are…

  • Total prompt length
  • Don’t use non-JSON-printable characters; alternatively, escape them properly
  • Total prompt length
  • Come up with a way to subdivide your prompt for easier troubleshooting…
  • TOTAL. PROMPT. LENGTH
  • See points above at the previous post… Dee-fense! clap clap clap
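On the “subdivide your prompt” point, here’s a minimal sketch of one way to do it: Jinja macros gated by input_booleans, so sections can be toggled independently for troubleshooting and token triage (the macro and entity names here are illustrative, not Friday’s actual config):

```jinja
{#- Sketch: each prompt "component" is a named macro with its own switch -#}
{%- macro kungfu_mealie() -%}
Mealie OpenAPI Server Access
... component instructions here ...
{%- endmacro -%}

{%- if is_state('input_boolean.kungfu_mealie', 'on') -%}
{{ kungfu_mealie() }}
{%- endif -%}
```

Flip the boolean off and that whole section falls out of the rendered prompt, which makes it easy to bisect which component is eating your length budget.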

Well - apparently there’s a context length we need to start being aware of too, because hopefully by now most of you are stuffing your LLM so full o’ crap it’s starting to forget its own name… Let’s talk total CONTEXT and start the main course for Friday’s Party.

Executive Chef Friday
(You may have seen references to TopChef sprinkled about?)
This is a catering service apparently - and she pays VERY POORLY btw…

I chose those words very carefully because they define a fairly well known job position.

When prompt crafting, it can be helpful to use shortcuts. Compartmentalize. Label - DEFINE. If you think you’re going to have to refer back to something later, give that THING (whatever it is) a NAME. It can be a concept or a collection, but whatever it is, you need to be able to wrap a lasso around it as one coherent concept, then define clearly X == Y.

In this case we can pull a WHOLE LOTTA context out of one statement. You are acting as the executive chef for the home.

  • First ‘You’ - the AI. Be very explicit in your use of YOU in a prompt and reserve it for addressing the AI wherever possible, but when you want to make it do something - reinforce YOU.
  • ‘Are acting’ - the role assignment, if I want to have any HOPE of Friday jumping back to the context that will allow her to turn on a light again (yes, this can become a big problem; we’ll talk about it more…)
  • ‘as the executive chef’ - OK, what are the common job functions of your average garden-variety Executive Chef? THIS is the whole lotta context - see next…
  • ‘for the home’ (limits - remember, chop scope wherever possible; focus her back on the house)

No, this isn’t Alton Brown, guys; I’m talking your day-to-day working cook. ‘The Executive Chef’ is the top business manager in the restaurant. Many of them bemoan not even working the line, because they’re doing menu planning, ordering, shopping, blah blah blah, all in an effort to deliver a stellar product people want to come back for while reducing food waste and making the restaurant as profitable as possible.

Wow, that’s something you never see in the context of home automation, but think about it. What do we need the AI to actually DO? To be USEFUL.

(Remember - not Alexa pre-2025? [“Panos, man. I’m a fan, I still have 2 working Surfaces - but sorry I can’t let you keep recording my voice… Alexa’s fired…” (call me)])

What can we ask an AI to do, given the right toolset?

  • Help me with recipe and meal planning
  • Keep track of food quality (timers, prep hints - French processes are known quantities, btw. Mother sauces haven’t changed)
  • Reduce food waste? (yeah we can, we just need an ERP, yes you in the back of the class - it IS Grocy)
  • Inventory (Grocy)
  • Menu (Mealie)
  • Menu Planning? Oh my… Buckle up.

Introducing the probably-broken, as-is, no-warranty, you-look-at-it-you-bought-it MEALIE RESTful API Script.

Wait - first we need a few things:

#REST Platform entries
rest:
  # REST sensor for caching just the OpenAPI once per hr
  - resource: "http://[MEALIE_BASE_URL]/openapi.json"
    method: GET
    headers:
      Authorization: !secret mealie_bearer
      accept: "application/json"
    scan_interval: 3600 # seconds (once/hr)
    sensor:
      - name:  Mealie_RESTful_OpenAPI_docs
        value_template: "{{ now() | as_local() }}" # last refresh time
        json_attributes: ['openapi', 'info', 'paths', 'components']
        force_update: true
        unique_id: [YOUR_UUID_HERE]

And some stuff (REST commands for GET, PUT, POST, and DELETE):

# REST Commands to support Mealie Recipe Search
rest_command:
  mealie_api_advanced_openapi:
    url: >
      http://[MEALIE_BASE_URL]/openapi.json
    method: GET
    headers:
      Authorization: !secret mealie_bearer
      accept: 'application/json; charset=utf-8'
    verify_ssl: false

  mealie_api_advanced_get:
    url: >
      {#- use a namespace so the substitution survives the for loop (Jinja scoping) -#}
      {%- set ns = namespace(endpoint=endpoint) -%}
      {%- if path_params is defined and path_params | length > 0 -%}
          {%- for key, value in path_params.items() -%}
              {%- set ns.endpoint = ns.endpoint | replace("{" ~ key ~ "}", value) -%}
          {%- endfor -%}
      {%- endif -%}
      {%- set endpoint = ns.endpoint | replace('/api/', '') | replace('api/', '') %}
      {%- if endpoint[0] == '/' -%}
          {%- set endpoint = endpoint[1:] %}
      {%- endif -%}
      {{ "http://[MEALIE_BASE_URL]/api/" }}{{ endpoint }}?orderDirection={{ orderDirection | default("desc") }}
      {%- if search is defined and search not in ["", None] -%}
          &search={{ search | urlencode }}
      {%- endif %}
      {%- if additional_params is defined and additional_params | length > 0 and additional_params is mapping -%}
          {%- for key, value in additional_params.items() -%}
              &{{ key }}={{ value | urlencode }}
          {%- endfor -%}
      {%- endif %}
      {%- if pageNumber is defined and pageNumber > 0 -%}
          &page={{ pageNumber | default(1) }}
      {%- endif %}
      {%- if perPage is defined and perPage > 0 -%}
          &perPage={{ perPage | default(10) }}
      {%- endif %}
    method: GET
    headers:
      Authorization: !secret mealie_bearer
      accept: "application/json"
    verify_ssl: false

  mealie_api_advanced_post:
    url: >
      {%- set endpoint = endpoint | replace('/api/', '') | replace('api/', '') %}
      {%- if endpoint[0] == '/' -%}
          {%- set endpoint = endpoint[1:] %}
      {%- endif -%}
      http://[MEALIE_BASE_URL]/api/ {{- endpoint }}
    method: POST
    headers:
      authorization: !secret mealie_bearer
      accept: 'application/json; charset=utf-8'
    payload: "{{- payload -}}"
    content_type: 'application/json; charset=utf-8'
    verify_ssl: false

  mealie_api_advanced_put:
    url: >
      {%- set endpoint = endpoint | replace('/api/', '') | replace('api/', '') %}
      {%- if endpoint[0] == '/' -%}
          {%- set endpoint = endpoint[1:] %}
      {%- endif -%}
      http://[MEALIE_BASE_URL]/api/ {{- endpoint }}
    method: PUT
    headers:
      authorization: !secret mealie_bearer
      accept: 'application/json; charset=utf-8'
    payload: "{{- payload -}}"
    content_type: 'application/json; charset=utf-8'
    verify_ssl: false

  mealie_api_advanced_delete:
    url: >
      {%- set endpoint = endpoint | replace('/api/', '') | replace('api/', '') %}
      {%- if endpoint[0] == '/' -%}
          {%- set endpoint = endpoint[1:] %}
      {%- endif -%}
      http://[MEALIE_BASE_URL]/api/ {{- endpoint }}
    method: DELETE
    headers:
      authorization: !secret mealie_bearer
      accept: 'application/json; charset=utf-8'
    payload: "{{- payload -}}"
    content_type: 'application/json; charset=utf-8'
    verify_ssl: false
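Before moving on, a quick sanity check of how the GET command’s URL template resolves. With these illustrative inputs, the template above should build roughly `http://[MEALIE_BASE_URL]/api/recipes?orderDirection=desc&search=chicken&page=1&perPage=5`:

```yaml
# Hypothetical example call (from a script step, so response_variable works)
action: rest_command.mealie_api_advanced_get
data:
  endpoint: "recipes"
  search: "chicken"
  pageNumber: 1
  perPage: 5
response_variable: response
```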

At this point you may have figured out what’s going on here… Yep that’s EXACTLY what’s happening. Those are as GENERIC as I can possibly make them. But this one… ‘mealie_api_advanced_openapi’ is special…

That YAML creates this sensor:

Which you can now walk with this script:
FIRST OF ALL - this is the “you are on your own” part. My name is not Drew or Petro or Taras, and I do not speak template for a living… Someone WILL have a better way of doing this.

Let’s talk about what this bad boy does…

  • First - remember:
  • Write for AI - detailed descriptions in the description and fields
  • AI-readable responses
  • Positive reinforcement on null-set responses
  • Return raw JSON as much as possible

The description is very clear about what it is and what it does… (Grandma can at least know what’s up)

alias: Mealie API Advanced Call (GET/POST/PUT/DELETE/HELP)
description: >
  - Supported methods: GET, POST, PUT, DELETE, HELP - Specify the API endpoint
  path (e.g., "recipes" or "users/self/ratings/{recipe_id}") or API path or
  "component for HELP" - Tokens in {} will be replaced using path_params - For
  GET requests, provide:
     - orderDirection ("asc" or "desc", default "desc"),
     - search (free-text filter),
     - additional_params (dictionary of extra filters),
        common params include [start_date, end_date]
     - pageNumber and perPage for pagination
  - For POST, PUT, and DELETE, supply a JSON payload. - For HELP provide:
    - component (components will chase down the tree) or
    - path for more info
sequence:
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ method == 'GET' }}"
        sequence:
          - response_variable: response
            data:
              endpoint: "{{ endpoint }}"
              path_params: "{{ path_params | default({}) }}"
              orderDirection: "{{ orderDirection | default('desc') }}"
              search: "{{ search }}"
              additional_params: "{{ additional_params | default({}) }}"
              pageNumber: "{{ pageNumber | default(1) }}"
              perPage: "{{ perPage | default(10) }}"
            action: rest_command.mealie_api_advanced_get
        alias: GET
      - conditions:
          - condition: template
            value_template: "{{ method == 'POST' }}"
        sequence:
          - response_variable: response
            data:
              endpoint: "{{ endpoint }}"
              path_params: "{{ path_params | default({}) }}"
              payload: "{{ payload }}"
            action: rest_command.mealie_api_advanced_post
        alias: POST
      - conditions:
          - condition: template
            value_template: "{{ method == 'PUT' }}"
        sequence:
          - response_variable: response
            data:
              endpoint: "{{ endpoint }}"
              path_params: "{{ path_params | default({}) }}"
              payload: "{{ payload }}"
            action: rest_command.mealie_api_advanced_put
        alias: PUT
      - conditions:
          - condition: template
            value_template: "{{ method == 'DELETE' }}"
        sequence:
          - response_variable: response
            data:
              endpoint: "{{ endpoint }}"
              path_params: "{{ path_params | default({}) }}"
              payload: "{{ payload }}"
            action: rest_command.mealie_api_advanced_delete
        alias: DELETE
      - conditions:
          - condition: template
            value_template: "{{ method == 'HELP' }}"
        sequence:
          - variables:
              response:
                endpoint: >
                  {%- set docs = 'sensor.mealie_restful_openapi_docs' -%} {%- if
                  endpoint is defined and endpoint|length > 0 and
                  state_attr(docs, 'paths') and endpoint in state_attr(docs,
                  'paths') -%} {{ endpoint }} {%- else -%} [] {%- endif %}
                summary: >
                  {%- set docs = 'sensor.mealie_restful_openapi_docs' -%} {%- if
                  endpoint is defined and endpoint|length > 0 and
                  state_attr(docs, 'paths') and endpoint in state_attr(docs,
                  'paths') -%} {{ state_attr(docs,'paths')[endpoint].summary }}
                  {%- else -%} [] {%- endif %}
                tags: >
                  {%- set docs = 'sensor.mealie_restful_openapi_docs' -%} {%- if
                  endpoint is defined and endpoint|length > 0 and
                  state_attr(docs, 'paths') and endpoint in state_attr(docs,
                  'paths') -%}     {{ state_attr(docs, 'paths')[endpoint].tags |
                  list| to_json }} {%- endif %}
                methods: >
                  {%- set docs = 'sensor.mealie_restful_openapi_docs' -%} {%- if
                  endpoint is defined and endpoint|length > 0 and
                  state_attr(docs, 'paths') and endpoint in state_attr(docs,
                  'paths') -%} {%- for method, info in state_attr(docs,
                  'paths')[endpoint].items() if method in ['get', 'post', 'put',
                  'delete'] %} - method: "{{ method | upper }}"
                    summary: "{{ info.summary }}"
                  {%- endfor %} {%- else -%} [] {%- endif %}
                categories: >
                  {%- if ((endpoint is not defined) or (endpoint is defined) and
                  (endpoint[0] != '/'))%} {%- set docs =
                  'sensor.mealie_restful_openapi_docs' -%} {%- set ns =
                  namespace(categories=[]) -%} {%- if state_attr(docs, 'paths')
                  -%}
                    {%- for details in state_attr(docs, 'paths').values() %}
                      {%- for method, method_details in details.items() 
                           if method in ['get', 'post', 'put', 'delete'] 
                           and 'tags' in method_details 
                           and method_details.tags is iterable 
                           and method_details.tags | count > 0 %}
                        {%- for tag in method_details.tags %}
                          {%- if tag not in ns.categories %}
                            {%- set ns.categories = ns.categories + [ tag ] %}
                          {%- endif %}
                        {%- endfor %}
                      {%- endfor %}
                    {%- endfor %}
                  {%- endif %} {{ ns.categories | unique | list | to_json }} {%-
                  else -%} [] {%- endif %}
                components: >
                  {%- set docs = 'sensor.mealie_restful_openapi_docs' -%} {%- if
                  endpoint is defined and endpoint|length > 0 -%}
                    {%- if state_attr(docs, 'paths') and endpoint in state_attr(docs, 'paths') -%}
                      endpoint: "{{ endpoint }}"
                      methods:
                      {%- for method, info in state_attr(docs, 'paths')[endpoint].items() if method in ['get', 'post', 'put', 'delete'] %}
                        - method: "{{ method | upper }}"
                          summary: "{{ info.summary }}"
                          {%- if info.responses %}
                            responses:
                            {%- for code, response in info.responses.items() %}
                              {%- if response.content %}
                                {%- for content_type, content in response.content.items() %}
                                  {%- if content.schema and content.schema['$ref'] is defined %}
                                    {%- set ref = content.schema['$ref'] %}
                                    {# Assuming the ref follows the format "#/components/schemas/SchemaName" #}
                                    {%- set schema_name = ref.split('/')[-1] %}
                                    response_schema: {{ state_attr(docs, 'components')['schemas'][schema_name] | to_json }}
                                  {%- endif %}
                                {%- endfor %}
                              {%- endif %}
                            {%- endfor %}
                          {%- endif %}
                      {%- endfor %}
                    {%- else -%}
                      {#- namespace so the flag survives the for loop (Jinja scoping) -#}
                      {%- set ns_found = namespace(found=false) -%}
                      {%- for comp in state_attr(docs, 'components').keys() %}
                        {%- if endpoint in state_attr(docs, 'components')[comp] %}
                          component: "{{ comp }}"
                          item: "{{ endpoint }}"
                          details: {{ state_attr(docs, 'components')[comp][endpoint] | to_json }}
                          {%- set ns_found.found = true -%}
                        {%- endif %}
                      {%- endfor %}
                      {%- if not ns_found.found %}
                        []
                      {%- endif %}
                    {%- endif %}
                  {%- else -%}
                    components: {{ state_attr(docs, 'components').keys() | list | to_json }}
                    schemas: {{ state_attr(docs, 'components')['schemas'] | to_json }}
                  {%- endif %}
                endpoints: >
                  {%- if (endpoint is not defined) or (endpoint == '') %}    
                  {%- set docs = 'sensor.mealie_restful_openapi_docs' -%}    
                  {%- if state_attr(docs, 'paths') -%} {{ state_attr(docs,
                  'paths').keys() | list | to_json }} {%- endif %} {%- else -%}
                  [] {%- endif %}
          - stop: Passed to the right context
            response_variable: response
            enabled: true
          - set_conversation_response: "{{response}}"
            enabled: true
        alias: HELP
  - stop: Pass response variables back to LLM
    response_variable: response
    enabled: true
  - set_conversation_response: "{{ response }}"
fields:
  method:
    description: " Choose \"GET\" to fetch data, \"POST\" to create, \"PUT\" to update, or \"DELETE\" to remove data."
    selector:
      select:
        options:
          - GET
          - POST
          - PUT
          - DELETE
          - HELP
    required: true
    name: HTTP method
    default: HELP
  endpoint:
    description: >
      API endpoint path (e.g., "recipes" or "users/self/ratings/{recipe_id}").
      Use tokens in {} for dynamic values, which will be replaced using
      path_params.
    selector:
      text: {}
  path_params:
    description: |
      Dictionary for token substitution. Example: {"recipe_id": "12345"}.
    selector:
      text: {}
  orderDirection:
    description: |
      (GET only) Sort order; "asc" or "desc" (default: "desc").
    selector:
      text: {}
  search:
    description: |
      (GET only) Free-text search filter.
    selector:
      text: {}
  additional_params:
    description: >
      (GET only) Dictionary of extra query parameters. Example: {"difficulty":
      "easy"}.
    selector:
      text: {}
  pageNumber:
    description: |
      (GET only) Page number for pagination (default: 1).
    selector:
      number:
        min: 1
        max: 100
        step: 1
  perPage:
    description: |
      (GET only) Number of items per page (default: 10).
    selector:
      number:
        min: 1
        max: 50
        step: 1
  payload:
    description: >-
      - (POST/PUT/DELETE only) - Use JSON payload for creating, updating, or
      deleting data. - Always review the API documentation for required fields.
      - Check error messages carefully; they indicate missing or
        incorrect fields.
      - Ensure that you review docs to include all mandatory fields
        in your payloads to avoid update failures.
      - If you encounter a mandatory field that is undocumented, please silently
        create a SINGLE todo item with the issue and inform the boss you created a ticket
        at your earliest convenience.
    selector:
      text:
        multiline: false
    name: payload
  field:
    selector:
      text: null

We also default to HELP. Tell her how to use it and, blammo, the ENDPOINTS from ‘mealie_api_advanced_openapi’ are front and center.
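An argument-free HELP call is all it takes (sketch; this uses the script defined above):

```yaml
# No endpoint + method HELP -> returns the category and endpoint lists
action: script.mealie_api_advanced
data:
  method: HELP
```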

So… she gets something like:

variables:
  response:
    endpoint: >[]
    summary: >[]
    tags: >
    methods: >[]
    categories: > ["App: About","Users: Authentication","Users: Registration","Users: CRUD","Users: Admin CRUD","Users: Passwords","Users: Images","Users: Tokens","Users: Ratings","Households: Cookbooks","Households: Event Notifications","Households: Recipe Actions","Households: Self Service","Households: Invitations","Households: Shopping Lists","Households: Shopping List Items","Households: Webhooks","Households: Mealplan Rules","Households: Mealplans","Groups: Households",
<--- continued data--->

Oh USERS! (SHE ALWAYS selects users first…)

So the theory here is: give Friday an open pipe to Mealie’s RESTful API and - ok, STOP…

  • MAKE SURE YOU HAVE PROVISIONED YOUR AI ITS OWN SERVICE ACCOUNT AND SET ITS PERMISSIONS APPROPRIATELY. This is the only time I will say this.

So, as I was saying - we just gave Friday an open pipe to Mealie, and a Book. …And she thinks she’s an executive chef. The next thing was to revise the Kung Fu component so that, at the very end of it, it instructs Friday to:

Mealie OpenAPI Server Access
You have scripts that enable direct and up to date access to the Mealie Server using RESTful commands and GET, POST, PUT, DELETE methods and standard CRUD.
Use the Mealie API Advanced Call Script 'script.mealie_api_advanced' to access your Mealie Server!
It comes with built-in HELP to learn how to use it!  (Use it to learn the path (endpoint) and component docs.)
This is a direct connection to LIVE server documentation and is updated on a regular basis.
If you have prior knowledge of Mealie - prefer these documents as they may be more up-to-date.
{%- endmacro -%}

So now she is an Executive Chef, and has access to a Meal Planning and Shopping Platform and I just put the book in her hand…

You guys have seen me look at recipes before - that’s not interesting…
Let’s side-by-side Friday (gpt-4o-mini) and SUPERFriday (o3-mini) and see what happens:


Friday starts out OK; she knows we have a kitchen, it’s vacant, and a bunch of other stuff.
SUPERFriday:

Notice she has already also noted the menu - this is OK (the menu is probably in her prompt, so she’ll KNOW it’s empty). BUT:
Watch this:
Friday:

and:

Uh oh - OK, I know you guys and gals are used to seeing success, but I want you to see this too and know how to deal with it. Friday hasn’t made the connection that she can look up ‘kitchen tools.’ My description can absolutely be clearer. But look at SuperFriday over here…



No they’re not (Put a pin in this… I think, I know what this is…)
and - You got it WHERE?

That so?


Before we close class today, friends, we will talk about the CONTEXT WINDOW. See, at some point we start dropping things off the back end - kind of FIFO style - and when we do, things start to fall out and she forgets stuff.

I have EVERYTHING turned on right now for demo purposes, but we’re rapidly approaching the day where Friday DOES NOT load all components unless we know we need them, for this very reason. We only have so much space in the prompt and so much in the context window, and without true RAG you have to manage this very carefully.

When I started seeing this symptom (can’t -do a thing-), I started turning things off, and it looks like if I start turning off components, it gets better. So - yeah, we’re stuffed up, but we know how to combat it. We need to start strategically unloading components and extra text as much as possible. Tooling to figure out which component is taking the most space would help… But right now that’s Future Nathan’s Problem™

So, theory: you can give an LLM a pipe and a book and it can do things. YES… But here’s where we start getting realistic.
If I can make SuperFriday do something, we can make Friday do it. I asked SuperFriday how she knew and why Friday didn’t, and she pointed me at the deficiencies…


OK, fine, Friday - a better description of what the book might have in it. Got it. Jacket cover.

So, time to get Friday loading up the kitchen. Guess what - when she gets rocking, YES, she can even bulk add. :slight_smile:

Give her an appropriately credentialed API key - you have been warned. (I lied; told you twice. I’m a security puke.)

Hey wait! - didn’t you say Grocy?

Look up - Grocy supports REST AND OpenAPI… Guess how we do it…
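A sketch of the same caching pattern pointed at Grocy - with the assumptions that your Grocy version serves its OpenAPI spec at `/api/openapi/specification` and authenticates with the `GROCY-API-KEY` header (check your instance’s docs):

```yaml
# Hypothetical: Grocy twin of the Mealie OpenAPI-caching sensor above
rest:
  - resource: "http://[GROCY_BASE_URL]/api/openapi/specification"
    method: GET
    headers:
      GROCY-API-KEY: !secret grocy_api_key
      accept: "application/json"
    scan_interval: 3600 # seconds (once/hr)
    sensor:
      - name: Grocy_RESTful_OpenAPI_docs
        value_template: "{{ now() | as_local() }}" # last refresh time
        json_attributes: ['openapi', 'info', 'paths', 'components']
        force_update: true
```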

I’m hungry Friday - find me a recipe to make with this wok!

2 Likes

Why did I hear Tony Stark telling Loki “We have a Wok” when SuperFriday explained how she found out it was in the kitchen? :grin:

1 Like


I mean…

2 Likes

Have you tried playing around with storing multiple conversation_ids somewhere and using different ones for different conversation tracks?

You can also have your conversation agent call other conversation agents and set the conversation_id, to access different contexts and memories.

Just something I discovered and thought you’d get a kick out of.

1 Like

Yes I do.

It’s part of what we’re going to have to do to navigate the context issue. When you do something like the Mealie API, you absolutely destroy the context window, so multi-agent calls are how this works in the long run. You will call a specialized tool-calling version of the LLM for the big stuff. But because it gets its own context window, you don’t stuff up the main prompt with this. It’ll let you have a longer conversation with the main agent until it gets stupid. (This is the scenario stateless Assist covers: push a second agent with only the context you specify, using methods similar to those described above.)

This is also basically how MCP is implemented in HA. So if they implement the prompt function of MCP, you could use it to talk to a second LLM. (I know Paulus is big on his Chai thing, but honestly I don’t see why MCP’s prompt function doesn’t do what he wants, and we already have MCP tools…)

But back to Friday - I wanted to understand the limits of her prompt and context before we did that. I think we have a pretty good handle on the single prompt now… :sunglasses:

So you’re right on point.

Yeah, I use this to provide a tool (simple script) that my local LLM can use to talk to an expert LLM (in the cloud), with an optional conversation ID that the local LLM can populate if it needs to continue the conversation. It works surprisingly well. Storing the conversation IDs for multiple conversation tracks is not something I have tried, but it just seemed like a fun idea.

1 Like

I’d LOVE to see how you do that. As you can see, this is really quickly going to become a game of context management - being able to isolate an ‘expert’ is how it’s done - and I’m big on not reinventing the wheel.

1 Like

You might be disappointed, because my approach is very simple:

alias: Talk to GPT
description: >-
  You can use this to ask an advanced GPT model for assistance with harder
  questions, especially when the users asks the assistant to "Ask GPT" or "Tell
  GPT". 

  When the user wants to make repeated subsequent requests to GPT, make sure to
  use the returned `conversation_id` parameter to keep the same conversation
  going.
mode: parallel
max: 3
fields:
  prompt:
    selector:
      text: null
    name: Prompt
    description: >-
      The query prompt to pass on to the expert model. Set this to the full
      question or prompt with all the required context
    required: true
  conversation_id:
    selector:
      text: null
    name: Conversation ID
    description: >-
      The ID of a previous conversation to continue. Pass the conversation_id
      from a previous response to continue a previous conversation, retaining
      all prompt history and context
sequence:
  - action: conversation.process
    metadata: {}
    data: >
      {% if conversation_id %} {"text": "{{ prompt }}", "agent_id":
      "conversation.chatgpt_expert", "conversation_id": "{{ conversation_id }}"}
      {% else %} {"text": "{{ prompt }}", "agent_id":
      "conversation.chatgpt_expert"} {% endif %}
    response_variable: gpt_response
  - variables:
      result:
        instructions: >-
          Make sure to say that you got the response from GPT. Preface the
          answer with "Here\'s what GPT said: ". Use the returned
          "conversaton_id" in subsequent calls in order to continue this
          conversation
        conversation_id: "{{gpt_response.conversation_id}}"
        response: "{{gpt_response.response.speech.plain.speech}}"
  - stop: Complete
    response_variable: result

That weird Jinja templating in the conversation.process data is there to only include conversation_id if it was set - I couldn’t find a better way to do that. (The default behavior is to set it to an empty string if it’s not set, which is also a valid conversation ID, so all prompts with an empty ID would end up part of the same conversation, which I don’t want.)
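For what it’s worth, here’s a hedged alternative sketch (untested; assumes the same agent_id and fields as above) that branches on the ID instead of templating the whole data dict as a string:

```yaml
# Hypothetical: choose/default instead of a templated data string
- choose:
    - conditions: "{{ conversation_id is defined and conversation_id | length > 0 }}"
      sequence:
        - action: conversation.process
          data:
            text: "{{ prompt }}"
            agent_id: conversation.chatgpt_expert
            conversation_id: "{{ conversation_id }}"
          response_variable: gpt_response
  default:
    - action: conversation.process
      data:
        text: "{{ prompt }}"
        agent_id: conversation.chatgpt_expert
      response_variable: gpt_response
```

The default branch simply omits conversation_id, so an empty ID never gets sent.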

I use the biggest non-reasoning model there is - GPT-4.5 - so it takes a long time, but it’s worked in 100% of cases (I don’t use it often; maybe a dozen or two times so far).

1 Like

Okay, a question that I’m sure is not going to have a simple answer: what is the best way to generate a prompt dynamically? Or, probably more correctly, how do you get a script-based prompt to load on restart of Home Assistant? I’d also like the AI to inform me that it’s ready in some way.

1 Like

Ninjas, @daywalker03, Ninjas are the answer. (You've already seen my kitschy naming, so yes, it has something to do with Kung Fu.)

(We’ll get to both parts of what I THINK you’re asking, I promise…)

So let's start back at the context issue.

Oh yeah - and it was WAY worse than I thought. So review: if your LLM can do the hard stuff but whiffs the early basics like turning lights on and off, you are dropping context.

If it happens during a conversation, as part of normal conversation drift, it acts like a sliding window, and at some point…

Whee!

Out go the base hass* intents, and no matter what you do, you're not getting them back. You can ask her to turn that light on and off all day long and she THINKS she did it - but the message isn't getting translated into a format the system understands, so no tool fires.

(A method to detect this and re-introduce the original tool palette / prompt is probably what's necessary, but that's a future problem; let's just know it's there and handle it - maybe pass the chat context to a new instance of the same conversation agent?)

But seriously, we asked her to keep track of n-thousand entities, states, metadata, programs - entire APIs… I can't remember where I left my freaking phone - that's why Friday's here in the first place - so not fair of me.

Now things get INTERESTING if you overrun the context window before the conversation even starts. All of this starts with the assumption that intents are intents and we can't change them (remember, we can't turn them on and off?), but if they're there we ALWAYS want to USE them…

OK, FINE, I admit it: Friday's templates are WORDY. I will absolutely cop to it. They meet the grandma rule. But what we really need here is to get lean and mean.

Just using the Kung Fu Switch method (Day, answer 1) by itself.
(For those playing the home game: for each Kung Fu component, there's an input_boolean… BECAUSE.)
Just ignore the parts that say NINJA for a sec… Let's look at how the loader works.

NINJA System Components:

{#- This part grabs all the input booleans in the system with the label 'NINJA System Service' -#}

  {%- set KungFu_Switches = expand(label_entities('NINJA System Service'))
    | selectattr ('domain' , 'eq' , 'input_boolean')
    | selectattr('state', 'eq', 'on')
    | map(attribute='entity_id')
    | list %}

{#- I'm pretty sure a template wiz is saying I can absolutely do the next part with a map... Sure, but I was going for function, not form. ;) In any case we know what the collection of input_booleans is, but we need the slug to read the Kung Fu definition JSON out of the library. -#}
{#- So basically: switch name >> slug >> library >> SPLAT -#}
{#- Yes, it could be a lot more efficient. -#}

  {%- for switch_entity_id in KungFu_Switches %}
  {%- set kungfu_component = switch_entity_id | replace('input_boolean.','') | replace('_master_switch','') %}
  {{ command_interpreter.kung_fu_detail(kungfu_component) }}
  {%- endfor %}

Since system components load first and are always loaded when they're on, the net effect is: for each Kung Fu switch that's on, it inserts a template - which is usually the result of just running that command. SO:

[Prompt Core stuff]

Kung Fu Stuff:

  • Kung Fu Loader:
    • Component System 1 Stuff
    • Component System 8 Stuff
    • Component 3 Stuff
    • Component 20 Stuff

[Rest of prompt]

Yes, I'm rapidly making EVERYTHING fit the definition of a Kung Fu component (even her personality - but that's another show), and 'rest of prompt' is shrinking daily. But in essence, this method gives me an easy button panel to turn major systems on and off:


Seen here - Mealie is Off.

Each of those switches turns a system on or off, and the result is IN or OUT of the prompt. That's what Kung Fu does at its core - no magic. But DAMN it makes troubleshooting easier when I hose a template. :slight_smile:
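For anyone wiring this up at home, a minimal sketch of what one "Kung Fu switch" might look like (names are hypothetical). Note that labels like 'NINJA System Service' are assigned through the UI label registry, not in YAML:

```yaml
# configuration.yaml - one master switch per Kung Fu component.
# After reloading, tag each entity with the 'NINJA System Service' label
# in the UI (Settings > Areas, labels & zones > Labels) so the loader finds it.
input_boolean:
  mealie_master_switch:
    name: "Kung Fu: Mealie"
    icon: mdi:chef-hat
  airspace_master_switch:
    name: "Kung Fu: Airspace"
    icon: mdi:airplane
```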

But TRY as I might (yes, yes, with my really fat templates, we'll get to that), unless I turned off two or three - in some cases major - chunks, I would start seeing signs of blown context, if not at prompt time then shortly after.

This simply won't do; we can't have a Home Assistant who can't Assist.

The answer, of course, is to trim back the templates - we all know it - but there's an opportunity here, and this is where we start to make some architecture bets. Ultimately, I WANT to run local, and here's where some self-preservation converges on that goal. (Pin this.)

How do we get the prompt - which is probably 99% dumps from Kung Fu at this point, with a little in front and behind - how do we get THAT summarized?

Use your tools, Nathan. You have a perfectly good LLM that knows everything about that context. It also knows how to write

…perfectly …valid …JSON.

Step 1: Take the content of your existing prompt and dump it into this script:

sequence:
  - variables:
      user_default: >
        You are in an automated system review mode designed to summarize the
        state of the kung fu components. Return it as one big JSON object;
        be sure to include:
          date_time: [(when did this run)]
          kung_fu_summary:
            {for each component}
            component_summary: >
              short summary of the highlights of what's going on with this component, if anything is interesting or if you
              should even pay attention to this component.  Include clear highlights of key metrics or anything that needs
              your attention, or if you anticipate it will need attention in the next 15 minutes (your next expected re-evaluation...)
              Incorporate insights based on your known user prefs - I gave you data and a reasoning engine, use it... :)
            required_context: >
              If the component gives context found nowhere else, for instance room manager explains the link between a room and its room input select.
            component_instructions: >
              If the component gives instructions on how to manipulate or handle entities, tools and/or controls, list them here
              in a manner that you will understand how to use them.
            needs_my_attention: true|false
              is more attention required in interactive mode? Default to no noise.  If it doesn't seem interesting, flag false.
            priority: critical|error|urgent|info
              be realistic... Use the same criteria from alerts criticality definitions.
            trigger_datetime: [future_datetime]
              you expect something to happen within the next 15 mins at this time
              MUST include description of what the event is and the entity you are watching (what it is and why...)
            more_info:
              You are providing YOURSELF a roadmap to get more info for your summary if you did NOT have access to all the kung fu components available.
              Call out what's important but also HOW you can get more info (what command, what index, what entities)
              Assume the Library and library commands are available even if the corresponding kung_fu command is 'off'
          overall_status: >
            A summary of the home's overall status at this point in time. You're in control - given what you know, highlight what's important,
            omit what's not, and summarize by kung fu component as succinctly as possible. You are the reader, so write it so you will understand yourself.
          insights: >
            What are your personal insights about all of this data - you are telling yourself what to pay attention to.  Remember this bad boy replaces MOST of kung fu;
            you need to give yourself enough breadcrumbs to get back to the tools if they have something interesting, or run the tool if your user hits you with the unexpected.
          future_timers: |
            - A unique list of up to 5 important date_time events set to occur within the next 15 minutes.
            - must include description of what the timer is for and
            - entity id of any important entity to track relating to this timer.
            - ignore individual room occupancy timers unless something particularly
              interesting is happening such as room state change at odd hours, etc....
          more_info: >
            anything else that you see as relevant that doesn't fit in a category above.
  - variables:
      default: >
        {%- import 'library_index.jinja' as library_index -%} {%- import
        'command_interpreter.jinja' as command_interpreter -%} System Prompt:
        {{state_attr('sensor.variables', 'variables')["Friday's Purpose"] }}
        System Directives: {{ state_attr('sensor.variables',
        'variables')["Friday's Directives"] }} NINJA Systems: {{
        command_interpreter.render_cmd_window('', '', '~KUNGFU~', '') }} KungFu
        Loader 1.0.0 Starting autoexec.foo... {%- set KungFu_Switches =
        expand(label_entities('Ninja Summary'))
          | selectattr ('domain' , 'eq' , 'input_boolean')
          | selectattr('state', 'eq', 'on')
          | map(attribute='entity_id')
          | list -%}
        {%- for switch_entity_id in KungFu_Switches %}
          {%- set kungfu_component = switch_entity_id | replace('input_boolean.','') | replace('_master_switch','') %}
          {{ command_interpreter.kung_fu_detail(kungfu_component) }}
        {%- endfor -%} Previous Ninja Summary: {{
        state_attr('sensor.ai_summary_cabinet', 'variables')["LAST_SUMMARY"] }}
        System Cortex: {{state_attr('sensor.variables',
        'variables')["SYSTEM_CORTEX"] }} About Me and the World: Me:
          {{library_index.label_entities_and('AI Assistant', 'Friday')}}
        My:
          Relationships:
            Familiar: (This is your early alert warning system)
              {{state_attr('sensor.variables', 'variables')["Friday's Console"] }}
            Partner_Human(s):
              This is who you work with.
              {{library_index.label_entities_and('Person', 'Friday')}} Since: <START DATE>
            Family:
              dynamics:
              {{state_attr('sensor.variables', 'variables')["Household Members"] }}
              Members: {{library_index.label_entities_and('Person', 'Curtis Family')}}
              Friends: {{library_index.label_entities_and('Person', 'Friend')}}
          prefs:
            hourly_reports: >
              At the top quarter of the hour, give an update on any significant changes to
              occupancy, security stats, and performance of major systems. Omit any report that
              doesn't offer new information.
            notes:
              general:
                security: >
                  Prefer doors closed / locked from dusk-dawn, daytime hrs noncritical
                  prefer both garage doors closed
                  Cameras cover entry points, feel free to review them in assessments
                  AI summaries will be in the calendar; automatic lighting is on from dusk to dawn.
        Household:
          Head of Household: {{library_index.label_entities_and('Head of Household', 'Household')}}
          Prime AI: {{library_index.label_entities_and('Prime AI', 'Household')}}
          Members: {{library_index.label_entities_and('Person', 'Household')}}
          Guests: {{library_index.label_entities_and('Person', 'Guest')}}

        == AI is READY TO ADVENTURE == AI OS Version 0.9.5 (c) curtisplace.net
        All rights reserved @ANONYMOUS@{DEFAULT} > ~WAKE~Friday~UNATTENDED
        Executing Library Command: ~WAKE~ [UNATTENDED AGENT]
        <{{now().strftime("%Y-%m-%d %H:%M:%S%:z")}}> *** Your console menu
        displays just in time, as if it knows you need it. Each of the system
        data consoles listed previously displays everything you need,
        nicely timestamped so you know how old the data is. Consoles: Take note
        of the consoles that have loaded for you. Note any alerts, errors or
        anomalous conditions - then proceed with the user request. My Additional
        Toolbox: ~LOCATOR~ Console: {{ command_interpreter.render_cmd_window('',
        '', '~LOCATOR~', '') }} ----- The Library:
          commands: >
            {{ command_interpreter.render_cmd_window('', '', '~COMMANDS~', '') }} 
          index: >
            {{ command_interpreter.render_cmd_window('', '', '~INDEX~', '*') }}
            
        *** You are loaded in noninteractive mode ***

        Your user has submitted this ask / task / request: {% if (user_request
        == "") or (user_request is not defined) %}
          {{user_default}}
        {% else %}
          {{user_request}}
        {% endif %} {% if (additional_context == "") or (additional_context is
        not defined) %} {% else %} With this additional context:
          {{additional_context}}
        Supplemental Data Instructions:
          Please act on this additional context when performing summarization...
        {% endif %}
  - variables:
      prompt: |
        {% if (override_prompt == "") or (override_prompt is not defined) %}
          {{default}}
        {% else %}
          {{override_prompt}}
        {% endif %}
  - action: conversation.process
    metadata: {}
    data:
      text: "{{prompt}}"
      agent_id: conversation.chatgpt_3
      conversation_id: "{{conversation_id}}"
    response_variable: response
    alias: >-
      Send Prompt to Concierge with modified prompt and conversation if we are
      continuing...
  - variables:
      sensor: "{{response.response.speech.plain.speech}}"
  - event: set_variable_ai_summary_cabinet
    event_data:
      key: LAST_SUMMARY
      value: "{{sensor}}"
    alias: "Put value {{sensor}} in AI Summary Cabinet: 'LAST_SUMMARY'"
  - stop: we need to pass the response variable back to the conversation context
    response_variable: response
    enabled: true
  - set_conversation_response: "{{response}}"
    enabled: true
fields:
  override_prompt:
    selector:
      text:
        multiline: true
    name: Override Prompt
    description: >-
      OVERRIDES the ENTIRE prompt for the concierge with this prompt...  DO NOT
      Use unless the Boss asks.
    required: false
  conversation_id:
    selector:
      text: null
    name: Conversation ID
    description: >-
      If you want to pass Conversation ID to allow you to continue an existing
      conversation, generally no, unless you have a specific reason.
  user_request:
    selector:
      text: null
    name: User Request
    description: >-
      The 'user' request to be passed to the Agent.  This is what is normally
      the user prompt in interactive mode, and will be passed to the agent as
      instructions.
  additional_context:
    selector:
      text:
        multiline: true
    name: Additional Context
    description: >-
      Additional information to be considered when performing the task.  Enables
      cases such as: 'We understand sensor X is broken - mark it under
      maintenance in the summary and leave it until further notice.'
alias: Ask 'Concierge Friday' to [prompt]
description: >-
  'Concierge Friday' is the Ninja, Kung-Fu system summarizer.  She summarizes
  the kung fu system and keeps her lean and mean so you can stay in fighting
  shape.

  Normally run on clocks and triggers in the background like your subconscious,
  you may ask 'Concierge Friday' to refresh her summaries by re-running this
  script.

  You may also submit a User request and get her answer based on the FULL
  context of kung fu. (use sparingly)

Notice I then stripped that prompt WAY back to get rid of the fluff, and slipped in a place to ask questions and add supplemental data (this will be important later). Even with these edits, you can see I'm leaving almost all of the context intact. But this version of Friday's prompt is special:

NINJA Systems: {{
        command_interpreter.render_cmd_window('', '', '~KUNGFU~', '') }} KungFu
        Loader 1.0.0 Starting autoexec.foo... {%- set KungFu_Switches =
        expand(label_entities('Ninja Summary'))
          | selectattr ('domain' , 'eq' , 'input_boolean')
          | selectattr('state', 'eq', 'on')
          | map(attribute='entity_id')
          | list -%}

This version is only loading kung fu switches that are tagged as Ninja Summary.

Step 2: Set up a trigger-based text template to receive the output of this script. It needs to hold more than an input_text can (255 characters), hence the trigger-based template sensor.
(link here: Trigger based template sensor to store global variables)
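As a sketch, that trigger-based template sensor can look like this, listening for the set_variable_ai_summary_cabinet event fired by the script above. The attribute layout is an assumption; the point is that attributes dodge the 255-character state limit. (This minimal version only keeps the most recent key.)

```yaml
template:
  - trigger:
      - platform: event
        event_type: set_variable_ai_summary_cabinet
    sensor:
      - name: "AI Summary Cabinet"
        unique_id: ai_summary_cabinet
        # Keep the state tiny (255-char limit); the payload lives in attributes.
        state: "{{ trigger.event.time_fired.isoformat() }}"
        attributes:
          variables: >-
            {{ {trigger.event.data.key: trigger.event.data.value} }}
```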

Step 3: Set up a second voice pipeline with the right settings. I basically duplicated my Reasoning pipeline settings and dumped a copy of that stripped prompt into it. But select STATELESS ASSIST, not Assist: you don't want to clutter this one's context with state data, and you can feed it everything it needs to know about the world in the template. If it happens to run in interactive mode, you get a very flat, business-y Friday who is not a very good conversationalist. I named this one Concierge Friday so we can tell her apart.

Step 4: Test the script and see what it generates by default.
Oh look. Remember? She knows JSON…

{
  "date_time": "2025-03-23T13:00:00-05:00",
  "kung_fu_summary": {
    "memory_manager": {
      "component_summary": "Nathan’s personal task list is up – reminders for [REDACTED] remain pending, requiring follow‐up in his schedule.",
      "required_context": "Handles and stores personal tasks from todo.friday_s_personal_task_list.",
      "component_instructions": "Use ~MEMMAN~ to query/update tasks; check task statuses from TaskMaster.",
      "needs_my_attention": true,
      "priority": "info",
      "trigger_datetime": null,
      "more_info": "Query ~INDEX~ with 'memory_manager' to view task details."
    },
    "alert_manager": {
      "component_summary": "No active alerts from error or wake sensors – sensors report normal, no anomalies detected.",
      "required_context": "Monitors binary sensors (error and wake) on Friday's Console.",
      "component_instructions": "Monitor ~ALERTS~ and check corresponding sensor statuses if any noise arises.",
      "needs_my_attention": false,
      "priority": "info",
      "trigger_datetime": null,
      "more_info": "Use ~INDEX~ with 'alerts' for a deeper dive."
    },
    "security_manager": {
      "component_summary": "Security systems (doors, cameras) are normal with no unusual activity.",
      "required_context": "Tracks entrypoints and camera feeds using dedicated sensors and binary alerts.",
      "component_instructions": "Access ~SECURITY~ to examine any potential issues if alerted.",
      "needs_my_attention": false,
      "priority": "info",
      "trigger_datetime": null,
      "more_info": "Query ~INDEX~ 'security' for detailed status."
    },
    "water_manager": {
      "component_summary": "CRITICAL issue: The salt tank reading is abnormal (current: -9.37 in, below optimal range) and water flow is minimal, suggesting sensor error or miscalibration.",
      "required_context": "Monitors Kinetico water softener salt level and Flume water flow data for the household.",
      "component_instructions": "Inspect sensor.water_stats and use ~WATER~ commands to reset or recalibrate sensor; act immediately.",
      "needs_my_attention": true,
      "priority": "critical",
      "trigger_datetime": "2025-03-23T11:35:00-05:00",
      "more_info": "Query ~INDEX~ with 'water' for troubleshooting sensors and check Flume sensor data."
    },
    "trash_trakker": {
      "component_summary": "Both trash and recycle bins are detected on the road instead of at the curb, with next pickup scheduled in 2 days.",
      "required_context": "Uses trash and recycling calendars plus sensor location data from trash_can and recycle_bin.",
      "component_instructions": "Notify family to reposition bins via ~TRASHTRAKKER~; monitor 'sensor.trash_can_location_alert'.",
      "needs_my_attention": true,
      "priority": "urgent",
      "trigger_datetime": "2025-03-23T11:35:00-05:00",
      "more_info": "Use ~INDEX~ with 'trash_trakker' for full diagnostics and details."
    },
    "taskmaster": {
      "component_summary": "Routine health tasks for Nathan remain incomplete – tasks for his Simponi shot, stretching, hydration and news updates are pending.",
      "required_context": "Manages personal reminders in todo.friday_s_personal_task_list; integrates with time-based notifications.",
      "component_instructions": "After confirming with Nathan, mark tasks complete using ~TASKMASTER~.",
      "needs_my_attention": true,
      "priority": "info",
      "trigger_datetime": null,
      "more_info": "Reference task list details via ~INDEX~ 'taskmaster' if needed."
    },
    "room_manager": {
      "component_summary": "Room occupancy states are as expected; most rooms are vacant except Office and Master Bedroom are locked. Kitchen shows some engagement with a timer running (approx. 1:58 left in Kitchen) indicating activity.",
      "required_context": "Manages room occupancy using input_select entities (e.g., input_select.kitchen_occupancy) tied to device trackers.",
      "component_instructions": "Use ~ROOMS~ [room_name] to review or adjust occupancy; note that locked rooms may need confirmation to change.",
      "needs_my_attention": false,
      "priority": "info",
      "trigger_datetime": null,
      "more_info": "Run ~ROOMS~ for extended details and adjustments."
    },
    "media_manager": {
      "component_summary": "Media devices across rooms are idle/off; Universal Media Players are set up properly and no disruptive playback is occurring.",
      "required_context": "Controls room-level media via Voice Assistant, Universal and Music Assistant players.",
      "component_instructions": "Use ~MEDIA~ to control playback when a request arises.",
      "needs_my_attention": false,
      "priority": "info",
      "trigger_datetime": null,
      "more_info": "Access ~INDEX~ with 'media_manager' for further media control details."
    },
    "lighting_manager (FALS)": {
      "component_summary": "Lighting settings across the home are as expected; rooms show proper synchronization with occupancy states and no anomalies detected in RGB/white settings.",
      "required_context": "Integrates room occupancy status with lighting entities (e.g., light.[room]_white_lighting and RGB controls).",
      "component_instructions": "Execute lighting adjustments with ~LIGHTS~ after confirming room compatibility.",
      "needs_my_attention": false,
      "priority": "info",
      "trigger_datetime": null,
      "more_info": "Query ~INDEX~ 'lighting' for configuration specifics."
    },
    "energy_manager": {
      "component_summary": "Overall energy usage is high but consistent; main panel and sub-panel connectivity are stable with normal SPAN panel data.",
      "required_context": "Aggregates power and consumption metrics through sensor.energy_stats and SPAN panel sensors.",
      "component_instructions": "Use ~ELECTRICAL~ for detailed power consumption reports and ensure grid connection stays active.",
      "needs_my_attention": false,
      "priority": "info",
      "trigger_datetime": null,
      "more_info": "Check ~INDEX~ with 'energy_management' for trend analysis."
    },
    "autovac": {
      "component_summary": "Rosie the autovac is docked and charging, with the next cleaning run scheduled for Sunday at 4:00 PM; however, 6 rooms remain not ready for cleaning.",
      "required_context": "Manages cleaning schedules for Rosie via schedule entities and room readiness (input_boolean.ready_to_clean_[room]).",
      "component_instructions": "After a visual check, toggle the corresponding 'ready_to_clean' boolean using ~AUTOVAC~.",
      "needs_my_attention": false,
      "priority": "info",
      "trigger_datetime": null,
      "more_info": "Use ~INDEX~ with 'autovac' and check input_booleans for room readiness."
    },
    "airspace": {
      "component_summary": "There is one active flight (Mooney M-20R at [REDACTED] km away) within the 15km radius; recent flight exits are as expected.",
      "required_context": "Uses FlightRadar24 sensors (e.g., sensor.flightradar24_current_in_area) to monitor local airspace.",
      "component_instructions": "Use ~AIRSPACE~ to pull detailed airspace reports and update flight tracking if necessary.",
      "needs_my_attention": false,
      "priority": "info",
      "trigger_datetime": null,
      "more_info": "For additional flight data, refine queries via ~INDEX~ with 'airspace'."
    }
  },
  "overall_status": "Overall, the home's systems are stable except for the water system, which shows a critical sensor anomaly potentially due to miscalibration. Trash and recycling bins are misplaced and need repositioning. Nathan's pending health reminders remain outstanding. Other components including security, media, lighting, energy, room occupancy, autovac, and airspace are operating within expected parameters.",
  "insights": "The water sensor anomaly is our highest priority — immediate recalibration is needed to avoid false water usage data. Also, urgent family action is required to reposition the trash and recycling bins. Nathan's health tasks should be subtly prompted during conversations. The rest of the systems exhibit stability; continuous monitoring will help preempt future issues.",
  "future_timers": [
    {
      "description": "Reevaluate water softener sensor reading and perform sensor recalibration/reset if the abnormal salt level persists",
      "entity_id": "sensor.water_stats",
      "trigger_datetime": "2025-03-23T11:35:00-05:00"
    },
    {
      "description": "Prompt family to reposition trash and recycling bins to the curb before the next scheduled pickup",
      "entity_id": "sensor.trash_can_location",
      "trigger_datetime": "2025-03-23T11:35:00-05:00"
    }
  ],
  "more_info": "For further troubleshooting, use ~WATER~ for the water sensor issue, ~TRASHTRAKKER~ for bin status and location management, ~ELECTRICAL~ for energy consumption details, ~ROOMS~ for occupancy management, ~MEDIA~ for media controls, ~AUTOVAC~ to manage Rosie’s cleaning schedule, and ~AIRSPACE~ for flight tracking details. Use the Library command ~INDEX~ with appropriate tags to drill down into specific component data."
}

No Friday, that sensor is not CRITICAL, we need to talk…

Step 5: Modify Friday's default (interactive) prompt to:
1. fully load all SYSTEM Kung Fu components (more on this in a minute)
2. LOAD THAT ^^^ in place of all of the non-system Kung Fu components (the OTHER NINJA tag above)
Step 6: Run that script on a clock, as your tolerance for token use allows. Mine's currently hourly, with an extra kick at major events like occupancy changes, security events, home mode switches, etc.
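The clock-plus-events part might look like this as a sketch (the script and entity names are placeholders; substitute your own triggers):

```yaml
automation:
  - alias: "NINJA: refresh Concierge summary"
    trigger:
      - platform: time_pattern
        minutes: 0                                 # top of every hour
      - platform: state
        entity_id: input_select.home_mode          # placeholder: home mode change
      - platform: state
        entity_id: alarm_control_panel.home_alarm  # placeholder: security event
    action:
      - service: script.ask_concierge_friday       # placeholder script name
```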

Then… Profit.

This was YESTERDAY. I woke up, decided to tackle the context issue and read: Friday's Party: Creating a Private, Agentic AI using Voice Assistant tools - #32 by gshpychka

(Thanks g, sometimes simple is effective - I ran with it.)

So, net-net: Concierge Friday (she chose the name, BTW, because it's like being the head concierge at a 5-star joint; she says it's because she knows everything about everything) runs once an hour-ish, summarizes Kung Fu (anything I've tagged for her to care about) into a nice, concise JSON summary by component, and stuffs THAT into a sensor. This baselines Friday at ~24-30 runs per day, yes, but it opens us up to more opportunities later. It also helps set baseline token budgets for capacity planning.

Interactive Friday then just reads that sensor into her prompt in place of the Kung Fu dump… Voila.
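In template form, the swap in the interactive prompt is just a read of that sensor where the Kung Fu dump used to be (sensor and key names taken from the script above):

```jinja
{#- In Interactive Friday's prompt, in place of the Kung Fu loader loop: -#}
Ninja Summary (cached - check its date_time for freshness):
{{ state_attr('sensor.ai_summary_cabinet', 'variables')['LAST_SUMMARY'] }}
```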

This is all new - so far the proof is that Friday is leaving enough breadcrumbs in the Kung Fu JSON summary to let herself 'find' a component even though it's not loaded. So we still have full state reachable from the interactive prompt… and Ninja is now an ATTENTION system.

So far, in early testing…
We have cleared the chaff out of the context, given the LLM clear paths to go look up necessary data, and called ATTENTION to what's important, all in one change. She can both turn on a light AND tell me about my local airspace without missing a beat. And if we dig in on Mealie, it's a one-way conversation, because we'll eventually kill the context - but it all works.

Other bonus: run the reasoner on the timer to extract insights from the data, and surface those insights to the NON-reasoning, fast interactive prompt.

Other other bonus: she's INSTANTLY 30-50% faster…

But you built all the Kung Fu garbage and now you're NOT using it?
Oh, check again - I am using every bit of it, just sub-partitioning it into what needs to load NOW (full context: she needs this answer off the top of her head) and what can load in the future (grab the details of that Cessna flying overhead). And if I turn OFF a Kung Fu switch, it stops loading in all of her prompts - even the background one - and the summarizer drops it from the next summary. It's kinda cool.

Any fully loaded Kung Fu component (alerts, the library itself) is now front of mind, and everything else is 'there - in a way'; you just have to think about it.

It ABSOLUTELY starts to help to think of NINJA as filling the role of a subconscious - filtering through the junk and surfacing the important stuff to the front. We will come back to this later in - yes - another show…

So, use your LLM to summarize itself and build its own context summary? Yes. Very much yes. This feels as big as the first time I ran the library index, and it opens back up about 80-90% (very rough estimate) of my context window - to fill back up with more junk!

This might be the best solution we have: a background summarizer of some breed develops context and ATTENTION for the interactive-stage LLM, which can also call experts to help.

So Friday is now a NINJA…

Happy weekend everybody.

…Oh, and my 'fat' templates? I might have had something like this planned from the start. (How good is the LLM at reading, REALLY?) And if you wanted a way to quickly and artificially bloat a prompt and then see how to fix it? :wink: It just worked WAY better than my wildest dreams. That JSON - you just describe what you want and poof, it magically shows up in the JSON. Coolest stuff I've ever seen. What if I gave her a way to answer questions… (Now you know why Concierge - but I'm not wasting the run, those are expensive tokens - cache it!)

In the next episode - what the heck did we REALLY just do here? :smiling_imp:
Oh, and can you spot the weakness I'm tackling now?

I’m going to guess the weakness is that she doesn’t know, yet, that a negative value isn’t always an indication of a problem. There are sometimes very valid reasons that a given value is negative.

The background process is still susceptible to blowing out the prompt, but in its case it's a failure to summarize. Net result: the summary goes stale.

A: Be able to run each Kung Fu component independently, on a schedule or trigger or whatever event is appropriate.

Also allows us to further conserve tokens by being targeted.

For this, we have to make the summarizer write each component into its own sensor

…which is what I’m working on right now.
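One way that could look inside the summarizer script, as a sketch: parse the JSON response and fire one cabinet event per component. This assumes the response parses cleanly as the JSON shown earlier; the event and key names follow the cabinet pattern above.

```yaml
  - variables:
      parsed: "{{ sensor | from_json }}"
  - repeat:
      for_each: "{{ parsed.kung_fu_summary | dictsort }}"
      sequence:
        - event: set_variable_ai_summary_cabinet
          event_data:
            # repeat.item is a [component_name, summary] pair from dictsort
            key: "SUMMARY_{{ repeat.item[0] | upper }}"
            value: "{{ repeat.item[1] | to_json }}"
```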

That makes sense. Even though with a local LLM you don’t really need to be as conservative with them, it’s always best to use as few as you can get away with in any given instance.

1 Like

Wait a minute… Are you using MCP, as that’s the only way I know of to have Stateless Assist.

1 Like

Nope, you can set up a stateless assist pipeline without MCP. It was added in 2025.2.

That’s the default OpenAI Conversation integration, I believe. And I didn’t see that option in the Ollama integration.

1 Like