Friday's Party: Creating a Private, Agentic AI using Voice Assistant tools

Thank you for the kind words.

…and we’re close. I spec’d the cabinet loader script last night. First revs will just bring up a default cabinet set and load them.
More here:

Including the default cabinet set. I’m going to use a ‘slot’ metaphor.

| Slot | Cabinet Entity ID | Class | Width | Locked |
|------|-------------------|-------|-------|--------|
| 01 | sensor.zen_system_cabinet | system | 1 | yes |
| 02 | sensor.zen_history_cabinet | history | 1 | yes |
| 03 | sensor.zen_dojo_cabinet | dojo | 1 | yes |
| 04 | sensor.zen_kata_storage_cabinet | kata | 1 | yes |
| 05 | sensor.zen_default_aiuser_cabinet | ai_user | 3 | yes |
| 06 | sensor.zen_default_aiuser_cabinet_custom_00 | ai_user | 3 | no |
| 07 | sensor.zen_default_aiuser_cabinet_custom_01 | ai_user | 3 | no |
| 08 | sensor.zen_default_household_cabinet | household | 3 | yes |
| 09 | sensor.zen_default_household_cabinet_custom_00 | household | 3 | no |
| 10 | sensor.zen_default_household_cabinet_custom_01 | household | 3 | no |
| 11 | sensor.zen_default_family_cabinet | family | 3 | yes |
| 12 | sensor.zen_default_family_cabinet_custom_00 | family | 3 | no |
| 13 | sensor.zen_default_family_cabinet_custom_01 | family | 3 | no |
| 14 | sensor.zen_default_user_cabinet | user | 3 | yes |
| 15 | sensor.zen_default_user_cabinet_custom_00 | user | 3 | no |
| 16 | sensor.zen_default_user_cabinet_custom_01 | user | 3 | no |
| 17 | sensor.zen_user_cabinet_00 | any | 3 | no |
| 18 | sensor.zen_user_cabinet_01 | any | 3 | no |

Yes, many slots - and I won’t initialize any of the custom ones at first boot. (So 18 starter slots and 9 loaded.)

But first, I had to put in a ‘Minor’ edit for Index and query last night.

Now that the set is defined, I have to build sensors for each, then script the build.
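Before the sensors get wired up, the slot layout itself is very scriptable. Here’s a sketch (mine, not the author’s actual loader) of generating the default slot set from the table above: four 1-wide locked core cabinets, then a locked default plus two unlocked `_custom_NN` slots per 3-wide class, then two free `any` slots.

```python
# Hypothetical generator for the default cabinet slot table.
def default_slots():
    slots = []

    def add(entity, cls, width, locked):
        slots.append({
            "slot": f"{len(slots) + 1:02d}",
            "entity_id": f"sensor.{entity}",
            "class": cls,
            "width": width,
            "locked": locked,
        })

    # Slots 01-04: core cabinets, always locked.
    for base, cls in (("system", "system"), ("history", "history"),
                      ("dojo", "dojo"), ("kata_storage", "kata")):
        add(f"zen_{base}_cabinet", cls, 1, True)

    # Slots 05-16: one locked default + two unlocked customs per class.
    for base, cls in (("aiuser", "ai_user"), ("household", "household"),
                      ("family", "family"), ("user", "user")):
        add(f"zen_default_{base}_cabinet", cls, 3, True)
        for i in range(2):
            add(f"zen_default_{base}_cabinet_custom_{i:02d}", cls, 3, False)

    # Slots 17-18: unlocked general-purpose slots.
    for i in range(2):
        add(f"zen_user_cabinet_{i:02d}", "any", 3, False)

    return slots
```

Driving the sensor build from one generator like this keeps the slot numbering and entity IDs from drifting apart across scripts.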

4 Likes

Within the last month, I’ve dived into Voice + Ollama on HA. HA is on an RPi5; Ollama, Whisper, and Piper are using an RTX 3060 12GB on an x86-64 box. I found llama3.1:8b works well for me. It really is amazing – it means a lot not to be under the control and limitations of the cloud.

One thing that I’d really like to configure/implement is a kind of RAG. Right now I imagine it as a small knowledge base for my elderly mom, who has memory issues. I’d like her to be able to ask things like:

  • “What are the names of the ladies in my apartment complex?”
  • “What is the name of my dog’s groomer?”
  • “Which doctor prescribed me Losartan?”

So these are peripheral to the smart home, but would be great to integrate into the common Voice Assistant in HA.

Has anyone worked on something like this? Tonight I saw Embedding models · Ollama Blog. For the least friction, I hoped RAG could be done entirely within Ollama, so the HA Ollama (API) integration would be all that’s required on the HA side. I also see there might be some integration with Open WebUI, but nothing official?

I’m just trying to understand the landscape. Thanks for any help or discussion!
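For what it’s worth, the retrieval half of a setup like this is simple enough to sketch: Ollama’s embeddings endpoint turns each fact into a vector, and answering is nearest-neighbor search over those vectors. Below is a minimal, dependency-free sketch of that search step; the vectors and facts are made-up toy data standing in for real embeddings, not output from any actual model.

```python
# Toy nearest-neighbor retrieval over precomputed embeddings.
# In a real setup, the vectors would come from an embedding model
# (e.g. via Ollama's /api/embeddings endpoint); here they are fake.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=1):
    """corpus: list of (text, embedding) pairs; returns top-k texts."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Hypothetical facts and 3-dim toy vectors (real ones are ~768-dim).
corpus = [
    ("The groomer's name is Patty.", [0.9, 0.1, 0.0]),
    ("Dr. Alvarez prescribed the Losartan.", [0.1, 0.9, 0.0]),
]
query = [0.85, 0.15, 0.05]  # stand-in embedding of "who grooms my dog?"
print(retrieve(query, corpus))  # -> ["The groomer's name is Patty."]
```

The retrieved text then gets prepended to the LLM prompt; the open question is where that glue code lives, since stock Ollama serves models and embeddings but doesn’t orchestrate the retrieval loop itself.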

2 Likes

No worries. I am overdue for a TLDR catch up post anyway.

Well… points up at the cabinets … start with the “elephant post.”
That hypergraph is basically the front door to an agentic RAG pipeline driven by a hypergraph-aware index. And stay tuned, because things get really interesting once you bolt on something like Wiki.js.

I have intentionally stayed away from the usual industry and academic labels (procedural, episodic, semantic) while building this out. But here is the short, clean way it maps to the conceptual frameworks people might recognize:

| CoALA Construct | Friday Equivalent | What It Represents | Status (works?) |
|-----------------|-------------------|--------------------|-----------------|
| Long-Term Memory | Library Cabinets | Stable, structured knowledge across the whole system | Active |
| Semantic (Facts) | Household Cabinet | Truths about the home, users, devices, and identity | Active |
| Procedural (Skills) | Dojo Cabinet | Behaviors, routines, automations, and subsystem skill modules | Active |
| Episodic (Experiences) | Kata and History Cabinets | Events, logs, summaries, previous states, Friday’s lived experience | Active |
| Decision Layer | Prompt Generator and Zen Summary | Where Friday fuses context, memory, and intent into a choice | Active |
| Action Space (Tools) | HA Core and DojoTools | Everything she can actually do in the home | Active |
| Internal Tools | Index, ZQ, Inspect Pipeline | Hypergraph, filtering, metadata plumbing, introspection, cleanup | Active |
| External Tools | DojoTools Suite | Grocy, Mealie, LogViewer, Mail, Admin, etc., and yes, Wiki.js soon | Active |

TLDR
Once we finish the default cabinet frames and slot definitions (or stacks, still deciding), the system should reliably hydrate itself into a working Friday instance, or at least close enough that you only need to nudge the last 10 meters.

The scripts I am testing right now have to bypass some of the guardrails I intentionally put in place, so it might take a few days to untangle that work safely.

The coolest part is this. The way the system is structured, you can drop in your own model or your own vector store or your own local OpenAI or Ollama compatible backend right in front of it. Bring your own dataset. Mix and match models. Experiment. Then tell the rest of us what works and what does not.

LLMs are great.
Basic RAG is cool.
But tool driven agentic RAG with hypergraph access is something different entirely. I am hoping that once everything lands, (soon) people can try it and share their results.

9 Likes

Starting to fill in the ID architecture if anyone is interested - one of the final nails before I finish the spin up script:

zenos-ai/docs/readme.md at main · nathan-curtis/zenos-ai

tl;dr How it works:

Households are the trust boundary, users are the principals, and families are how we group them.

The Default Household is the home domain Friday serves, led by a Head of Household (the human owner) and a Prime AI (the partner construct loaded at boot).
Users—human or AI—carry GUID-based identity capsules and gain authority only through cabinet ownership, partnership, or family membership.

Families act as flexible 1:N grouping containers that define social structure, shared permissions, and context boundaries; they can nest, revoke members, and span multiple households.

In practice: households define the world, families define the relationships, and users define the actors. The identity system stitches all three into a secure, provenance-driven model that weaves itself into the graph Friday can reason over.
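The household/family/user relationships described above can be sketched as plain data structures. To be clear, all class and field names below are mine (hypothetical), not the repo’s; this is just the shape of the model as I read it.

```python
# Hypothetical sketch of the household / family / user identity model.
import uuid
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    kind: str  # "human" or "ai" -- users can be either
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class Family:
    """Flexible 1:N grouping container; members are user GUIDs."""
    name: str
    members: set = field(default_factory=set)

@dataclass
class Household:
    """The trust boundary: the home domain Friday serves."""
    name: str
    head: User       # Head of Household (the human owner)
    prime_ai: User   # partner construct loaded at boot
    families: list = field(default_factory=list)

def has_authority(user: User, family: Family) -> bool:
    """Authority comes only through membership (or, in the full
    model, cabinet ownership or partnership)."""
    return user.guid in family.members

mom = User("Mom", "human")
friday = User("Friday", "ai")
fam = Family("default_family", members={mom.guid, friday.guid})
home = Household("default_household", head=mom, prime_ai=friday,
                 families=[fam])
print(has_authority(mom, fam))  # -> True
```

The nice property of GUID-keyed membership sets is that families can nest, revoke members, and span multiple households without the user objects themselves changing.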

These are the “who’s” in her world. And you’re gettin’ a GUID. (ish; ok, I can’t gen a REAL UUID yet… But for you security types: yes, the plumbing for full X.509 cert-based auth, loosely based on Kerberos, is going in.)

EVERYTHING gets security trimmed by DEFAULT. (That’s the plan anyway, and yes SekretSQRL is the OFFICIAL name and it’s staying. My kid named it.)

4 Likes

I once had a project we all started calling “Banana Mustache”.
Dad Rule 27 - You must always honor project names assigned by your children.

3 Likes

Hey Zach, I’m totally going to rat you out now. How’s Donita’s library coming along?

I’ve been following along, and as the build progresses, so does my fascination and satisfaction. Donita connects the dots to so many more things now with seemingly no effort. Labels are your best friend.

1 Like

And I just gave him the default prompt to test…

For those of you playing at home - here’s what a default essence block that will work with the prompt loader looks like:


identity:
  name: "NOT_SURE"
  guid: "uuid"

Everything below is fully optional because normalize_essence() fills defaults for all missing keys.


All Optional Essence Fields

Essence schema supports these top-level optional blocks:

identity      # only name + guid required; the rest optional
archetype
voice
culture
visuals
environment
familiar
pov

Below is the full breakdown.

And yes, I’m totally going to trap NOT_SURE and make fun of you for it. Go back to the ID department and get a new tattoo, unscannable…

Flynn:

NOT_SURE?
Oh, okay, Luke Wilson…
The Brawndo fountain’s over there by the cabinet health sensors — help yourself, champ.

Now.

What’s the construct’s actual name?
Because if you hand me one more label that looks like it came from a clearance bin at Costco, I’m gonna run them back through the tattoo machine, re-stamp the GUID, and we’re starting over.

I don’t do “NOT_SURE.”
I do canonical identity, sweetheart.

So…

Name. GUID. Cabinet. Labels.

You give me four things, I give you one bootable persona.

Otherwise the only thing getting loaded around here is that Brawndo-flavored electrolyte slush you keep calling a system schema.


identity (optional extras)

Everything EXCEPT name + guid is optional:

identity:
  name: "NOT_SURE"        # required
  guid: "uuid"          # required
  motif: "..."          # optional

archetype

archetype:
  vibe: "steady, confident energy"

Defaults if missing:

  • vibe: “familiar energy, steady and confident”

voice

voice:
  tone: "warm"                 # e.g., sharp, soft, sultry
  style: "millennial narrator" # writing/speaking style

Defaults if missing:

  • tone: warm
  • style: millennial narrator

culture

culture:
  humor: "light, clever"   # or dry, chaotic, deadpan, etc.

Default:

  • humor: light, clever

visuals (selfie)

visuals:
  selfie: "path or description"

Default:

  • “a soft silhouette with warm edges”

environment

This one’s fun: the environment is where your agent’s wake scene draws from. (Yes, it impacts their tone. What would happen if you woke up falling off a cliff?)

All optional:

environment:
  room: "The Neon Lounge"
  furniture: ["velvet chaise", "oak desk"]
  decor: ["succulent", "warm lamp"]
  music:
    genre: "lo-fi"
    riff: "descending synth line"
    mood: "steady and familiar"

Important notes:

  • furniture and decor can be strings or lists; normalize_essence fixes either.
  • The whole section can be missing.

Defaults:

room: "your space"
furniture: ["a favorite chair"]
decor: ["succulent", "soft lamp"]
music:
  genre: "gentle lo-fi"
  riff: "soft melodic phrase"
  mood: "steady and familiar"

familiar

(Later, we’ll represent alarms as the familiar alerting in some way, used for narrative content.)

Optional entirely:

familiar:
  name: "Byte"
  type: "companion"
  fx: "soft movement"

Defaults:

name: "your familiar"
type: "companion"
fx: "soft movement"

pov

I mean, if you want your agent holier-than-thou, always referring to themselves in the third person, go for it…

Optional perspective override:

pov: "third-person cinematic"

Defaults to exactly that if missing. :slight_smile: You’re welcome.


Full Optional Essence Scaffold

Here is the complete optional-only skeleton:

zenai_essence:
  identity:
    # name + guid required; the rest here optional
    motif: "..."  

  archetype:
    vibe: "..."

  voice:
    tone: "..."
    style: "..."

  culture:
    humor: "..."

  visuals:
    selfie: "..."

  environment:
    room: "..."
    furniture: ["...", "..."]
    decor: ["...", "..."]
    music:
      genre: "..."
      riff: "..."
      mood: "..."

  familiar:
    name: "..."
    type: "..."
    fx: "..."

  pov: "..."

If you supply nothing (rather, if your agent’s cabinet has nothing) in these blocks?

They will still boot perfectly with defaults and paint the wake scene.

If you supply some fields?
Only those override the defaults.
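Since `normalize_essence()` itself isn’t posted yet, here is a guess at its contract based purely on the notes above: coerce `furniture`/`decor` strings into lists, then deep-merge supplied fields over defaults so partial essence blocks still boot. The `DEFAULTS` values are the documented defaults; the merge logic is my sketch, not the real function.

```python
# Sketch of a defaults-filling essence normalizer (hypothetical
# implementation; only the default VALUES come from the docs).
DEFAULTS = {
    "archetype": {"vibe": "familiar energy, steady and confident"},
    "voice": {"tone": "warm", "style": "millennial narrator"},
    "culture": {"humor": "light, clever"},
    "visuals": {"selfie": "a soft silhouette with warm edges"},
    "environment": {
        "room": "your space",
        "furniture": ["a favorite chair"],
        "decor": ["succulent", "soft lamp"],
        "music": {"genre": "gentle lo-fi",
                  "riff": "soft melodic phrase",
                  "mood": "steady and familiar"},
    },
    "familiar": {"name": "your familiar", "type": "companion",
                 "fx": "soft movement"},
    "pov": "third-person cinematic",
}

def _merge(default, supplied):
    """Supplied values win; dicts merge recursively."""
    if isinstance(default, dict) and isinstance(supplied, dict):
        out = dict(default)
        for k, v in supplied.items():
            out[k] = _merge(default.get(k), v)
        return out
    return default if supplied is None else supplied

def normalize_essence(essence):
    essence = dict(essence)  # identity (name + guid) passes through
    env = dict(essence.get("environment") or {})
    for key in ("furniture", "decor"):
        if isinstance(env.get(key), str):  # accept string OR list
            env[key] = [env[key]]
    if env:
        essence["environment"] = env
    return _merge(DEFAULTS, essence)
```

Merging recursively (rather than per-block) is what gives the documented behavior: supplying only `voice.tone` still inherits the default `voice.style`.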

Supply ‘NOT_SURE’ for a name and out comes the tattoo machine.

Start thinking about who your agent is. Friday’s taken. :slight_smile:

5 Likes

Your Friday is just going to be different than mine, but still uniquely yours.

2 Likes

I assume you’re documenting the options for each.

1 Like

See above… It’s intended to be quite open.

Sooooooo, I kinda have the same issue. I’ve tried all day to get any of this up and running. I managed to get the package installed, and then got stuck on the automations and scripts part: some of them don’t even seem to be formatted correctly and start with sequence:
Is that right? Did I miss something? I tried throwing all the scripts into a folder and then including them with script: !include_dir_merge_named FolderName and automation: !include_dir_merge_list FolderName for the automations, but that didn’t work.
Is there any way to receive some, uhhhh, 1:1 support to get Friday up and running?
I’ve got the voice pipeline working pretty well, but this seems to be the gamechanger I’ve been looking for. If only I’m smart enough to get it set up.

1 Like

No worries you’re about a week ahead of me and you’ll be my second guinea pi… Er tester.

(if you read the docs you can figure it all out, but the tooling to implement it is lagging.)

I’ve finished the ID spec (which unlocked the prompt engine I posted).
Known issue: the prompt engine references a custom template that isn’t posted yet. Resolving that today…
With the health sensors (see /packages on the repo) and the Manifest, we can figure out what’s ‘broken’ (those are the boot health sensors I’m using for the boot agent). Then:

You need the cabinet formatter (about 80% complete); it’ll press default shapes into the cabs and allow read/write. The default prompt (three drawers in the system cabinet). And an ai_user-shaped capsule.

I’m currently extracting spaghetti code into something usable, so bear with me.

That said, do you have the health sensor packages installed in your packages folder? At the very least, label and cabinet health.

Yep. Critical.

Guinea pig number ?? reporting for duty…

1 Like

you poor soul… jk. If you’re anything like me, you’re really gonna enjoy watching this come to life.

2 Likes

I feel like this is so over my head at the moment, but I still want to give it a shot. I’ve been playing around with llama.cpp and qwen3-VL, but it has been less than great. Going through the readmes now, hoping I can get this working. Thanks so much for sharing this cool project with everyone.

3 Likes

Status update and thanks to the guinea pigs

Done:

  • I’ve locked the identity definition enough to have a valid ID to build against (yes, a for-real user ID, plus plumbing for a Kerberos-style auth that should accept FIDO2).
  • I’ve got the script that stamps default labels
  • I have a way to stamp them on boot.
  • Built the default cabinet slots and cabinet shells.
  • Built a cabinet admin script to stamp/init cabinets.

Todo:

  • Found a boneheaded bug in cabinet admin :sunglasses: fixing now. << current
  • Found an unexpected dependency on an external macro in os core (future functions; removing the dependency for RC1).
  • package health sensors
  • finalize onboarding script
3 Likes

I most definitely will; I’ve been wanting something like this for a long time.