02
Module 02 · Sovereign Intelligence Layer

The Vault

Four years of IP. One queryable intelligence layer. This is the most critical build of the sprint — the moat that makes the whole ecosystem impossible to replicate.

Sprint Days
Days 02 — 04
Tools
Obsidian · LlamaIndex · Supabase
Est. Build Time
12 — 15 Hours
Scroll
01
Chapter 01

What the Vault Actually Is

Most people who use AI tools are querying the entire internet. Every answer they get is contaminated by everything the model has ever seen — billions of web pages, other people's ideas, other people's voices, other people's wrong answers. The output is generic by design. It has to be, because it knows nothing specific about you.

The Vault changes that completely. Instead of querying the internet, every AI agent in this ecosystem queries a private, sovereign database built exclusively from four years of Cyber Coastlines IP. Methodology frameworks. Narrative architecture. Brand voice documentation. Course materials. Business SOPs. Every meaningful document ever produced under this signature — ingested, indexed, and made instantly retrievable.

When Claude reasons inside this system, it doesn't free-range. It reads your documents, retrieves the most relevant sections, and reasons exclusively against that context. The output sounds like you because it's built from you. That's not a feature. That's the entire strategic advantage.

The vault is the moat. Nobody can replicate what lives in there because nobody has four years of this specific IP. Every other tool in the stack is replaceable. The vault is not.

Cyber Blueprint · Strategic Principle
3
Tools working together to build the vault
4+
Years of IP being ingested and indexed
6
Content buckets organizing the raw material
0
Third-party clouds that touch your raw IP
02
Chapter 02

The Three Tools

Layer 01 · Raw Content
Obsidian
Local Vault · Encrypted Sync · Source of Truth

Obsidian is where the raw IP lives. Every document, every framework, every piece of content that has ever defined this ecosystem gets organized here in plain markdown files — local on the MacBook, backed up with end-to-end encryption via Obsidian Sync. Nobody at Obsidian can read the content. It never passes through a third-party cloud unencrypted.

Before the sprint begins, the vault content needs to be organized into six buckets: Creative Operations Methodology, Brand & Voice, Causeway Nova narrative assets, The Creative Companion curriculum, Business Operations, and Session Intelligence. This organization is the pre-work. The consultant cannot build the RAG on a pile of unstructured files.

Obsidian is the source of truth. Everything else in the vault architecture is downstream of it.

Layer 02 · Ingestion & Retrieval
LlamaIndex
RAG Framework · Chunking · Embedding · Retrieval

LlamaIndex is the librarian. It reads every document in the Obsidian vault, breaks them into semantically meaningful chunks, converts those chunks into vector embeddings, and stores them in Supabase. When a query comes in — from Claude, from a Lindy agent, from a direct question — LlamaIndex retrieves the most relevant chunks and assembles them into context before the reasoning engine ever sees them.

This is what makes the system intelligent rather than just searchable. A keyword search finds documents that contain the word. LlamaIndex finds documents that contain the meaning — even when the exact words don't match. That distinction is the difference between a search engine and an intelligence layer.
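That distinction can be made concrete with a toy sketch. Everything below is an illustrative stand-in, not the LlamaIndex API: real embeddings have hundreds of dimensions and come from an embedding model, but the ranking logic — order chunks by cosine similarity to the query vector — is the same idea.

```python
import math

# Toy illustration of meaning-based retrieval: rank chunks by cosine
# similarity of embeddings, so a query can match a chunk it shares no
# keywords with. The 3-number vectors are hand-made stand-ins.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

chunks = {
    "Our brand voice is calm, direct, and machine-extended.": [0.9, 0.1, 0.2],
    "Invoice processing SOP: export, reconcile, archive.":    [0.1, 0.9, 0.3],
}

def retrieve(query_embedding, top_k=1):
    ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_embedding), reverse=True)
    return ranked[:top_k]

# "How should the writing sound?" shares no keywords with the first chunk,
# but its toy embedding points in the same direction.
query = [0.85, 0.15, 0.25]
print(retrieve(query))  # the brand-voice chunk ranks first
```

A keyword search would miss that match entirely; the similarity ranking finds it because the vectors encode meaning, not spelling.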

LlamaIndex is open source and free. The consultant should configure the ingestion pipeline to run automatically whenever new content is added to the Obsidian vault — ensuring the intelligence layer stays current without manual intervention.
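One way to sketch that automatic trigger — an assumed design, not the consultant's actual pipeline — is a hash manifest: record a checksum per markdown file after each run, then re-ingest only the files whose checksum changed.

```python
import hashlib
import json
from pathlib import Path

# Incremental re-ingestion sketch (assumed design): hash every markdown
# file in the vault, compare against the manifest from the last run, and
# return only the new or changed files so they can be re-chunked and
# re-embedded. Unchanged files are skipped.
def changed_files(vault_dir: str, manifest_path: str) -> list[str]:
    manifest_file = Path(manifest_path)
    old = json.loads(manifest_file.read_text()) if manifest_file.exists() else {}
    new, dirty = {}, []
    for md in sorted(Path(vault_dir).rglob("*.md")):
        digest = hashlib.sha256(md.read_bytes()).hexdigest()
        new[str(md)] = digest
        if old.get(str(md)) != digest:
            dirty.append(str(md))
    manifest_file.write_text(json.dumps(new, indent=2))  # save state for next run
    return dirty
```

A cron job, launchd agent, or file watcher can call `changed_files` on a schedule and hand the result to the ingestion step — the vault stays current without re-embedding everything.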

Layer 03 · Vector Store
Supabase
Vector Database · pgvector · Intelligence Layer Host

Supabase hosts the intelligence layer — the embedded, indexed, queryable version of the vault. Every chunk that LlamaIndex processes gets stored here as a vector embedding alongside its source metadata. When retrieval happens, Supabase performs the vector similarity search that returns the most contextually relevant results in milliseconds.

The hosting decision for Supabase — managed cloud at $25/month or self-hosted on a private VPS — should have been made before the sprint began. If it hasn't been, that decision happens before a single line of vault architecture is written. Both options work. The self-hosted path gives full sovereignty. The managed path gives faster setup. The consultant's recommendation drives this call.

Either way, the vault data belongs to Cyber Coastlines LLC. The embeddings, the indexes, the configuration — all of it lives in GitHub, all of it is reproducible, all of it is owned.

03
Chapter 03

How It Flows

The vault is not a single tool — it's a pipeline. Understanding the flow is as important as understanding the individual components. Here is what happens from the moment content is created to the moment intelligence is returned.

01
Content Created
A session recording, a framework doc, a narrative asset, an SOP — any content produced under the ecosystem signature is the raw input.
↓   saved to vault
02
Obsidian — Raw Storage
Content is organized into the appropriate bucket. Local, private, encrypted. This is the source of truth.
↓   ingestion triggered
03
LlamaIndex — Chunking & Embedding
Documents are chunked into semantic units, converted to vector embeddings, tagged with source metadata.
↓   embeddings stored
04
Supabase — Vector Store
Embeddings and metadata live here. The vault is now queryable in real time.
↓   query received
05
Retrieval — Most Relevant Chunks
LlamaIndex performs similarity search. The most contextually relevant vault content is assembled into a context window.
↓   context passed to reasoning engine
06
Claude API — Reasoning Against Your IP Only
Claude receives only the retrieved vault content. It reasons exclusively against your IP. The output is grounded, specific, and sounds like Cyber Coastlines.
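The six steps above can be compressed into a toy end-to-end sketch. Everything here is a stand-in — the bag-of-words `embed`, the Jaccard `similarity`, and the in-memory `store` replace real embeddings, LlamaIndex, and pgvector — but the shape of the pipeline is the same: ingest chunks, rank them against the query, and assemble the context that gets handed to the reasoning engine.

```python
# Toy stand-in for the six-step flow (not LlamaIndex or Supabase):
# chunk documents, "embed" each chunk, retrieve the closest chunks for a
# query, and assemble the context string a reasoning engine would receive.

def chunk(doc: str, size: int = 12) -> list[str]:
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> set[str]:
    return set(text.lower().split())          # toy embedding: bag of words

def similarity(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0   # Jaccard overlap

store = []                                    # stand-in for the vector store

def ingest(doc: str, source: str) -> None:
    for c in chunk(doc):
        store.append({"text": c, "embedding": embed(c), "source": source})

def build_context(query: str, top_k: int = 2) -> str:
    q = embed(query)
    ranked = sorted(store, key=lambda r: similarity(r["embedding"], q), reverse=True)
    return "\n---\n".join(f'[{r["source"]}] {r["text"]}' for r in ranked[:top_k])

ingest("The brand voice is calm direct and specific never hype never filler", "brand-voice.md")
ingest("Causeway Nova is a smart city universe with layered district lore", "causeway-nova.md")
print(build_context("what does the brand voice sound like"))
```

In the real build, `embed` is an embedding model, `store` is the pgvector table in Supabase, and `build_context` is LlamaIndex's retrieval step — but this is the whole loop: the reasoning engine only ever sees what retrieval hands it.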
04
Chapter 04

The Six Buckets

The Obsidian vault needs to be organized before the ingestion pipeline can run. These are the six top-level folders. Every piece of existing IP belongs in one of them. If something doesn't fit, it either needs a new bucket — which requires a conversation during the sprint — or it's not ready for the vault yet.

01
Creative Operations Methodology
Every framework, principle, process document, and workflow diagram produced under the CreativeOps discipline. The intellectual core of 0300.ai.
02
Brand & Voice
The Vonn Seacoast design philosophy, tone guides, vocabulary, the "Human-led. Machine-extended." doctrine, aesthetic principles, and visual identity decisions.
03
Causeway Nova
All narrative assets — world-building documentation, character frameworks, plot architecture, the smart city universe lore, and any published or draft story content.
04
The Creative Companion
All curriculum content — module outlines, learning objectives, student-facing frameworks, the seven-stage ascension arc, and immersive experience design documentation.
05
Business Operations
SOPs, entity hierarchy documentation, domain structure, revenue model, decision matrices, and any operational documentation produced during or before the sprint.
06
Session Intelligence
Every significant strategic session, decision log, E-Suite conversation, and creative development record that shaped the ecosystem. This is the institutional memory layer.
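A hypothetical audit script can enforce that structure before the ingestion pipeline ever runs: flag any markdown file sitting in the vault root or outside the six buckets. The function below is a sketch, not part of the sprint tooling — the bucket names are taken from this chapter.

```python
from pathlib import Path

# Pre-ingestion audit sketch (hypothetical helper): every markdown file
# must live under one of the six top-level buckets, and nothing may sit
# in the vault root. Returns the misfiled paths for review.
BUCKETS = {
    "Creative Operations Methodology", "Brand & Voice", "Causeway Nova",
    "The Creative Companion", "Business Operations", "Session Intelligence",
}

def misfiled(vault_dir: str) -> list[str]:
    root = Path(vault_dir)
    return [
        str(md.relative_to(root))
        for md in sorted(root.rglob("*.md"))
        if md.relative_to(root).parts[0] not in BUCKETS
    ]
```

An empty return list means the audit passes; anything else is either a candidate for a new bucket — the conversation Chapter 04 describes — or content that isn't ready for the vault yet.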
05
Chapter 05

Build Checklist

[ ]
Confirm vault hosting decision
Managed Supabase or self-hosted VPS. Lock this before writing a single line of architecture.
[ ]
Audit and organize Obsidian vault into six buckets
Every existing document classified and filed. Nothing left in the root. Naming conventions established.
[ ]
Set up Supabase project and enable pgvector
Database created, pgvector extension enabled, connection credentials documented and stored in GitHub.
[ ]
Configure LlamaIndex ingestion pipeline
Reads from Obsidian vault, chunks by semantic unit, generates embeddings, writes to Supabase. All settings committed to GitHub.
[ ]
Run first full ingestion pass
All six buckets ingested. Embedding count confirmed. No documents skipped or errored.
[ ]
Test retrieval with ten real queries
Queries drawn from actual ecosystem topics — methodology questions, brand voice questions, narrative questions. Results reviewed for relevance and accuracy.
[ ]
Set up automated ingestion trigger
New content added to Obsidian automatically triggers a re-ingestion cycle. The vault stays current without manual intervention.
[ ]
Document the Curator agent handoff protocol
The Curator agent in Lindy will take over maintenance after the sprint. Full documentation of what it monitors, what it flags, and how to resolve common failures.
[ ]
Commit all configuration to GitHub
LlamaIndex config, Supabase schema, ingestion scripts, test query results, and vault architecture documentation. Module 02 is not done until GitHub is current.
Before Leaving This Module
The Vault Must Be Queryable

Module 02 is not complete until the vault returns relevant results on real queries. Not test queries with obvious answers — real questions about the CreativeOps methodology, the Causeway Nova narrative, the Creative Companion curriculum. If the retrieval isn't working accurately, the Reasoning Engine module that follows will be built on a broken foundation. Fix it here. Everything downstream depends on this being right.

End of Module 02
The Vault Is Live.

Module 03 connects the reasoning engine. Claude meets the vault — and everything it produces from this point forward is grounded in four years of your IP.