The vault is built. Now we give it a brain. Claude connects to your IP — and from this point forward, everything it produces is grounded exclusively in four years of your work.
There are two ways to use Claude. Most people use the first way — they open a chat interface, type a question, and get an answer drawn from everything Claude has ever been trained on. The output is competent. It's also generic, because it has no idea who you are or what four years of your creative thinking looks like.
This module sets up the second way. By connecting Claude to the vault through the API, we constrain it deliberately. Claude stops drawing on general knowledge and starts drawing exclusively on retrieved chunks from your IP. Every prompt it receives arrives pre-loaded with the most relevant sections of your methodology, your narrative, your voice. It reasons against that context. Nothing else.
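A minimal sketch of what "pre-loaded with retrieved chunks" can look like in practice. The function name, the instruction wording, and the excerpt texts are illustrative placeholders, not the ecosystem's actual retrieval layer:

```python
# Sketch: assembling a retrieval-grounded prompt. In the real ecosystem the
# chunks come from the vault's retrieval layer; here they are placeholders.

def build_grounded_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Prepend vault excerpts so Claude reasons only against them."""
    context = "\n\n---\n\n".join(retrieved_chunks)
    return (
        "Answer using ONLY the vault excerpts below. "
        "If the excerpts do not cover the question, say so.\n\n"
        f"<vault_excerpts>\n{context}\n</vault_excerpts>\n\n"
        f"Question: {question}"
    )

# Placeholder excerpts standing in for retrieved vault content.
chunks = [
    "Excerpt A: the methodology, phase by phase.",
    "Excerpt B: voice notes on register and vocabulary.",
]
prompt = build_grounded_prompt("Summarize the methodology.", chunks)
```

The constraint lives in the prompt itself: Claude is told to treat the excerpts as the entire universe of available knowledge.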
The output stops sounding like the internet and starts sounding like Cyber Coastlines. That's not a stylistic preference — it's a strategic requirement. The E-Suite needs to think and write from inside the ecosystem, not from outside it.
Claude is not the intelligence. The vault is the intelligence. Claude is the reasoning layer that makes the vault speak. That distinction matters for everything we build from here forward.
The Claude API is how the ecosystem communicates with Claude programmatically — not through a chat interface, but through direct API calls that give full control over what Claude receives, how it reasons, and what it returns. This is not the same as using Claude.ai. The API is infrastructure.
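To make "direct API calls" concrete, here is a sketch using the Anthropic Python SDK. The system prompt, user prompt, and model id are placeholder assumptions (pin whatever model is current), and the network call itself is shown commented out so the sketch runs without an API key:

```python
# Sketch: building a request for the Claude Messages API. Everything Claude
# receives -- model, system prompt, user message, token cap -- is under our
# control, which is the point of using the API instead of a chat interface.

def build_request(system_prompt: str, user_prompt: str) -> dict:
    return {
        "model": "claude-sonnet-4-20250514",  # assumed id; check current models
        "max_tokens": 1024,
        "system": system_prompt,
        "messages": [{"role": "user", "content": user_prompt}],
    }

request = build_request(
    "You are an E-Suite agent. Reason only from the supplied vault excerpts.",
    "Draft this week's brief.",
)

# The actual call, assuming ANTHROPIC_API_KEY is set in the environment:
# import anthropic
# client = anthropic.Anthropic()
# response = client.messages.create(**request)
# print(response.content[0].text)
```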
The critical privacy distinction: Anthropic does not train on API data by default. Content processed through the API — your vault documents, your prompts, your methodology — is not used to train Anthropic's models. Your IP goes in, reasoning comes out, and none of it becomes training data. This is one of the reasons the API was chosen over any consumer-facing AI tool for this ecosystem.

Cost is usage-based — there is no monthly subscription. Budget conservatively at $50–75/month for moderate E-Suite usage. The consultant should architect the retrieval layer efficiently so Claude only receives what it needs. Unnecessary token consumption is an avoidable cost.
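Usage-based pricing makes the budget a simple function of tokens. A back-of-envelope estimate, using assumed per-million-token prices (check Anthropic's current pricing before budgeting):

```python
# Back-of-envelope monthly cost estimate. The prices are illustrative
# placeholders, not quoted rates -- verify against current pricing.

PRICE_IN_PER_MTOK = 3.00    # USD per million input tokens (assumed)
PRICE_OUT_PER_MTOK = 15.00  # USD per million output tokens (assumed)

def monthly_cost(calls_per_day: int, in_tokens: int, out_tokens: int,
                 days: int = 30) -> float:
    """Estimate monthly spend from a daily call pattern."""
    in_total = calls_per_day * in_tokens * days
    out_total = calls_per_day * out_tokens * days
    return (in_total * PRICE_IN_PER_MTOK
            + out_total * PRICE_OUT_PER_MTOK) / 1_000_000

# Hypothetical pattern: 40 calls/day, ~4k retrieved-context tokens in,
# ~800 tokens out per call.
cost = monthly_cost(40, 4_000, 800)  # -> 28.8 (USD) under these assumptions
```

Note where the money goes: at these assumed rates, the retrieved context dominates input cost, which is exactly why the retrieval layer should send Claude only what it needs.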
The E-Suite is the AI collective that runs this ecosystem. Five agents, each with a defined role, each powered by Claude reasoning exclusively against the vault. They are not chatbots. They are operational staff — built during this sprint, running indefinitely after it ends.
Every E-Suite agent runs on a system prompt — a set of instructions that defines its role, its constraints, its voice, and its relationship to the vault. These are not improvised. They are engineered documents that determine the quality of every output the agent produces for the life of the ecosystem.
Each system prompt has four components. First, a role definition — who this agent is and what it exists to do. Second, a vault instruction — an explicit directive that reasoning must be grounded in retrieved vault content only, never general knowledge. Third, a voice calibration — the tone, register, and specific vocabulary of the Cyber Coastlines ecosystem. Fourth, an output format — what the agent returns and how it structures its response.
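The four components can be assembled mechanically, which keeps all five prompts structurally identical. The section headers and wording below are illustrative, not the ecosystem's actual prompts:

```python
# Sketch: composing a system prompt from the four components. Keeping the
# assembly in one function guarantees every agent follows the same skeleton.

def build_system_prompt(role: str, vault_rule: str, voice: str,
                        output_format: str) -> str:
    return "\n\n".join([
        f"# Role\n{role}",
        f"# Vault instruction\n{vault_rule}",
        f"# Voice\n{voice}",
        f"# Output format\n{output_format}",
    ])

# Hypothetical agent definition for illustration.
prompt = build_system_prompt(
    role="You are the E-Suite editor for Cyber Coastlines.",
    vault_rule=("Ground every claim in the retrieved vault excerpts. "
                "Never draw on general knowledge."),
    voice="Direct and concrete, in the Cyber Coastlines register.",
    output_format="Return a headline, a summary, and a bulleted action list.",
)
```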
All five system prompts get written, tested, refined, and committed to GitHub during this module. They are version-controlled like code because they are code — they are the logic layer of your AI staff.
"A well-engineered system prompt is indistinguishable from a well-trained employee. A poorly engineered one produces outputs that sound like they came from somewhere else entirely. Get these right before anything else runs on top of them."
The E-Suite is not functional until every agent produces outputs that unmistakably sound like Cyber Coastlines. Run each agent against questions that have known answers in the vault — then check whether the output references the right frameworks, uses the right vocabulary, and reflects the right philosophy. If an agent sounds generic, the system prompt isn't calibrated yet. Fix it in this module, not after the system is live.
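The check above can be partially automated. A minimal harness, where the questions, expected terms, and stand-in agent are all hypothetical (a real run would call the API through each agent's system prompt):

```python
# Sketch: a calibration check. For each known-answer question, verify the
# agent's output mentions the expected frameworks and vocabulary. A term
# match is a weak signal -- a human still reads for voice -- but a miss is
# a strong signal the system prompt isn't calibrated.

EXPECTED_TERMS = {
    # question -> vocabulary the vault-grounded answer should contain
    "What is the mapping methodology?": ["coastal mapping", "three phases"],
}

def calibration_failures(get_agent_output) -> list:
    failures = []
    for question, terms in EXPECTED_TERMS.items():
        answer = get_agent_output(question).lower()
        missing = [t for t in terms if t not in answer]
        if missing:
            failures.append((question, missing))
    return failures

# Stand-in agent for illustration only.
fake = lambda q: "The coastal mapping methodology runs in three phases."
```

Run it per agent; an empty failure list is necessary but not sufficient — generic-sounding output with the right keywords still fails the human read.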
Module 04 builds the destination — the public-facing home where your audience arrives, reads, and decides to stay.