North Star Vision

The long-term product vision and design philosophy behind Conduit.

Conduit's north star is simple to say and annoyingly hard to build:

Make AI tools feel context-aware without turning your life into a prompt.

Most people meet AI through a chat box. That interface is fine - until the work has real context: dozens of documents, evolving decisions, and constraints you can't hand-wave away. Then you end up doing the least-leveraged job imaginable: copy, paste, summarize, re-explain... and repeat next week.

Conduit exists to take that burden off the user, without asking them to trade away privacy or safety.


The thesis

Conduit is designed around three beliefs:

  1. Context is a dependency. If you can't control it, you can't trust outcomes.
  2. Safety is a product feature, not a footnote. Secure defaults must be the easiest path.
  3. "More context" is not the goal. The goal is the right context, at the right time, with a trail you can audit.

That last one matters more than it sounds. Most AI workflows don't fail because the model is weak - they fail because we're feeding it either too little... or a swamp.


Where we are today (and what's real)

Conduit's shipped v1 focuses on the second half of the original vision:

  • A private, local knowledge base you control
  • Exposed to AI tools through MCP (Model Context Protocol), as sketched below
  • With RAG workflows that help avoid context bloat
  • And optional KAG (Knowledge-Augmented Generation) for structured, multi-hop reasoning
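
To make the MCP bullet above concrete, here's a minimal sketch of what exposing a local knowledge base over MCP can look like, using the official MCP Python SDK. The tool name, parameters, and the toy in-memory "index" are invented for illustration; this is a sketch, not Conduit's actual implementation:

```python
# Illustrative sketch only -- not Conduit's actual implementation. Assumes the
# official MCP Python SDK ("mcp" package); the tool name, parameters, and the
# toy in-memory "index" are invented for this example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-knowledge-base")

# Stand-in for a real local index (vector store, BM25, knowledge graph, ...).
DOCS = {
    "notes/decisions.md": "We chose a local embedded index to avoid a server dependency.",
    "docs/constraints.md": "Source files stay on the user's machine unless explicitly shared.",
}

@mcp.tool()
def search_context(query: str, max_results: int = 3) -> list[dict]:
    """Return only the snippets relevant to the query, with their sources."""
    words = query.lower().split()
    hits = [
        {"source": path, "text": text}
        for path, text in DOCS.items()
        if any(word in text.lower() for word in words)
    ]
    return hits[:max_results]

if __name__ == "__main__":
    mcp.run()  # an AI client connects over MCP and calls search_context on demand
```

An MCP-capable client then calls search_context with a narrow question and gets back only the matching snippets and their sources.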

The "connector control plane" vision - discovering, auditing, containerizing, installing, and injecting third-party MCP servers across multiple AI clients - is still the north star. Parts exist conceptually and in design, but it's not the product you should assume is fully delivered yet.

This doc tells you where Conduit is going, and why the architecture is shaped the way it is.


The problem Conduit is actually solving

1) AI tools are stateless

The model doesn't remember your project, your docs, your decisions, or your constraints unless you keep re-feeding them.

2) Uploading everything doesn't scale (and it's risky)

When you throw an entire folder into a cloud AI tool, you get three predictable outcomes:

  • you overshare,
  • you bloat the prompt,
  • and you still don't get the exact slice you needed.

3) Debugging "why the answer is wrong" is painful

RAG failures are often silent: the answer sounds plausible, the citations are weak (or missing), and you're left guessing whether retrieval or reasoning was the culprit.

Conduit approaches this as an engineering problem, not a prompting problem.
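
One way to make that failure mode visible is to return a retrieval trace alongside every answer. The sketch below is illustrative rather than Conduit's API; retrieve() and generate() are placeholders for whatever retriever and model are actually in play:

```python
# A minimal sketch of the trace that makes RAG failures debuggable. Nothing here
# is Conduit-specific: retrieve() and generate() are placeholders for whatever
# retriever and model you actually use.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hit:
    source: str
    score: float
    text: str

def answer_with_trace(question: str,
                      retrieve: Callable[[str], list[Hit]],
                      generate: Callable[[str, list[str]], str]) -> dict:
    hits = retrieve(question)                      # e.g. top-k from a local index
    answer = generate(question, [h.text for h in hits])
    return {
        "answer": answer,
        "citations": [{"source": h.source, "score": round(h.score, 3)} for h in hits],
        # Empty or low-scoring citations point at retrieval;
        # strong citations with a wrong answer point at reasoning.
    }
```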


The north star user experience

A user should be able to:

  1. Point Conduit at sources they trust (docs, notes, repos, exports)
  2. Build a local index (RAG) and optionally a knowledge graph (KAG)
  3. Connect Conduit to one or more AI tools via MCP
  4. Ask questions naturally
  5. Get back exactly what the AI tool asked for - no more, no less - plus traceability

In other words: Conduit becomes an answer engine for AI tools.


The design philosophy: "Query, don't dump"

The core product stance is:

AI tools should query your context, not ingest your world.

That's the difference between:

  • "here's 30 pages of background, good luck" and
  • "give me the 3 exact facts needed to answer this question, with sources"

This stance shows up everywhere:

  • in retrieval defaults,
  • in the MCP interface design,
  • in how we think about privacy,
  • and in why KAG is optional.

KAG is powerful - and intentionally optional

KAG (Knowledge-Augmented Generation) is not the default because it has real costs:

  • it takes longer to build,
  • it consumes more compute/memory/storage,
  • and it only pays off when your domain benefits from structure.

When KAG is worth it

KAG is justifiable when you need structured, multi-hop, auditable reasoning over a well-defined domain - especially when correctness matters more than convenience.

Common signals:

  • Strong structure and stable ontology

    • Your domain naturally fits entities/relations (drugs-conditions-contraindications, contracts-clauses-jurisdictions, services-owners-dependencies).
    • You're willing to invest in a schema/ontology and canonical IDs.
  • Complex, multi-hop, constraint-heavy queries

    • Questions require chaining facts and constraints (joins, filters, temporal/numeric logic).
    • The path matters - not just "semantic similarity".
  • High demands for accuracy, explainability, auditability

    • Regulated or safety-critical settings where hallucinations are unacceptable.
    • You need repeatable reasoning traces you can inspect and test.
  • Entity-centric, factual workloads

    • "What obligations does counterparty X have under contract Y given jurisdiction Z?"
    • Professional-grade factuality beats open-ended chat.
  • You can amortize the cost

    • The same graph powers multiple agents and surfaces (Q&A, analytics, monitoring, rule-checking).
    • You may already have partial structure (catalogs, lineage, product graphs) and KAG builds on it.

If those conditions don't apply, Conduit's RAG workflows usually get you 80%+ of the value with far less overhead.
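
To make the contract example above concrete, here's a toy sketch of the multi-hop, constraint-chaining query a knowledge graph answers naturally. The entities, relations, and triples are invented for illustration; a real KAG build would use a proper ontology, canonical IDs, and a graph store:

```python
# Toy illustration of a multi-hop, constraint-heavy query over a small graph.
# The entities, relations, and triples are invented; a real KAG build would use
# a proper ontology, canonical IDs, and a graph store rather than a Python list.
triples = [
    ("contract:Y", "has_party",  "party:X"),
    ("contract:Y", "has_clause", "clause:7"),
    ("contract:Y", "has_clause", "clause:9"),
    ("clause:7",   "applies_in", "jurisdiction:Z"),
    ("clause:9",   "applies_in", "jurisdiction:Q"),
    ("clause:7",   "obligates",  "party:X"),
    ("clause:7",   "requires",   "obligation:data-breach-notice-72h"),
    ("clause:9",   "requires",   "obligation:annual-audit"),
]

def objects(subject: str, relation: str) -> set[str]:
    return {o for s, r, o in triples if s == subject and r == relation}

def obligations(contract: str, party: str, jurisdiction: str) -> set[str]:
    """Chain constraints: clause of the contract, binds the party, applies in the jurisdiction."""
    found: set[str] = set()
    for clause in objects(contract, "has_clause"):
        if party in objects(clause, "obligates") and jurisdiction in objects(clause, "applies_in"):
            found |= objects(clause, "requires")
    return found

print(obligations("contract:Y", "party:X", "jurisdiction:Z"))
# {'obligation:data-breach-notice-72h'} -- the traversal path doubles as the audit trail
```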

For the deeper KAG workflows, see /docs/kag.


What "secure by default" means here

Security isn't a slogan; it's a set of design constraints:

  • Your sources stay local unless you explicitly choose otherwise
  • MCP makes integration explicit (tools call Conduit, Conduit responds)
  • You can keep sensitive context out of cloud prompts by default
  • You can audit what's being served and revoke sources

For the longer arc (the connector hub), "secure by default" expands further:

  • third-party connectors treated as untrusted
  • container isolation to bound blast radius
  • explicit permission grants for disk/network access

That part is future work; the posture is already set.
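
As a rough illustration of that posture (not Conduit's actual API), "secure by default" in code tends to mean an explicit allow-list plus an audit trail, with everything outside the grant refused:

```python
# Illustrative only: one way the "secure by default" posture shows up in code.
# The config shape and function names are invented for this sketch, not Conduit's API.
from datetime import datetime, timezone
from pathlib import Path

ALLOWED_ROOTS = [Path("~/notes").expanduser()]  # sources the user explicitly granted
AUDIT_LOG: list[dict] = []                      # what was served, to whom, and when

def serve_snippet(requested_path: str, client: str) -> str:
    path = Path(requested_path).expanduser().resolve()
    if not any(path.is_relative_to(root.resolve()) for root in ALLOWED_ROOTS):
        raise PermissionError(f"{path} is outside the granted sources")
    AUDIT_LOG.append({
        "client": client,
        "path": str(path),
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return path.read_text()
```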


How this maps to the rest of the docs

If you're here to use Conduit today:

If you're here to understand the "why":

(And this "Design, Philosophy and Architecture" section will grow over time.)


Success metrics (what "better" means)

Conduit is winning if it reduces:

  • repeated context re-explaining
  • context bloat (and the confusion it creates)
  • oversharing to cloud tools
  • time-to-answer for grounded questions

A practical north-star metric: time-to-grounded answer, measured from question -> cited answer -> verified in sources.


The long-term bet

The bet is that the next generation of "smart" tools won't be the ones with the fanciest model.

They'll be the ones that:

  • can safely reach the right information,
  • can prove where it came from,
  • and can do it without making the user a part-time integrator.

That's what Conduit is trying to become: a calm, local control plane for context.


Community and contributions

Conduit is early, and the sharp edges are real. If you try it and find gaps:

  • file issues,
  • suggest workflows,
  • propose docs improvements,
  • or contribute code.

Start here: /docs/community