The repo ships the rules. Veritas proves they held.

AI agents edit code faster than reviewers can track intent. Veritas gives your repo its own map, policy, and evidence layer on top of Kontour Surface — so AI-assisted changes produce a bounded artifact Surface can turn into claims, evidence, freshness, fault lines, and trust reports.

npm install -D @kontourai/veritas

The Problem

AI doesn’t know what matters in your repo. Your codebase has load-bearing files, shared contracts, and surfaces that need different kinds of proof. AI agents treat them all the same. Veritas gives the repo a rule surface so agents know what they’re touching and what evidence that area requires.

Rules live as tribal knowledge. Your team’s hard invariants, strong preferences, and temporary guardrails exist in someone’s head — not in a reviewable, enforceable form. Veritas makes them explicit as repo-local rules with real classification and enforcement levels.

Governance rules have no protection. Even when a team writes its rules down, the governance files themselves can still be edited like any other config unless that surface is treated separately. Veritas models governance as its own surface and makes the integrity gap explicit in the roadmap instead of leaving it as hidden repo folklore.

Reviewers scan the whole diff. When AI changes dozens of files, a human reconstructs intent from a raw diff with no structured summary of what was proven or what passed. Veritas generates agent-readable feedback plus a bounded evidence artifact — what changed, what was affected, what proof ran, which policies held, and the TrustInput projection Surface uses for trust reporting.

No way to know if guidance helped. You can add context files and prompt instructions, but there is no feedback loop measuring whether they actually improved outcomes. Veritas captures local improvement records — acceptance rate, time-to-green, override count, reviewer confidence.

Repo Map Adapter

A typed graph of your codebase that names each surface — source, tests, config, migrations — and specifies what proof that surface requires.

AI knows what it is touching and what evidence that area demands before it touches it.
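As a sketch, one entry in such a map could pair a surface with the proof it demands. Everything here — the surface names, the `requires` field, the glob handling — is a hypothetical illustration, not the actual adapter format:

```typescript
// Hypothetical repo map entry: each surface names the paths it covers
// and the evidence an edit to that area must produce.
type ProofKind = "unit-tests" | "typecheck" | "migration-dry-run";

interface SurfaceEntry {
  surface: "source" | "tests" | "config" | "migrations";
  paths: string[];        // glob-style patterns the surface covers
  requires: ProofKind[];  // proof required before a change to this surface lands
}

const repoMap: SurfaceEntry[] = [
  { surface: "source",     paths: ["src/**"],        requires: ["unit-tests", "typecheck"] },
  { surface: "migrations", paths: ["migrations/**"], requires: ["migration-dry-run"] },
];

// Given a touched file, which proof does this area demand?
// (Naive prefix matching stands in for real glob resolution.)
function requiredProof(file: string, map: SurfaceEntry[]): ProofKind[] {
  const entry = map.find((e) =>
    e.paths.some((p) => file.startsWith(p.replace("/**", "/"))));
  return entry ? entry.requires : [];
}
```

The point of the lookup is that the agent can ask the question before editing, not that reviewers reconstruct it after.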

Rules Policy Pack

Staged rules classified as must-hold invariants, strong preferences, or temporary safety rails — not a flat checklist that ages badly.

Reviewers see which rules applied, which passed, and which were waived — in writing, not memory.
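A staged rule might carry its classification and outcome explicitly, so the reviewer-facing summary falls out of the data. The shape below is an assumption for illustration, not the Veritas policy pack schema:

```typescript
// Hypothetical rule record: classification is explicit, and temporary
// rails carry an expiry so they cannot silently become permanent.
type RuleClass = "must-hold" | "strong-preference" | "temporary-rail";
type RuleResult = "passed" | "failed" | "waived";

interface Rule {
  id: string;
  class: RuleClass;
  description: string;
  expires?: string; // only meaningful for temporary rails
}

interface RuleOutcome {
  rule: Rule;
  result: RuleResult;
  waivedBy?: string; // reviewer who signed off on a waiver, if any
}

// Reviewer summary: which rules applied, which passed, which were waived.
function summarize(outcomes: RuleOutcome[]): Record<RuleResult, number> {
  const counts: Record<RuleResult, number> = { passed: 0, failed: 0, waived: 0 };
  for (const o of outcomes) counts[o.result]++;
  return counts;
}
```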

Evidence Artifacts

A bounded JSON record of what changed, which repo surfaces were affected, what proof ran, which policies passed or failed, and the Surface TrustInput projection.

A reviewer inspects a focused summary, while Surface receives portable claims, evidence, policies, and events.
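The bounded record described above could be typed roughly as follows. Field names (`proofRan`, `trustInput`, and the rest) are illustrative assumptions, not the actual artifact schema:

```typescript
// Hypothetical evidence artifact: bounded by construction, because every
// field is an explicit enumeration rather than a raw diff.
interface EvidenceArtifact {
  changedFiles: string[];
  surfacesAffected: string[];                         // e.g. ["source", "tests"]
  proofRan: { name: string; passed: boolean }[];
  policies: { id: string; result: "passed" | "failed" | "waived" }[];
  trustInput: { claims: string[]; events: string[] }; // projection handed to Surface
}

// One question a reviewer or CI gate asks of the artifact:
// did all proof pass, with no failed policies?
function isGreen(a: EvidenceArtifact): boolean {
  return a.proofRan.every((p) => p.passed) &&
         a.policies.every((p) => p.result !== "failed");
}
```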

Feedback Live Evals

Structured records of whether guidance actually helped: acceptance rate, time-to-green, override frequency, reviewer confidence.

You can tell whether the rules are useful, stale, or actively in the way — before the next sprint, not at the next retrospective.
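A local improvement record might carry exactly the metrics the text names; the record shape and the trend heuristic below are assumptions sketched for illustration:

```typescript
// Hypothetical local improvement record, one per review period.
interface EvalRecord {
  period: string;             // e.g. "2024-W23"
  acceptanceRate: number;     // fraction of agent changes merged as proposed
  timeToGreenMins: number;    // median minutes from first push to passing checks
  overrideCount: number;      // rule waivers issued this period
  reviewerConfidence: number; // self-reported, 1-5
}

// One possible reading: guidance is "in the way" when overrides climb
// while acceptance falls, and "stale" when nothing moves either way.
function guidanceTrend(prev: EvalRecord, curr: EvalRecord): "helping" | "stale" | "in-the-way" {
  if (curr.acceptanceRate > prev.acceptanceRate && curr.overrideCount <= prev.overrideCount) return "helping";
  if (curr.acceptanceRate < prev.acceptanceRate && curr.overrideCount > prev.overrideCount) return "in-the-way";
  return "stale";
}
```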

Before and After

Without Veritas

  • AI agent edits 47 files with no structured guidance surface
  • Reviewer scans the full diff looking for violations they have to know to look for
  • Repo expectations live as tribal memory — undocumented, unenforced, unmeasured
  • Governance files can be weakened like any other config, with no distinct integrity path
  • No way to know whether any guidance you gave the agent actually changed its behavior

With Veritas

  • Repo ships its own map and rules; the agent knows what surfaces it is entering
  • Reviewer inspects a bounded evidence artifact — what changed, what proof ran, what passed
  • Policy results and governance surfaces are explicit in source control, not reconstructed after the fact
  • Live eval record says whether the guidance helped, with numbers

How It Works

Three commands cover the core workflow:

# Bootstrap the adapter, policy pack, and team profile for your repo
npx veritas init

# Emit an evidence artifact for the current working tree
npx veritas report --working-tree

# Run proof, emit lint-style feedback, and draft an eval record in one pass
npx veritas shadow run --working-tree

init writes the starter files to .veritas/ and injects the governance block into AI instruction files. report produces the evidence artifact your CI or PR workflow can post. shadow run adds proof execution, lint-style feedback, and eval drafting on top of that, with no enforcement until you are ready.
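A CI step consuming the artifact from report could look like the sketch below. The artifact path and the `policies` field are assumptions carried over from the hypothetical schema, not the real output format:

```typescript
import { readFileSync } from "node:fs";

// Hypothetical CI helper: load the artifact `veritas report` emitted
// and list any failed policies so the PR annotation can name them.
interface ReportedArtifact {
  policies: { id: string; result: "passed" | "failed" | "waived" }[];
}

function failedPolicies(path: string): string[] {
  const artifact: ReportedArtifact = JSON.parse(readFileSync(path, "utf8"));
  return artifact.policies
    .filter((p) => p.result === "failed")
    .map((p) => p.id);
}
```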

Start Safe

Veritas has a three-phase rollout. You pick the phase. You do not have to flip a switch you are not ready for.

Shadow — rules run but nothing is enforced. Evidence and eval drafts are written locally. This is the observation phase: you learn what is noisy, what is missing, and what matters before you commit to any enforcement shape.

Assist — rules start guiding. Operators can waive individual checks. Evidence is posted to PRs. The team gets used to the artifact before it gates anything.

Enforce — rules that have proven stable block violations in CI. The policy pack says which rules are at which phase, so the enforcement surface is explicit and reviewable.

The .veritas/ directory in your repo is the audit trail for all three phases.
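The per-rule phase assignment the Enforce paragraph describes might look like this in source control. Rule names and the mapping shape are hypothetical:

```typescript
// Hypothetical per-rule phase assignment: because the policy pack records
// the phase, the enforcement surface is explicit and reviewable in git.
type Phase = "shadow" | "assist" | "enforce";

const rulePhases: Record<string, Phase> = {
  "no-direct-db-writes": "enforce", // proven stable, blocks in CI
  "prefer-typed-errors": "assist",  // guides and can be waived
  "new-telemetry-rule":  "shadow",  // observed only, nothing enforced
};

// Only a failure of an enforce-phase rule should gate CI.
function blocksCI(ruleId: string, failed: boolean): boolean {
  return failed && rulePhases[ruleId] === "enforce";
}
```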

Proven on Itself

Veritas runs on its own repository using the same workflow a consumer repo would use. CI runs shadow run on check-ins, posts evidence artifacts to PRs, and tracks health against the live eval records. The .veritas/ directory in this repo is not a demo configuration — it is the actual development workflow.

If self-hosting feels awkward, that is a signal to fix the product surface, not to carve out special behavior for the framework repo.

Learn More